Lines Matching full:we

84 	 * Pass log block 0 since we don't have an addr yet, buffer will be in xlog_alloc_buffer()
94 * We do log I/O in units of log sectors (a power-of-2 multiple of the in xlog_alloc_buffer()
95 * basic block size), so we round up the requested size to accommodate in xlog_alloc_buffer()
103 * blocks (sector size 1). But otherwise we extend the buffer by one in xlog_alloc_buffer()
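The round-up described above can be sketched as follows, assuming the log sector size in basic blocks (sectbb below) is a power of two; the helper name is illustrative, not XFS's.

    #include <stdint.h>

    /* Round a basic-block count up to a whole number of log sectors.
     * Assumes sectbb is a power of two. */
    static uint32_t round_up_sectors(uint32_t nbblks, uint32_t sectbb)
    {
        return (nbblks + sectbb - 1) & ~(sectbb - 1);
    }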
255 * h_fs_uuid is null, we assume this log was last mounted in xlog_header_check_mount()
334 * range of basic blocks we'll be examining. If that fails, in xlog_find_verify_cycle()
335 * try a smaller size. We need to be able to read at least in xlog_find_verify_cycle()
336 * a log sector, or we're out of luck. in xlog_find_verify_cycle()
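A minimal sketch of the fallback loop described above: halve the request until the allocation succeeds, giving up once it would drop below one log sector. The names are illustrative.

    #include <stdlib.h>

    /* Allocate a scan buffer of *bufblks basic blocks of bbsize bytes,
     * shrinking the request on failure but never below sectbb blocks. */
    static char *alloc_scan_buffer(size_t *bufblks, size_t sectbb, size_t bbsize)
    {
        char *buf;

        while (!(buf = malloc(*bufblks * bbsize))) {
            *bufblks >>= 1;
            if (*bufblks < sectbb)
                return NULL;    /* can't even read one log sector */
        }
        return buf;
    }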
391 * a good log record. Therefore, we subtract one to get the block number
393 * of blocks we would have read on a previous read. This happens when the
456 * We hit the beginning of the physical log & still no header. Return in xlog_find_verify_log_record()
466 * We have the final block of the good log (the first block in xlog_find_verify_log_record()
467 * of the log record _before_ the head). So we check the uuid. in xlog_find_verify_log_record()
473 * We may have found a log record header before we expected one. in xlog_find_verify_log_record()
474 * last_blk will be the 1st block # with a given cycle #. We may end in xlog_find_verify_log_record()
475 * up reading an entire log record. In this case, we don't want to in xlog_find_verify_log_record()
477 * record do we update last_blk. in xlog_find_verify_log_record()
493 * eliminated when calculating the head. We aren't guaranteed that previous
494 * LRs have complete transactions. We only know that a cycle number of
495 * current cycle number -1 won't be present in the log if we start writing
529 * log so we can store the uuid in there in xlog_find_head()
561 * we set it to log_bbnum, which is an invalid block number, but this in xlog_find_head()
569 * In this case we believe that the entire log should have in xlog_find_head()
570 * cycle number last_half_cycle. We need to scan backwards in xlog_find_head()
572 * containing last_half_cycle - 1. If we find such a hole, in xlog_find_head()
587 * In the 256k log case, we will read from the beginning to the in xlog_find_head()
589 * We don't worry about the x+1 blocks that we encounter, in xlog_find_head()
590 * because we know that they cannot be the head since the log in xlog_find_head()
597 * In this case we want to find the first block with cycle in xlog_find_head()
598 * number matching last_half_cycle. We expect the log to be in xlog_find_head()
602 * be where the new head belongs. First we do a binary search in xlog_find_head()
604 * search may not be totally accurate, so then we scan back in xlog_find_head()
607 * the log, then we look for occurrences of last_half_cycle - 1 in xlog_find_head()
608 * at the end of the log. The cases we're looking for look in xlog_find_head()
612 * ^ but we want to locate this spot in xlog_find_head()
616 * ^ we want to locate this spot in xlog_find_head()
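The binary-search step described above can be sketched like this, assuming a single transition from cycle want - 1 to cycle want exists in the range; read_cycle() is a hypothetical stand-in for reading a block's h_cycle from disk.

    #include <stdint.h>

    extern uint32_t read_cycle(int64_t blk);  /* stand-in for block I/O */

    /* Return the first block in [first, last] stamped with cycle 'want',
     * assuming earlier blocks carry 'want - 1'. */
    static int64_t find_cycle_start(int64_t first, int64_t last, uint32_t want)
    {
        while (first < last) {
            int64_t mid = first + (last - first) / 2;

            if (read_cycle(mid) == want)
                last = mid;        /* transition at or before mid */
            else
                first = mid + 1;   /* transition after mid */
        }
        return last;
    }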
630 * we actually look at the block size of the filesystem. in xlog_find_head()
635 * We are guaranteed that the entire check can be performed in xlog_find_head()
647 * We are going to scan backwards in the log in two parts. in xlog_find_head()
648 * First we scan the physical end of the log. In this part in xlog_find_head()
649 * of the log, we are looking for blocks with cycle number in xlog_find_head()
651 * If we find one, then we know that the log starts there, as in xlog_find_head()
652 * we've found a hole that didn't get written in going around in xlog_find_head()
657 * last_half_cycle, then we check the blocks at the start of in xlog_find_head()
658 * the log looking for occurrences of last_half_cycle. If we in xlog_find_head()
660 * first occurrence of last_half_cycle is wrong and we move in xlog_find_head()
661 * back to the hole we've found. This case looks like in xlog_find_head()
664 * Another case we need to handle that only occurs in 256k in xlog_find_head()
669 * x + 1 blocks. We need to skip past those since that is in xlog_find_head()
671 * last_half_cycle-1 we accomplish that. in xlog_find_head()
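The first part of that backwards scan might look like the following sketch: walk back from the physical end of the log looking for a block still stamped with last_half_cycle - 1; if one is found, the head lies just after it. read_cycle() is the same hypothetical helper as above.

    #include <stdint.h>

    extern uint32_t read_cycle(int64_t blk);  /* stand-in for block I/O */

    /* Scan [start, end] backwards for the last block with 'stop_cycle';
     * returns the block after it, or 'start' if no hole was found. */
    static int64_t scan_back_for_hole(int64_t start, int64_t end,
                                      uint32_t stop_cycle)
    {
        for (int64_t blk = end; blk >= start; blk--) {
            if (read_cycle(blk) == stop_cycle)
                return blk + 1;
        }
        return start;
    }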
702 * Now we need to make sure head_blk is not pointing to a block in in xlog_find_head()
722 /* We hit the beginning of the log during our search */ in xlog_find_head()
746 * When returning here, we have a good block number. Bad block in xlog_find_head()
747 * means that during a previous crash, we didn't have a clean break in xlog_find_head()
748 * from cycle number N to cycle number N-1. In this case, we need in xlog_find_head()
763 * Given a starting log block, walk backwards until we find the provided number
788 * Walk backwards from the head block until we hit the tail or the first in xlog_rseek_logrec_hdr()
806 * If we haven't hit the tail block or the log record header count, in xlog_rseek_logrec_hdr()
836 * Given head and tail blocks, walk forward from the tail block until we find
862 * Walk forward from the tail block until we hit the head or the last in xlog_seek_logrec_hdr()
880 * If we haven't hit the head block or the log record header count, in xlog_seek_logrec_hdr()
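Both seek helpers recognise a log record header by its magic number. XLOG_HEADER_MAGIC_NUM below is the real on-disk XFS value; the surrounding code is only a sketch.

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>      /* ntohl(): on-disk data is big-endian */

    #define XLOG_HEADER_MAGIC_NUM 0xFEEDbabe

    /* h_magicno is the first field of a log record header. */
    static int is_logrec_hdr(const void *bb)
    {
        uint32_t magic;

        memcpy(&magic, bb, sizeof(magic));  /* avoid alignment traps */
        return ntohl(magic) == XLOG_HEADER_MAGIC_NUM;
    }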
926 * We also have to handle the case where the tail was pinned and the head
932 * recovery because we have no way to know the tail was updated if the
971 * Run a CRC check from the tail to the head. We can't just check in xlog_verify_tail()
1017 * CRC verification. While we can't always be certain that CRC verification
1018 * failure is due to a torn write vs. an unrelated corruption, we do know that
1045 * head until we hit the tail or the maximum number of log record I/Os in xlog_verify_head()
1047 * we don't trash the rhead/buffer pointers from the caller. in xlog_verify_head()
1068 * We've hit a potential torn write. Reset the error and warn in xlog_verify_head()
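The per-record check inside that walk reduces to recomputing the CRC and comparing it with the value stored in the header, as in this sketch; crc32c() is a hypothetical stand-in for the kernel's implementation, and the real code excludes the stored CRC field from the calculation.

    #include <stddef.h>
    #include <stdint.h>

    extern uint32_t crc32c(uint32_t seed, const void *buf, size_t len);

    /* A mismatch here may be a torn write rather than real corruption,
     * which is why the caller only warns and trims the head. */
    static int record_crc_ok(const void *rec, size_t len, uint32_t stored)
    {
        return crc32c(~0U, rec, len) == stored;
    }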
1115 * We need to make sure we handle log wrapping properly, so we can't use the
1119 * The log is limited to 32-bit sizes, so we use the appropriate modulus
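A sketch of that wrap-safe arithmetic, with all positions taken modulo the physical log size (which fits in 32 bits); names are illustrative.

    #include <stdint.h>

    /* Advance 'dist' blocks from 'blk', wrapping at the log end. */
    static uint32_t log_advance(uint32_t blk, uint32_t dist, uint32_t size)
    {
        return (blk + dist) % size;
    }

    /* Forward distance from 'tail' to 'head', accounting for wrap. */
    static uint32_t log_distance(uint32_t tail, uint32_t head, uint32_t size)
    {
        return head >= tail ? head - tail : size - (tail - head);
    }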
1158 * Look for unmount record. If we find it, then we know there was a in xlog_check_unmount_rec()
1160 * log, we convert to a log block before comparing to the head_blk. in xlog_check_unmount_rec()
1163 * below. We won't want to clear the unmount record if there is one, so in xlog_check_unmount_rec()
1164 * we pass the lsn of the unmount record rather than the block after it. in xlog_check_unmount_rec()
1206 * Reset log values according to the state of the log when we in xlog_set_state()
1207 * crashed. In the case where head_blk == 0, we bump curr_cycle in xlog_set_state()
1210 * point we have guaranteed that all partial log records have been in xlog_set_state()
1211 * accounted for. Therefore, we know that the last good log record in xlog_set_state()
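The head_blk == 0 case mentioned above amounts to the following sketch: landing exactly on block zero means the next write begins a new pass over the log, so the current cycle is bumped. Structure and names are illustrative.

    #include <stdint.h>

    struct log_state { uint32_t curr_cycle, curr_block; };

    static void set_state(struct log_state *s, uint32_t head_blk,
                          uint32_t head_cycle)
    {
        s->curr_block = head_blk;
        s->curr_cycle = head_cycle;
        if (head_blk == 0)
            s->curr_cycle++;    /* wrapped: next write starts a new cycle */
    }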
1238 * lsn. The entire log record does not need to be valid. We only care
1241 * We could speed up search by using current head_blk buffer, but it is not
1284 * seriously wrong if we can't find it. in xlog_find_tail()
1313 * Verify the log head if the log is not clean (e.g., we have anything in xlog_find_tail()
1318 * Note that we can only run CRC verification when the log is dirty in xlog_find_tail()
1344 * Note that the unmount was clean. If the unmount was not clean, we in xlog_find_tail()
1346 * headers if we have a filesystem using non-persistent counters. in xlog_find_tail()
1354 * because we allow multiple outstanding log writes concurrently, in xlog_find_tail()
1357 * We use the lsn from before modifying it so that we'll never in xlog_find_tail()
1360 * Do this only if we are going to recover the filesystem in xlog_find_tail()
1363 * However, on Linux, we can and do recover a read-only filesystem. in xlog_find_tail()
1364 * We only skip recovery if NORECOVERY is specified on mount, in xlog_find_tail()
1365 * in which case we would not be here. in xlog_find_tail()
1368 * We can't recover this device anyway, so it won't matter. in xlog_find_tail()
1386 * the X blocks. This will cut down on the number of reads we need to do.
1437 /* we have a partially zeroed log */ in xlog_find_zeroed()
1446 * we scan over the defined maximum blocks. At this point, the maximum in xlog_find_zeroed()
1457 * We search for any instances of cycle number 0 that occur before in xlog_find_zeroed()
1458 * our current estimate of the head. What we're trying to detect is in xlog_find_zeroed()
1469 * Potentially backup over partial log record write. We don't need in xlog_find_zeroed()
1470 * to search the end of the log because we know it is zero. in xlog_find_zeroed()
1534 * a smaller size. We need to be able to write at least a in xlog_write_log_records()
1535 * log sector, or we're out of luck. in xlog_write_log_records()
1546 /* We may need to do a read at the start to fill in part of in xlog_write_log_records()
1565 /* We may need to do a read at the end to fill in part of in xlog_write_log_records()
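The alignment logic those two reads support can be sketched as below: round the range out to sector boundaries and flag the edges that were only partially covered, since those sectors must be read before being rewritten. Assumes sectbb is a power of two; names are illustrative.

    #include <stdint.h>

    struct span {
        uint32_t start, end;        /* sector-aligned range to write */
        int read_head, read_tail;   /* partial edge sectors to read first */
    };

    static struct span align_to_sectors(uint32_t start, uint32_t end,
                                        uint32_t sectbb)
    {
        struct span s;

        s.start = start & ~(sectbb - 1);            /* round down */
        s.end = (end + sectbb - 1) & ~(sectbb - 1); /* round up */
        s.read_head = s.start != start;
        s.read_tail = s.end != end;
        return s;
    }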
1598 * in front of the log head. We do this so that we won't become confused
1599 * if we come up, write only a little bit more, and then crash again.
1600 * If we leave the partial log records out there, this situation could
1602 * have the current cycle number. We get rid of them by overwriting them
1607 * the log so that we will not write over the unmount record after a
1609 * any valid log records in it until a new one was written. If we crashed
1610 * during that time we would not be able to recover.
1630 * and the tail. We want to write over any blocks beyond the in xlog_clear_stale_blocks()
1631 * head that we may have written just before the crash, but in xlog_clear_stale_blocks()
1632 * we don't want to overwrite the tail of the log. in xlog_clear_stale_blocks()
1661 * If the head is right up against the tail, we can't clear in xlog_clear_stale_blocks()
1672 * we could have and the distance to the tail to clear out. in xlog_clear_stale_blocks()
1673 * We take the smaller so that we don't overwrite the tail and in xlog_clear_stale_blocks()
1674 * we don't waste all day writing from the head to the tail in xlog_clear_stale_blocks()
1681 * We can stomp all the blocks we need to without in xlog_clear_stale_blocks()
1694 * We need to wrap around the end of the physical log in in xlog_clear_stale_blocks()
1710 * This writes the remainder of the blocks we want to clear. in xlog_clear_stale_blocks()
1711 * It uses the current cycle number since we're now on the in xlog_clear_stale_blocks()
1712 * same cycle as the head so that we get: in xlog_clear_stale_blocks()
1714 * ^^^^^ blocks we're writing in xlog_clear_stale_blocks()
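The distance computation behind all of this can be sketched as follows: clear at most max_dist blocks past the head, but never into the tail, wrapping at the physical log end. Assumes head != tail; names are illustrative.

    #include <stdint.h>

    static uint32_t blocks_to_clear(uint32_t head, uint32_t tail,
                                    uint32_t max_dist, uint32_t size)
    {
        /* forward distance from head to tail, with wrap */
        uint32_t to_tail = tail > head ? tail - head : size - head + tail;

        /* take the smaller so the tail is never overwritten */
        return max_dist < to_tail ? max_dist : to_tail;
    }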
1805 * there's nothing to replay from them so we can simply cull them
1806 * from the transaction. However, we can't do that until after we've
1819 * in a "free" state before we remove the unlinked inode list pointer.
1824 * But there's a problem with that - we can't tell an inode allocation buffer
1825 * apart from a regular buffer, so we can't separate them. We can, however,
1826 * tell an inode unlink buffer from the others, and so we can separate them out
1835 * Note that we add objects to the tail of the lists so that first-to-last
1837 * list means when we traverse from the head we walk them in last-to-first
1839 * but for all other items there may be specific ordering that we need to
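The tail-insertion rule described above is the usual queue idiom, sketched here in plain C: appending at the tail keeps a head-to-tail traversal in first-added-to-last-added (replay) order.

    #include <stddef.h>

    struct item { struct item *next; };
    struct queue { struct item *head, **tail; };

    static void queue_init(struct queue *q)
    {
        q->head = NULL;
        q->tail = &q->head;
    }

    static void enqueue(struct queue *q, struct item *it)
    {
        it->next = NULL;
        *q->tail = it;
        q->tail = &it->next;
    }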
2080 * This works because all regions must be 32-bit aligned. Therefore, we
2081 * either have both fields or we have neither field. In the case we have
2082 * neither field, the data part of the region is zero length. We only have
2084 * later. If we have at least 4 bytes, then we can determine how many regions
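The 4-byte threshold mentioned above reduces to a sketch like this: the count cannot be read until at least one aligned 32-bit field has arrived. Names are illustrative.

    #include <stdint.h>
    #include <string.h>

    /* Returns 0 and fills *count once at least 4 bytes are available. */
    static int region_count(const char *buf, unsigned int len, uint32_t *count)
    {
        if (len < sizeof(uint32_t))
            return -1;              /* not enough data yet */
        memcpy(count, buf, sizeof(*count));
        return 0;
    }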
2101 /* we need to catch log corruptions here */ in xlog_recover_add_to_trans()
2117 * records. If we don't have the whole thing here, copy what we in xlog_recover_add_to_trans()
2224 * Callees must not free the trans structure. We'll decide if we need to in xlog_recovery_process_trans()
2239 /* success or fail, we are now done with this transaction. */ in xlog_recovery_process_trans()
2265 * Either way, return what we found during the lookup - an existing transaction
2286 * skip over non-start transaction headers - we could be in xlog_recover_ophdr_to_trans()
2327 /* Do we understand who wrote this op? */ in xlog_recover_process_ophdr()
2353 * The recovered buffer queue is drained only once we know that all in xlog_recover_process_ophdr()
2366 * In other words, we are allowed to submit a buffer from log recovery in xlog_recover_process_ophdr()
2367 * once per current LSN. Otherwise, we may incorrectly skip recovery in xlog_recover_process_ophdr()
2370 * We don't know up front whether buffers are updated multiple times per in xlog_recover_process_ophdr()
2389 * transaction structure is in a normal state. We have either seen the
2390 * start of the transaction or the last operation we added was not a partial
2391 * operation. If the last operation we added to the transaction was a
2392 * partial operation, we need to mark r_state with XLOG_WAS_CONT_TRANS.
2413 /* check the log format matches our own - else we can't recover */ in xlog_recover_process_data()
2453 * to regrant every roll so that we can make forward progress in xlog_finish_defer_ops()
2468 * Transfer to this new transaction all the dfops we captured in xlog_finish_defer_ops()
2503 * corresponding log done items should be in the AIL. What we do now
2506 * Since we process the log intent items in normal transactions, they
2508 * from just walking down the list processing each one. We'll use a
2509 * flag in the intent item to skip those that we've already processed
2513 * When we start, we know that the intents are the only things in the
2514 * AIL. As we process them, however, other items are added to the
2539 * We're done when we see something other than an intent. in xlog_recover_process_intents()
2551 * We should never see a redo item with a LSN higher than in xlog_recover_process_intents()
2552 * the last transaction we found in the log at the start in xlog_recover_process_intents()
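The walk described above can be sketched with a plain list: keep processing from the front while the items are intents, and stop at the first non-intent, since recovery queued all intents ahead of anything else. Types and helpers are illustrative.

    #include <stdbool.h>

    struct ail_item {
        struct ail_item *next;
        bool is_intent;
    };

    extern int recover_intent(struct ail_item *item);   /* stand-in */

    static int process_intents(struct ail_item *head)
    {
        for (struct ail_item *it = head; it; it = it->next) {
            if (!it->is_intent)
                break;              /* past the recovered intents: done */
            int error = recover_intent(it);
            if (error)
                return error;
        }
        return 0;
    }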
2586 * A cancel occurs when the mount has failed and we're bailing out.
2602 * We're done when we see something other than an intent. in xlog_recover_cancel_intents()
2712 * We can't read in the inode this bucket points to, or this inode in xlog_recover_process_one_iunlink()
2713 * is messed up. Just ditch this bucket of inodes. We will lose in xlog_recover_process_one_iunlink()
2714 * some inodes and space, but at least we won't hang. in xlog_recover_process_one_iunlink()
2726 * This is called during recovery to process any inodes which we unlinked but
2728 * AGI blocks. What we do here is scan all the AGIs and fully truncate and free
2733 * If everything we touch in the agi processing loop is already in memory, this
2736 * until we either run out of inodes to process, run low on memory, or we run out
2742 * space. Hence we need to yield the CPU when there is other kernel work
2769 * We should probably mark the filesystem as corrupt in xlog_recover_process_iunlinks()
2770 * after we've recovered all the AGs we can... in xlog_recover_process_iunlinks()
2777 * Because we are not racing with anyone else here for the AGI in xlog_recover_process_iunlinks()
2778 * buffer, we don't even need to hold it locked to read the in xlog_recover_process_iunlinks()
2779 * initial unlinked bucket entries out of the buffer. We keep in xlog_recover_process_iunlinks()
2781 * while we need the buffer. in xlog_recover_process_iunlinks()
2843 * sets old_crc to 0 so we must consider this valid even on v5 supers. in xlog_recover_process()
2854 * We're in the normal recovery path. Issue a warning if and only if the in xlog_recover_process()
2976 * h_size (iclog size) is hardcoded to 32k. Now that we in xlog_do_recovery_pass()
3024 * we can't do a sequential recovery. in xlog_do_recovery_pass()
3056 * - we increased the buffer size originally in xlog_do_recovery_pass()
3061 * - we read the log end (LR header start) in xlog_do_recovery_pass()
3118 * - we increased the buffer size originally in xlog_do_recovery_pass()
3123 * - we read the log end (LR header start) in xlog_do_recovery_pass()
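The split read described above might look like this sketch: when a record straddles the physical end of the log, read up to the end, then continue from block 0. read_bbs() is a hypothetical I/O helper; 512 is the XFS basic block size.

    #include <stdint.h>

    extern int read_bbs(int64_t blk, int64_t nbblks, char *buf);

    static int read_wrapped(int64_t blk, int64_t nbblks, int64_t log_bbs,
                            char *buf)
    {
        if (blk + nbblks <= log_bbs)
            return read_bbs(blk, nbblks, buf);      /* no wrap */

        int64_t split = log_bbs - blk;              /* blocks up to the end */
        int error = read_bbs(blk, split, buf);
        if (error)
            return error;
        /* the remainder wraps to the start of the physical log */
        return read_bbs(0, nbblks - split, buf + split * 512);
    }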
3207 * Do the recovery of the log. We actually do this in two phases.
3216 * and freed at this level, since only here do we know when all of
3297 * We now update the tail_lsn since much of the recovery has completed in xlog_do_recover()
3299 * or iunlinks, we can free up the entire log and set the tail_lsn to in xlog_do_recover()
3302 * or iunlinks they will have some entries in the AIL; so we look at in xlog_do_recover()
3308 * Now that we've finished replaying all buffer and inode updates, in xlog_do_recover()
3377 * called), we just go ahead and recover. We do this all in xlog_recover()
3378 * under the vfs layer, so we can get away with it unless in xlog_recover()
3379 * the device itself is read-only, in which case we fail. in xlog_recover()
3386 * Version 5 superblock log feature mask validation. We know the in xlog_recover()
3388 * in what we need to recover. If there are unknown features in xlog_recover()
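That gate reduces to a mask test, sketched here with an illustrative mask value: any incompat-log bit this code does not know about means the log contents may be unparsable, so recovery must be refused.

    #include <stdint.h>

    #define LOG_INCOMPAT_KNOWN 0x1      /* illustrative, not XFS's mask */

    static int can_recover(uint32_t sb_log_incompat)
    {
        return (sb_log_incompat & ~LOG_INCOMPAT_KNOWN) == 0;
    }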
3429 * In the first part of recovery we replay inodes and buffers and build
3431 * we process the extent free items and clean up the on-disk unlinked
3434 * between the two stages. This is necessary so that we can free space
3442 * Now we're ready to do the transactions needed for the in xlog_recover_finish()
3445 * lists. At this point, we essentially run in normal mode in xlog_recover_finish()
3446 * except that we're still performing recovery actions in xlog_recover_finish()
3455 * we don't leave them pinned in the AIL. This can in xlog_recover_finish()
3458 * this) before we get around to xfs_log_mount_cancel. in xlog_recover_finish()