Lines Matching full:migration

3  * Memory Migration functionality - linux/mm/migrate.c
7 * Page migration was first developed in the context of the memory hotplug
8 * project. The main authors of the migration code are:
88 * compaction threads can race against page migration functions in isolate_movable_page()
92 * being (wrongly) re-isolated while it is under migration, in isolate_movable_page()
140 * from where they were once taken off for compaction/migration.
181 * Restore a potential migration pte to a working pte entry
205 /* PMD-mapped THP migration entry */ in remove_migration_pte()
219 * Recheck VMA as permissions can change since migration started in remove_migration_pte()
269 * Get rid of all migration entries and replace them by
286 * Something used the pte of a page under migration. We need to
287 * get to the page and wait until migration is finished.
310 * Once page cache replacement of page migration started, page_count in __migration_entry_wait()
672 * Migration functions
725 * async migration. Release the taken locks in buffer_migrate_lock_buffers()
821 * Migration function for pages with buffers. This function can only be used
870 * migration. Writeout may mean we lose the lock and the in writeout()
872 * At this point we know that the migration attempt cannot in writeout()
887 * Default handling if a filesystem does not provide a migration function.
893 /* Only writeback pages in full synchronous migration */ in fallback_migrate_page()
947 * for page migration. in move_to_new_page()
957 * isolation step. In that case, we shouldn't try migration. in move_to_new_page()
1039 * Only in the case of a full synchronous migration is it in __unmap_and_move()
1061 * of migration. File cache pages are no problem because of page_lock() in __unmap_and_move()
1062 * File Caches may use write_page() or lock_page() in migration, then, in __unmap_and_move()
1109 /* Establish migration ptes */ in __unmap_and_move()
1132 * If migration is successful, decrease refcount of the newpage in __unmap_and_move()
1207 * If migration is successful, releases reference grabbed during in unmap_and_move()
1243 * Counterpart of unmap_and_move() for hugepage migration.
1246 * because there is no race between I/O and migration for hugepage.
1254 * hugepage migration fails without data corruption.
1256 * There is also no race when direct I/O is issued on the page under migration,
1257 * because then pte is replaced with migration swap entry and direct I/O code
1258 * will wait in the page fault for migration to complete.
1273 * This check is necessary because some callers of hugepage migration in unmap_and_move_huge_page()
1276 * kicking migration. in unmap_and_move_huge_page()
1368 * If migration was not successful and there's a freeing callback, use in unmap_and_move_huge_page()
1382 * supplied as the target for the page migration
1386 * as the target of the page migration.
1387 * @put_new_page: The function used to free target pages if migration
1390 * @mode: The migration mode that specifies the constraints for
1391 * page migration, if any.
1392 * @reason: The reason for page migration.
1433 * during migration. in migrate_pages()
1451 * THP migration might be unsupported or the in migrate_pages()
1496 * removed from migration page list and not in migrate_pages()
1552 * clear __GFP_RECLAIM to make the migration callback in alloc_migration_target()
1759 /* The page is successfully queued for migration */ in do_pages_move()
1973 * Returns true if this is a safe migration target node for misplaced NUMA
2027 * migrate_misplaced_transhuge_page() skips page migration's usual in numamigrate_isolate_page()
2029 * has been isolated: a GUP pin, or any other pin, prevents migration. in numamigrate_isolate_page()
2045 * disappearing underneath us during migration. in numamigrate_isolate_page()
2143 /* Prepare a page as a migration target */ in migrate_misplaced_transhuge_page()
2408 * any kind of migration. Side effect is that it "freezes" the in migrate_vma_collect_pmd()
2421 * set up a special migration page table entry now. in migrate_vma_collect_pmd()
2429 /* Setup special migration page table entry */ in migrate_vma_collect_pmd()
2480 * @migrate: migrate struct containing all migration information
2512 * migrate_page_move_mapping(), except that here we allow migration of a
2536 * GUP will fail for those. Yet if there is a pending migration in migrate_vma_check_page()
2537 * a thread might try to wait on the pte migration entry and in migrate_vma_check_page()
2539 * differentiate a regular pin from migration wait. Hence to in migrate_vma_check_page()
2541 * infinite loop (one stopping migration because the other is in migrate_vma_check_page()
2542 * waiting on pte migration entry). We always return true here. in migrate_vma_check_page()
2562 * @migrate: migrate struct containing all migration information
2588 * a deadlock between 2 concurrent migrations where each in migrate_vma_prepare()
2669 * migrate_vma_unmap() - replace page mapping with special migration pte entry
2670 * @migrate: migrate struct containing all migration information
2672 * Replace page mapping (CPU page table pte) with a special migration pte entry
2728 * @args: contains the vma, start, and pfns arrays for the migration
2749 * with MIGRATE_PFN_MIGRATE flag in src array unless this is a migration from
2759 * properly set the destination entry like for regular migration. Note that
2761 * migration was successful for those entries after calling migrate_vma_pages()
2762 * just like for regular migration.
2972 * @migrate: migrate struct containing all migration information
2975 * struct page. This effectively finishes the migration from source page to the
3055 * @migrate: migrate struct containing all migration information
3057 * This replaces the special migration pte entry with either a mapping to the
3058 * new page if migration was successful for that page, or to the original page