Lines Matching full:areas

  11  * The percpu allocator handles both static and dynamic areas.  Percpu
  12  * areas are allocated in chunks which are divided into units.  There is
 171  /* chunks which need their map areas extended, protected by pcpu_lock */
 386  * pcpu_next_fit_region - finds fit areas for a given allocation request
 451  * Metadata free area iterators.  These perform aggregation of free areas
 749  /* iterate over free areas and update the contig hints */  in pcpu_block_refresh_hint()
1052  * skip over blocks and chunks that have valid free areas.
1110  * free areas, smaller allocations will eventually fill those holes.
1837  /* clear the areas and return address relative to base address */  in pcpu_alloc()
1954  * areas can be scarce.  Destroy all free chunks except for one.  in __pcpu_balance_workfn()
2157  * static percpu areas are not considered.  For those, use
2366  * static areas on architectures where the addressing model has
2379  * for vm areas.
2386  * percpu areas.  Units which should be colocated are put into the
2387  * same group.  Dynamic VM areas will be allocated according to these
2842  void **areas = NULL;  in pcpu_embed_first_chunk() local
2856  areas = memblock_alloc(areas_size, SMP_CACHE_BYTES);  in pcpu_embed_first_chunk()
2857  if (!areas) {  in pcpu_embed_first_chunk()
2881  areas[group] = ptr;  in pcpu_embed_first_chunk()
2884  if (ptr > areas[highest_group])  in pcpu_embed_first_chunk()
2887  max_distance = areas[highest_group] - base;  in pcpu_embed_first_chunk()
2908  void *ptr = areas[group];  in pcpu_embed_first_chunk()
2924  ai->groups[group].base_offset = areas[group] - base;  in pcpu_embed_first_chunk()
2936  if (areas[group])  in pcpu_embed_first_chunk()
2937  free_fn(areas[group],  in pcpu_embed_first_chunk()
2941  if (areas)  in pcpu_embed_first_chunk()
2942  memblock_free_early(__pa(areas), areas_size);  in pcpu_embed_first_chunk()
3111  panic("Failed to initialize percpu areas.");  in setup_per_cpu_areas()
3139  panic("Failed to allocate memory for percpu areas.");  in setup_per_cpu_areas()