other isolated compute and memory units. Each node typically has its own local
memory, and CPUs within a node can access this memory with lower latency than

usually located in the memory of a single node. This approach introduces two key

centralized allocation becomes a bottleneck. The memory capacity of a single

limits scalability in systems where each node has limited local memory.

memory. From bottom to top: `.text`, `.rodata`, `.data`,

the top. The memory extends from the local memory start address at

*Figure: Typical BL31/BL32 binary storage in local memory*
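A quick way to relate this layout to the running image is to compare the linker-exported section boundaries against the node's local memory window. The sketch below is a minimal illustration under stated assumptions: the symbol names (`__TEXT_START__`, `__RODATA_START__`, `__DATA_START__`, `__BSS_END__`) follow common BL31 linker-script conventions, and `NODE0_LOCAL_MEM_BASE` is a hypothetical placeholder for the local memory start address.

```c
/*
 * Minimal sketch: sanity-check the bottom-to-top section ordering of the
 * BL31 image inside a node's local memory. The linker symbol names and
 * NODE0_LOCAL_MEM_BASE below are illustrative assumptions; the real names
 * come from the platform's linker script and memory map.
 */
#include <assert.h>
#include <stdint.h>

extern char __TEXT_START__[], __RODATA_START__[], __DATA_START__[], __BSS_END__[];

#define NODE0_LOCAL_MEM_BASE	0x80000000ULL	/* hypothetical local memory start address */

void check_bl31_local_mem_layout(void)
{
	/* .text sits at the bottom, i.e. at the local memory start address. */
	assert((uintptr_t)__TEXT_START__ == NODE0_LOCAL_MEM_BASE);

	/* Sections are laid out bottom to top: .text, .rodata, .data, ... */
	assert((uintptr_t)__TEXT_START__ < (uintptr_t)__RODATA_START__);
	assert((uintptr_t)__RODATA_START__ < (uintptr_t)__DATA_START__);
	assert((uintptr_t)__DATA_START__ < (uintptr_t)__BSS_END__);
}
```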
- **Non-Uniform Memory Access (NUMA) Latency:** In multi-node systems, memory

the local memory of each NUMA node. The figure below illustrates how per-CPU
objects are allocated in the local memory of their respective nodes.

:alt: Diagram comparing the TF-A BL31 memory layout with NUMA disabled versus
NUMA enabled. When NUMA is disabled, Node 0 contains a local memory

per-CPU memory regions.

*Figure: BL31/BL32 binary storage in the local memory of each node when per-CPU NUMA
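As a rough sketch of the per-node placement shown above, the code below reserves one per-CPU region in each node's local memory and resolves a CPU's object inside its own node's region. All names here (`plat_node_percpu_region_base()`, `PLAT_MAX_CPUS_PER_NODE`, `struct percpu_data`) are assumptions for illustration, not the actual TF-A interfaces.

```c
/*
 * Illustrative per-CPU placement in node-local memory, assuming per-CPU
 * NUMA support is enabled. Every node reserves one region in its own local
 * memory; each CPU indexes a slot within its node's region so it only
 * touches memory attached to its own node.
 */
#include <assert.h>
#include <stdint.h>

#define PLAT_MAX_CPUS_PER_NODE	8U	/* hypothetical platform constant */

struct percpu_data {
	uint64_t context[32];		/* example per-CPU state */
};

/* Hypothetical platform hook returning the base of a node's per-CPU region. */
extern uintptr_t plat_node_percpu_region_base(unsigned int node_id);

struct percpu_data *percpu_data_get(unsigned int node_id, unsigned int cpu_in_node)
{
	uintptr_t base = plat_node_percpu_region_base(node_id);

	assert(cpu_in_node < PLAT_MAX_CPUS_PER_NODE);

	return &((struct percpu_data *)base)[cpu_in_node];
}
```

The design point is simply that a CPU's frequently used control data lives in memory attached to its own node, so accesses stay local instead of crossing the interconnect.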
accessed variables may be logically independent, their proximity in memory can

own cache, connected through a shared interconnect to main memory. At

memory, leading to false sharing*
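A common mitigation is to pad each CPU's logically independent data out to its own cache line, so that two CPUs never write to the same line. The sketch below assumes a 64-byte line via a hypothetical `CACHE_LINE_SIZE` (TF-A platforms normally express this granule as `CACHE_WRITEBACK_GRANULE`) and an assumed `PLATFORM_CORE_COUNT`.

```c
/*
 * Minimal sketch of avoiding false sharing between per-CPU counters.
 * Packing the counters into a plain array would put several of them in the
 * same cache line, so writes from different CPUs keep stealing the line
 * from each other. Padding each counter to a full line keeps updates local.
 * CACHE_LINE_SIZE and PLATFORM_CORE_COUNT are assumed values for this sketch.
 */
#include <stdint.h>

#define CACHE_LINE_SIZE		64U	/* assumed cache line / writeback granule */
#define PLATFORM_CORE_COUNT	4U	/* assumed number of cores */

struct padded_counter {
	uint64_t value;
	uint8_t pad[CACHE_LINE_SIZE - sizeof(uint64_t)];
} __attribute__((aligned(CACHE_LINE_SIZE)));

/* One cache-line-sized slot per CPU: no two CPUs ever write the same line. */
static struct padded_counter per_cpu_counters[PLATFORM_CORE_COUNT];

static inline void count_event(unsigned int cpu_id)
{
	per_cpu_counters[cpu_id].value++;
}
```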
modifications made to per-CPU data are written back to memory, making them
visible to other CPUs or system components that may access this memory. This
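In TF-A this write-back is normally performed with the data cache maintenance helpers, for example `flush_dcache_range()`, which cleans and invalidates the cache by virtual-address range. The sketch below only illustrates the idea; `percpu_data_get()` and `struct percpu_data` reuse the hypothetical names from the earlier sketch and are assumptions, not the actual implementation.

```c
/*
 * Sketch: push a CPU's modified per-CPU data out of its data cache so it is
 * visible to other observers of that memory. flush_dcache_range() is TF-A's
 * clean+invalidate-by-VA helper; the per-CPU accessor is the hypothetical
 * one from the earlier sketch.
 */
#include <stddef.h>
#include <stdint.h>

extern void flush_dcache_range(uintptr_t addr, size_t size);

struct percpu_data;
extern struct percpu_data *percpu_data_get(unsigned int node_id,
					   unsigned int cpu_in_node);

void publish_percpu_data(unsigned int node_id, unsigned int cpu_in_node,
			 size_t size)
{
	struct percpu_data *data = percpu_data_get(node_id, cpu_in_node);

	/* Clean (and invalidate) the lines covering this CPU's data so the
	 * latest modifications reach memory. */
	flush_dcache_range((uintptr_t)data, size);
}
```

Such a flush typically matters when the data may next be read by an observer that does not share the CPU's cache hierarchy, for instance after a core has been powered down.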