| a969e99e | 16-Jan-2025 |
Etienne Carriere <etienne.carriere@foss.st.com> |
core: mm: zero initialize tee_mm pool structures
Zero-initialize the tee_mm_pool_t instance when such a pool is initialized. This change fixes an issue where the phys_mem pool's max_allocated field may contain an indeterminate value because it was not zero-initialized when allocated by the commit referenced below.
Fixes: c596d8359eb3 ("core: add phys_mem allocation functions")
Signed-off-by: Etienne Carriere <etienne.carriere@foss.st.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
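A minimal sketch of the pattern behind this fix, assuming a simplified pool layout (field names below are illustrative, not the exact tee_mm_pool_t):

#include <string.h>

/* Simplified stand-in for tee_mm_pool_t; field names are illustrative. */
struct pool {
	void *entries;               /* allocation entries */
	unsigned long lo;            /* start of managed range */
	unsigned long size;          /* size of managed range */
	unsigned long max_allocated; /* statistic that must start at 0 */
};

static void pool_init(struct pool *pool, unsigned long lo, unsigned long size)
{
	/* The fix: clear everything first so no field keeps a stale value. */
	memset(pool, 0, sizeof(*pool));
	pool->lo = lo;
	pool->size = size;
}

int main(void)
{
	struct pool p;

	pool_init(&p, 0x80000000UL, 0x100000UL);
	return !(p.max_allocated == 0); /* starts from a known value */
}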
|
| 04e46975 | 16-Dec-2024 |
Etienne Carriere <etienne.carriere@foss.st.com> |
tree-wide: use ROUNDUP_DIV() where applicable
Use ROUNDUP_DIV() instead of ROUNDUP(..., size) / size where applicable.
Signed-off-by: Etienne Carriere <etienne.carriere@foss.st.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
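The change in a nutshell, with illustrative macro definitions (OP-TEE's <util.h> provides the real ones):

#include <assert.h>
#include <stddef.h>

#define ROUNDUP(v, size)     ((((v) + (size) - 1) / (size)) * (size))
#define ROUNDUP_DIV(v, size) (((v) + (size) - 1) / (size))

int main(void)
{
	size_t nbytes = 4100;
	size_t pgsize = 4096;

	/* Old spelling: round up to a multiple, then divide it away. */
	size_t npages_old = ROUNDUP(nbytes, pgsize) / pgsize;
	/* New spelling: one macro, and no multiply that could overflow. */
	size_t npages_new = ROUNDUP_DIV(nbytes, pgsize);

	assert(npages_old == 2 && npages_new == 2);
	return 0;
}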
|
| 76d6685e | 17-Dec-2024 |
Etienne Carriere <etienne.carriere@foss.st.com> |
tree-wide: use power-of-2 rounding macros where applicable
Use ROUNDUP2(), ROUNDUP2_OVERFLOW(), ROUNDUP2_DIV() and ROUNDDOWN2() in places where the rounding argument is a variable value and we want to leverage the implementations of these routines, which are optimized for a power-of-2 rounding argument.
Signed-off-by: Etienne Carriere <etienne.carriere@foss.st.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
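A minimal sketch of why the power-of-2 variants pay off: with a power-of-2 rounding argument, the division reduces to bit masking. The definitions below are illustrative, not the <util.h> originals:

#include <assert.h>
#include <stdint.h>

#define ROUNDUP2(v, a)   (((v) + (a) - 1) & ~((a) - 1))
#define ROUNDDOWN2(v, a) ((v) & ~((a) - 1))

int main(void)
{
	uintptr_t align = 0x1000; /* variable at runtime, but a power of 2 */

	assert(ROUNDUP2((uintptr_t)0x1234, align) == 0x2000);
	assert(ROUNDDOWN2((uintptr_t)0x1234, align) == 0x1000);
	return 0;
}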
|
| 8fda89c7 | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: merge core_mmu_init_phys_mem() and core_mmu_init_virtualization()
Moves the implementation of core_mmu_init_virtualization() into core_mmu_init_phys_mem().
This simplifies init_primary() in core/arch/arm/kernel/boot.c.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| e712be7a | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: initialize guest physical memory early
Initialize guest physical memory in virt_guest_created() before the first entry into the guest from normal world. This replaces the call to core_mmu_init_phys_mem() in init_tee_runtime().
Remove unused code in core_mmu_init_phys_mem() and the now unused functions core_mmu_get_ta_range() and virt_get_ta_ram().
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| f1284346 | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mm: allocate temporary memory map array
With CFG_BOOT_MEM enabled, allocate a temporary memory map array using boot_mem_alloc_tmp() instead of using the global static_mmap_regions[]. core_mmu_save_mem_map() is added and called from boot_init_primary_late() before the temporary memory is reused.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| fe85eae5 | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add CFG_BOOT_MEM and boot_mem_*() functions
Adds CFG_BOOT_MEM to support stack-like memory allocations during boot before a heap has been configured.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
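A stack-like boot allocator can be sketched as a bump pointer over a spare [base, end) range; the names below are illustrative, not the actual boot_mem_*() implementation:

#include <stddef.h>
#include <stdint.h>

static uintptr_t boot_mem_pos;
static uintptr_t boot_mem_end;

static void boot_mem_init(uintptr_t base, uintptr_t end)
{
	boot_mem_pos = base;
	boot_mem_end = end;
}

static void *boot_mem_alloc(size_t len, size_t align)
{
	/* align must be a non-zero power of 2 */
	uintptr_t va = (boot_mem_pos + align - 1) & ~((uintptr_t)align - 1);

	if (va > boot_mem_end || len > boot_mem_end - va)
		return NULL; /* range exhausted */
	boot_mem_pos = va + len; /* bump: allocations stack on each other */
	return (void *)va;
}

int main(void)
{
	static unsigned char pool_mem[1024];
	uintptr_t base = (uintptr_t)pool_mem;

	boot_mem_init(base, base + sizeof(pool_mem));
	return !boot_mem_alloc(100, 16); /* succeeds within the range */
}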
|
| d2e95293 | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mm,pager: map remaining physical memory
For CFG_WITH_PAGER=y map the remaining memory following the VCORE_INIT_RO memory to make sure that all physical TEE memory is mapped even if VCORE_INIT_RO doesn't cover it entirely.
This allows later patches to use the temporarily unused memory while booting.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| 9c1d818a | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mm: map memory using requested block size
TEE memory is always supposed to be mapped with 4k pages for maximum flexibility, but can_map_at_level() doesn't check the requested block size for a region, so fix that. However, assign_mem_granularity() assigns smaller-than-necessary block sizes to page-aligned regions, so fix that by only requesting 4k granularity for TEE memory and PGDIR granularity for the rest.
This is needed in later patches where some TEE memory is unmapped.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
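The shape of the added check, with an assumed signature and simplified surrounding checks:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uintptr_t paddr_t;
typedef uintptr_t vaddr_t;

/* A block of block_size bytes may be used at this level only if it is
 * no coarser than the granularity requested for the region. */
static bool can_map_at_level(paddr_t pa, vaddr_t va, size_t bytes_left,
			     size_t block_size, size_t granularity)
{
	if (block_size > granularity)	/* the added check */
		return false;
	return !(pa & (block_size - 1)) && !(va & (block_size - 1)) &&
	       bytes_left >= block_size;
}

int main(void)
{
	/* A 2 MB block is refused when the region asked for 4 KB pages */
	return can_map_at_level(0x200000, 0x200000, 0x400000,
				0x200000, 0x1000) ? 1 : 0;
}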
|
| 177b77f7 | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: virt: phys_mem_core_alloc() use both pools
With CFG_NS_VIRTUALIZATION=y let phys_mem_core_alloc() allocate from both the core_pool and ta_pool since both pools keep equally secure memory. This is needed in later patches when some translation tables are dynamically allocated from spare physical core memory.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
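A sketch of the fallback logic with stand-in pools (tee_mm details elided; names and structure assumed):

#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct pool { size_t avail; };	/* stand-in for tee_mm_pool_t */

static struct pool core_pool = { .avail = 0 };	/* pretend exhausted */
static struct pool ta_pool = { .avail = 4096 };
static const bool ns_virtualization = true;	/* CFG_NS_VIRTUALIZATION=y */

static void *pool_alloc(struct pool *p, size_t size)
{
	if (p->avail < size)
		return NULL;
	p->avail -= size;
	return malloc(size);	/* placeholder for a real mm entry */
}

static void *phys_mem_core_alloc(size_t size)
{
	void *mm = pool_alloc(&core_pool, size);

	/* Both pools hold equally secure memory, so fall back to the
	 * TA pool when the core pool cannot satisfy the request. */
	if (!mm && ns_virtualization)
		mm = pool_alloc(&ta_pool, size);
	return mm;
}

int main(void)
{
	/* core pool is empty, so this falls back to the TA pool */
	return !phys_mem_core_alloc(64);
}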
|
| 1c1f8b65 | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mm: unify secure core and TA memory
In configurations where secure core and TA memory is allocated from the same contiguous physical memory block, carve out the memory needed by OP-TEE core and make the rest available as TA memory.
This is needed by later patches where more core memory is allocated as needed from the pool of TA memory.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| 2f2f69df | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mm: replace MEM_AREA_TA_RAM
Replace MEM_AREA_TA_RAM with MEM_AREA_SEC_RAM_OVERALL.
All read/write secure memory is covered by MEM_AREA_SEC_RAM_OVERALL, sometimes using an aliased map. But secure read-only or executable core memory is not covered, as that would defeat the purpose of CFG_CORE_RWDATA_NOEXEC.
Since the partition TA memory isn't accessed via MEM_AREA_TA_RAM any longer, don't map it using the partition-specific map.
This is needed by later patches that make unification of OP-TEE core and physical TA memory possible.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| 06a25806 | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mm: allow unmapping VCORE_FREE
Allow unmapping core memory in the VCORE_FREE range when the original boot mapping isn't needed any more.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| a5ac48d6 | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add VCORE_FREE_{PA,SZ,END_PA}
Add VCORE_FREE_{PA,SZ,END_PA} defines to identify the unused and free memory range at the end of TEE_RAM_START..(TEE_RAM_START + TEE_RAM_VA_SIZE).
VCORE_FREE_SZ is 0 in a pager configuration since all the memory is used by the pager.
The VCORE_FREE range is excluded from the TEE_RAM_RW area for CFG_NS_VIRTUALIZATION=y and instead put in a separate NEX_RAM_RW area. This makes each partition use a bit less memory and leaves the VCORE_FREE range available for the Nexus.
The VCORE_FREE range is added to the TEE_RAM_RW area for the normal configuration with CFG_NS_VIRTUALIZATION=n and CFG_WITH_PAGER=n. In practice, behaviour is unchanged in this configuration.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
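A plausible shape of the new defines, with made-up layout values; this is a sketch, not the verbatim OP-TEE code:

#include <assert.h>

/* Made-up layout values, for illustration only */
#define TEE_RAM_START		0x0e000000UL
#define TEE_RAM_VA_SIZE		0x00200000UL
#define VCORE_UNPG_RW_PA	0x0e100000UL	/* start of last used area */
#define VCORE_UNPG_RW_SZ	0x00080000UL	/* size of last used area */

/* The free range runs from the end of the last used vcore area to the
 * end of the TEE_RAM window (assumed shape, not verbatim). */
#define VCORE_FREE_PA		(VCORE_UNPG_RW_PA + VCORE_UNPG_RW_SZ)
#define VCORE_FREE_END_PA	(TEE_RAM_START + TEE_RAM_VA_SIZE)
#define VCORE_FREE_SZ		(VCORE_FREE_END_PA - VCORE_FREE_PA)

int main(void)
{
	assert(VCORE_FREE_SZ == 0x00080000UL);
	return 0;
}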
|
| 1fbe848c | 13-Sep-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: remove CORE_MEM_TA_RAM
The buffer attribute CORE_MEM_TA_RAM isn't used to query the status of a buffer anywhere, so remove the attribute to allow future simplifications.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| 0ccf6468 | 21-Nov-2024 |
Sahil Malhotra <sahil.malhotra@nxp.com> |
core: mm: check return value from tee_mm_init()
Check the return value of tee_mm_init() instead of ignoring it.
Signed-off-by: Sahil Malhotra <sahil.malhotra@nxp.com>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Fixes: c596d8359eb3 ("core: add phys_mem allocation functions")
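The pattern of the fix, with stand-in definitions (the real tee_mm_init() signature differs; this is a simplified, assumed shape):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { int dummy; } tee_mm_pool_t;	/* stand-in */

static bool tee_mm_init(tee_mm_pool_t *pool, unsigned long lo,
			unsigned long size)	/* simplified signature */
{
	(void)pool;
	return size != 0 && lo != 0;
}

static void panic(const char *msg)
{
	fprintf(stderr, "panic: %s\n", msg);
	abort();
}

int main(void)
{
	tee_mm_pool_t pool;

	/* The fix: a failed pool initialization must not go unnoticed. */
	if (!tee_mm_init(&pool, 0x80000000UL, 0x100000UL))
		panic("tee_mm_init");
	return 0;
}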
|
| 32360649 | 04-Oct-2024 |
Etienne Carriere <etienne.carriere@foss.st.com> |
core: mm: use fdt_reg_info()
Use fdt_reg_info() instead of fdt_reg_base_address() and fdt_reg_size() to optimize the DT lookup, since the parent node then only needs to be found once.
Signed-off-by: Etienne Carriere <etienne.carriere@foss.st.com>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 3de913f6 | 21-Oct-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mm: fix mobj_tee_ram_rw initialization
Until this patch, for CFG_CORE_RWDATA_NOEXEC=n and CFG_CORE_ASLR=y there's an error in mobj_init() when the length of the combined TEE_RAM_RWX is calculated.
The relocatable address VCORE_UNPG_RW_PA is mixed with the absolute address TEE_RAM_START. Relocated addresses only change with CFG_CORE_ASLR=y, so before ASLR support this expression was correct.
The combined TEE_RAM_RWX is only used with CFG_CORE_RWDATA_NOEXEC=n, so that is also a prerequisite for the error. The calculated length field is usually not wrong enough to break code depending on mobj_tee_ram_rw/mobj_tee_ram_rx, so the error wasn't visible until length checks for phys_to_virt() were introduced with commit c2e4eb43b7b7 ("core_mmu: fix phys_to_virt() to check length").
Fix this by using VCORE_START_VA instead of TEE_RAM_START since the former is a relocated address.
Fixes: c2e4eb43b7b7 ("core_mmu: fix phys_to_virt() to check length")
Fixes: 170e9084a84f ("core: add support for CFG_CORE_ASLR")
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
|
| 7c04952c | 29-Oct-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix race in mobj_reg_shm_get_by_cookie()
Until this patch, there is a small window in mobj_reg_shm_get_by_cookie() after cpu_spin_unlock_xrestore() and before the reference counter is increased with mobj_get(). Fix that by calling mobj_get() before unlocking reg_shm_slist_lock.
Fixes: b96514926b8e ("core: reference count struct mobj")
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
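The essence of the fix, sketched with a pthread mutex standing in for the spinlock and an atomic counter for the mobj reference count:

#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

struct mobj { atomic_uint refc; };

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct mobj the_obj;	/* stand-in for the registered-SHM list */

static struct mobj *find_by_cookie_locked(unsigned long cookie)
{
	return cookie == 1 ? &the_obj : NULL;
}

struct mobj *get_by_cookie(unsigned long cookie)
{
	struct mobj *m = NULL;

	pthread_mutex_lock(&list_lock);
	m = find_by_cookie_locked(cookie);
	if (m)	/* the fix: take the reference while still holding the
		 * lock, so a concurrent free cannot win the race */
		atomic_fetch_add(&m->refc, 1);
	pthread_mutex_unlock(&list_lock);
	return m;
}

int main(void)
{
	struct mobj *m = get_by_cookie(1);

	return !m || atomic_load(&m->refc) != 1;
}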
|
| 1502e43d | 14-Aug-2024 |
Yu Chien Peter Lin <peterlin@andestech.com> |
core: mm: core_mmu: don't use check_va_matches_pa() on RISC-V
arch_va2pa_helper() on RISC-V implements a software page table walker. It requires phys_to_virt() to convert the physical address of the page table found in a PTE to the virtual address of the next-level page table. This can lead to a stack overflow caused by indirect recursion, as shown below:
phys_to_virt() <--------------------------------.
  -> check_va_matches_pa()                      |
     -> virt_to_phys()                          |
        -> arch_va2pa_helper()                  |
           -> core_mmu_xlat_table_entry_pa2va()-'
As arch_va2pa_helper() can return true if the VA matches the PA, we don't use check_va_matches_pa() when CFG_TEE_CORE_DEBUG is enabled.
Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Alvin Chang <alvinga@andestech.com>
Tested-by: Alvin Chang <alvinga@andestech.com>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| 90c16066 | 15-Aug-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: rename to core_mmu_init_phys_mem()
Rename core_mmu_init_ta_ram() to core_mmu_init_phys_mem() to give the function a more accurate name.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| de19cacb | 08-May-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: replace tee_mm_sec_ddr with phys_mem functions
Replace the tee_mm_sec_ddr mm pool with the phys_mem functions. This doesn't change the behaviour.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| c596d835 | 26-Jul-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add phys_mem allocation functions
Add nex_phys_mem and phys_mem allocation functions. These functions are intended to replace the previous calls to tee_mm functions with virt_mapper_pool or tee_mm_sec_ddr as arguments.
The pool of physical memory is divided into two parts, core and ta. All physical TA memory allocations are done from the core pool if a ta pool isn't added. This might be the case if core and ta physical memory reside in the same physical memory range.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
|
| 10b19e73 | 09-Jul-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mm: add core_mmu_for_each_map()
Add core_mmu_for_each_map() to iterate over all memory regions, struct tee_mmap_region.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
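A likely shape for such an iterator, with assumed names, an assumed callback contract, and a simplified region type:

#include <stddef.h>

struct tee_mmap_region { unsigned long va; size_t size; };	/* simplified */

typedef int (*mmap_cb_t)(struct tee_mmap_region *r, void *ptr);

static struct tee_mmap_region maps[8];
static size_t map_count;

/* Walk every region, letting the callback stop the walk by returning
 * a non-zero value (assumed contract, not verbatim). */
static int for_each_map(void *ptr, mmap_cb_t cb)
{
	size_t n = 0;
	int res = 0;

	for (n = 0; n < map_count; n++) {
		res = cb(maps + n, ptr);
		if (res)
			return res;
	}
	return 0;
}

static int count_cb(struct tee_mmap_region *r, void *ptr)
{
	(void)r;
	(*(size_t *)ptr)++;
	return 0; /* keep walking */
}

int main(void)
{
	size_t count = 0;

	map_count = 3;
	for_each_map(&count, count_cb);
	return count != 3;
}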
|
| e53d1206 | 16-Jul-2024 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add_phys_mem(): fix mergeable physical memory
The test in add_phys_mem() to see if two physical memory ranges can be merged only checks for overlapping memory ranges, but consecutive ranges are not detected even if they can be merged. Fix this by also checking if the byte after the lowest range matches the beginning of the next range.
The resulting merged entry might be mergeable with the previous or next entry, so add checks for that and merge if possible.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
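The added adjacency test, reduced to its core (inclusive range bounds assumed):

#include <assert.h>
#include <stdbool.h>

struct range { unsigned long begin; unsigned long end; };	/* inclusive */

static bool mergeable(struct range a, struct range b)
{
	/* overlap, as before */
	if (a.begin <= b.end && b.begin <= a.end)
		return true;
	/* the fix: also merge consecutive ranges, i.e. when the byte
	 * after one range is the first byte of the other */
	return a.end + 1 == b.begin || b.end + 1 == a.begin;
}

int main(void)
{
	struct range lo = { 0x1000, 0x1fff };
	struct range hi = { 0x2000, 0x2fff };

	assert(mergeable(lo, hi));	/* consecutive: now merged */
	assert(!mergeable(lo, (struct range){ 0x3000, 0x3fff }));
	return 0;
}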
|