# 00338334 | 31-Oct-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: support dynamic protected memory lending
With CFG_CORE_DYN_PROTMEM=y, support dynamic protected memory lending.
A new internal struct mobj_ffa_rsm is added to handle dynamic protected memory for FF-A.
A new internal struct mobj_protmem is added to handle dynamic protected memory without FF-A.
Lending non-secure memory to OP-TEE for use as protected memory means that the memory should become inaccessible to the normal world as part of the process. This part is currently not supported, since it must be done in a platform-specific way on platforms that support it. QEMU doesn't support it.
Add two platform-specific functions, plat_get_protmem_config() and plat_set_protmem_range(), for dynamic protected memory. The functions have __weak implementations to allow easier testing. However, plat_set_protmem_range() requires CFG_INSECURE=y since it doesn't change memory protection.
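A minimal sketch of the __weak fallback described above; the signature and header set are assumptions drawn from this message, not the actual OP-TEE API:

    /* Hypothetical sketch of the __weak default for plat_set_protmem_range() */
    #include <compiler.h>
    #include <tee_api_types.h>
    #include <types_ext.h>

    TEE_Result __weak plat_set_protmem_range(paddr_t base, size_t size)
    {
            /*
             * A real platform would program its memory protection
             * controller here so the normal world loses access to
             * [base, base + size). This default changes nothing, which
             * is why it is only acceptable with CFG_INSECURE=y.
             */
            (void)base;
            (void)size;

            return TEE_SUCCESS;
    }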
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
# e06a9ea5 | 26-Jul-2024 | Volodymyr Babchuk <volodymyr_babchuk@epam.com>
mmu: ignore VA spaces in core_mmu_get_type_by_pa
VA spaces have no valid PA addresses stored in the memory map, so they are not valid return values for the core_mmu_get_type_by_pa() function.
This issue was discovered when OP-TEE tried to access a device tree stored at the very beginning of the physical address space. In my case it had PA address 0x112C0, which was "covered" by RES_VASPACE:
D/TC:0 0 dump_mmap_table:838 type RES_VASPACE va 0x1d800000..0x1e1fffff pa 0x00000000..0x009fffff size 0x00a00000 (pgdir)
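The shape of the fix, as a hedged sketch (the function body and iteration details are illustrative, not the exact patch):

    /* Skip regions that only reserve VA space when matching by PA */
    enum teecore_memtypes sketch_get_type_by_pa(struct tee_mmap_region *map,
                                                size_t count, paddr_t pa)
    {
            size_t n = 0;

            for (n = 0; n < count; n++) {
                    if (map[n].type == MEM_AREA_RES_VASPACE ||
                        map[n].type == MEM_AREA_SHM_VASPACE)
                            continue;       /* PA field is meaningless here */
                    if (pa >= map[n].pa && pa < map[n].pa + map[n].size)
                            return map[n].type;
            }

            return MEM_AREA_MAXTYPE;
    }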
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com> Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
# 2cd578ba | 23-May-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: fix asan for CFG_WITH_PAGER=n
Some fixes are needed to make CFG_CORE_SANITIZE_KADDRESS=y work both with and without CFG_DYN_CONFIG=y.
Sanitizing stack addresses isn't supported with CFG_DYN_CONFIG=y since it requires extensive changes in the ASAN framework.
The VCORE_FREE area is moved right before the .asan_shadow area.
init_asan() calls boot_mem_init_asan() to tag access to already allocated boot memory.
entry_a32.S is updated to skip allowing access to stacks in the .asan_shadow area for CFG_DYN_CONFIG=y since stacks are stored elsewhere in that configuration.
entry_a64.S is updated to initialize the .asan_shadow area in the same way as in entry_a32.S.
The .asan_shadow area is mapped explicitly in collect_mem_ranges() instead of relying on the now non-existent coverage of MEM_AREA_TEE_RAM_RW.
The combination of CFG_DYN_CONFIG=y and CFG_WITH_PAGER=y is not yet known to work.
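For context, ASAN's shadow bookkeeping boils down to one address computation; a minimal sketch under assumed symbol names (OP-TEE derives the real shadow bounds from the link script):

    #define ASAN_SHADOW_SCALE       8       /* one shadow byte per 8 bytes */

    static int8_t *va_to_shadow(vaddr_t va, vaddr_t mem_base,
                                int8_t *shadow_base)
    {
            return shadow_base + (va - mem_base) / ASAN_SHADOW_SCALE;
    }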
Fixes: 1c1f8b65b5c6 ("core: mm: unify secure core and TA memory") Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 26685a91 | 15-Mar-2025 | Yu-Chien Peter Lin <peter.lin@sifive.com>
core: mm: factor out virtual address range validation to arch code
Move virtual address range validation into architecture-specific code since different architectures have different constraints on valid VA ranges:
- For ARM, addresses must be within the VA width supported by the MMU
- For RISC-V, additional checks are needed on RV64 to ensure addresses are canonically valid (see the sketch below)
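For the RISC-V case, a canonical Sv39 address must replicate bit 38 through bit 63; a hypothetical sketch of such a check:

    /* Sv39: bits [63:38] must be all zeros or all ones */
    static bool va_is_canonical_sv39(uint64_t va)
    {
            uint64_t upper = va >> 38;      /* the 26 top bits, incl. bit 38 */

            return !upper || upper == 0x3ffffff;
    }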
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com> Reviewed-by: Alvin Chang <alvinga@andestech.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
# 232f1cde | 08-Mar-2025 | Yu-Chien Peter Lin <peter.lin@sifive.com>
core: mm: refactor ASLR mapping for architecture support
To allow adding RISC-V ASLR support, add arch_aslr_base_addr() which will be used to apply architecture specific ASLR base calculation.
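As a rough sketch of the kind of calculation such an arch hook performs (the bounds, seed source, and granularity are assumptions):

    /* Pick a granule-aligned base within [min_va, max_va) from a seed */
    static vaddr_t pick_aslr_base(vaddr_t min_va, vaddr_t max_va,
                                  uint64_t seed, size_t granule)
    {
            uint64_t slots = (max_va - min_va) / granule;

            return min_va + (seed % slots) * granule;
    }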
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com> Suggested-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Alvin Chang <alvinga@andestech.com>
# 6a2e17e9 | 20-Mar-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: mm: shared xlat tables for NEX_DYN_VASPACE
Mappings in MEM_AREA_NEX_DYN_VASPACE belong to the nexus and must be the same for all partitions. Since these mappings must be updated in the partitions after the MMU has been enabled, the partitions share translation tables for these mappings, so we only need to update one translation table when adding or removing mappings.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 7d5b298b | 09-Apr-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: fix discovered ns-mem check
When discovering or assigning available non-secure physical memory, it is checked against overlaps with other memory types. Memory types reserving virtual memory space should be excluded, including the two recently added types MEM_AREA_NEX_DYN_VASPACE and MEM_AREA_TEE_DYN_VASPACE. This was missed when the memory types were added, so add the check to exclude them now.
This fixes an error like:
E/TC:0 check_phys_mem_is_outside:455 Non-sec mem (0:0x60000000) overlaps map (type 10 0:0x100000)
E/TC:0 Panic at core/mm/core_mmu.c:459 <check_phys_mem_is_outside>
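A hedged sketch of the exclusion applied during the overlap check (the helper name is illustrative; the VA-space types are the ones named above):

    static bool is_va_space_type(enum teecore_memtypes type)
    {
            switch (type) {
            case MEM_AREA_RES_VASPACE:
            case MEM_AREA_SHM_VASPACE:
            case MEM_AREA_NEX_DYN_VASPACE:
            case MEM_AREA_TEE_DYN_VASPACE:
                    return true;    /* reserves VA only, no PA to overlap */
            default:
                    return false;
            }
    }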
Fixes: 96f43358c593 ("core: add nex_dyn_vaspace and tee_dyn_vaspace areas") Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 96f43358 | 26-Feb-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: add nex_dyn_vaspace and tee_dyn_vaspace areas
Add MEM_AREA_NEX_DYN_VASPACE and MEM_AREA_TEE_DYN_VASPACE areas for dynamic Nexus and TEE memory mapping. This will be used to map additional heap and the stacks in later patches.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# d5f3d146 | 26-Feb-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: mmu: fix dynamic VA region dummy mapping
The commit 873f5f6c7201 ("core: mmu: Add dynamic VA regions' mapping to page table") populated the page tables so that they are all available later when needed. However, it also mapped physical address 0 in all those ranges. So fix this by setting the attributes to 0 when the physical address is 0.
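The gist of the fix, sketched around OP-TEE's core_mmu_set_entry() (the surrounding variables are illustrative):

    uint32_t attr = mattr;

    if (!pa)
            attr = 0;       /* keep the entry populated, but map nothing */
    core_mmu_set_entry(tbl_info, idx, pa, attr);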
Fixes: 873f5f6c7201 ("core: mmu: Add dynamic VA regions' mapping to page table") Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
# bea90f04 | 03-Mar-2025 | Alvin Chang <alvinga@andestech.com>
core: Implicitly enable CFG_BOOT_MEM
Now both ARM and RISC-V architectures support and enable CFG_BOOT_MEM by default. It's unnecessary to define CFG_BOOT_MEM. This commit removes CFG_BOOT_MEM and relevant dead code.
Signed-off-by: Alvin Chang <alvinga@andestech.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
# 873f5f6c | 12-Feb-2025 | Mark Zhang <markz@nvidia.com>
core: mmu: Add dynamic VA regions' mapping to page table
When OP-TEE boots, the initial mapping for MEM_AREA_RES_VASPACE and MEM_AREA_SHM_VASPACE should be added into the page tables and replicated to all CPU cores too. This fixes an issue when the VA of MEM_AREA_RES_VASPACE or MEM_AREA_SHM_VASPACE is not in the same 1GB region as the other memory regions.
Link: https://github.com/OP-TEE/optee_os/issues/7275 Signed-off-by: Mark Zhang <markz@nvidia.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
# be4e7607 | 11-Feb-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: mm: fix carve_out_phys_mem()
Prior to this patch, carve_out_phys_mem() does not handle cases where the memory to be carved out isn't entirely covered by the physical memory. So fix carve_out_phys_mem() to handle carving out memory that may only partially overlap the physical memory.
Add debug prints in core_mmu_set_discovered_nsec_ddr() to list the non-secure RAM areas.
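To make the partial-overlap handling concrete, here is a self-contained sketch of carving an interval out of a physical block; the names and the tail-output convention are illustrative, not the patch itself:

    #include <types_ext.h>
    #include <util.h>

    struct phys_range {
            paddr_t pa;
            size_t size;
    };

    /*
     * Remove [co_pa, co_pa + co_sz) from *mem. A carve in the middle
     * splits the block: *mem keeps the head and *tail gets the rest.
     */
    static void carve_out(struct phys_range *mem, paddr_t co_pa,
                          size_t co_sz, struct phys_range *tail)
    {
            paddr_t co_end = co_pa + co_sz;
            paddr_t mem_end = mem->pa + mem->size;

            tail->size = 0;

            if (co_end <= mem->pa || co_pa >= mem_end)
                    return;         /* no overlap at all */

            /* Clamp the carve-out to the overlapping part */
            co_pa = MAX(co_pa, mem->pa);
            co_end = MIN(co_end, mem_end);

            if (co_pa > mem->pa && co_end < mem_end) {
                    tail->pa = co_end;              /* split: keep both ends */
                    tail->size = mem_end - co_end;
                    mem->size = co_pa - mem->pa;
            } else if (co_pa > mem->pa) {
                    mem->size = co_pa - mem->pa;    /* trim the tail */
            } else if (co_end < mem_end) {
                    mem->pa = co_end;               /* trim the head */
                    mem->size = mem_end - co_end;
            } else {
                    mem->size = 0;                  /* fully carved away */
            }
    }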
Fixes: 941dec3a7f6f ("core: adjust nsec ddr memory size correctly") Fixes: 490c50dfdb33 ("core: assign non-sec DDR configuration from DT") Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# a7aaad05 | 11-Feb-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: mm: fix panic with TEE_SDP_TEST_MEM
The commit 2f2f69df5afe ("core: mm: replace MEM_AREA_TA_RAM") uses MEM_AREA_SEC_RAM_OVERALL to map practically all secure memory. This conflicts with TEE_SDP_TEST_MEM where MEM_AREA_SEC_RAM_OVERALL covers TEE_SDP_TEST_MEM and triggers a panic in verify_special_mem_areas().
The commit 1c1f8b65b5c6 ("core: mm: unify secure core and TA memory") changed the code to use vaddr_to_phys() to find the physical address for TEE_SDP_TEST_MEM_BASE. This isn't right since TEE_SDP_TEST_MEM_BASE refers to physical memory only.
So fix these problems.
Fixes: 2f2f69df5afe ("core: mm: replace MEM_AREA_TA_RAM") Fixes: 1c1f8b65b5c6 ("core: mm: unify secure core and TA memory") Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 34150464 | 24-Jan-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: fix partially unmapped MEM_AREA_TEE_RAM_RW
The commit 06a258064a92 ("core: mm: allow unmapping VCORE_FREE") allows unmapping pages from the VCORE_FREE virtual memory range, but no bookkeeping is added apart from what's recorded in the translation tables. Later, the commit 7c9b85432343 ("core: allow partially unmapped MEM_AREA_TEE_RAM_RW") looks up the translation tables using arch_va2pa_helper() to find out if pages in the VCORE_FREE virtual memory range are mapped. This works well on arm, but not on riscv, which must traverse the translation tables in software and thus is caught in an infinite recursive loop.
Fix this problem by updating the memory regions in the struct memory_map (splitting, shrinking, and removing) as needed.
Reported-by: Huang Borong <huangborong@bosc.ac.cn> Closes: https://github.com/OP-TEE/optee_os/issues/7237 Fixes: 06a258064a92 ("core: mm: allow unmapping VCORE_FREE") Fixes: 7c9b85432343 ("core: allow partially unmapped MEM_AREA_TEE_RAM_RW") Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 9b941cd7 | 23-Jan-2025 | Sungbae Yoo <sungbaey@nvidia.com>
core: mmu: fix memory regions found from ff-a manifest
Fix the 5th parameter of add_phys_mem() in collect_device_mem_ranges(): it has to be the size of the memory region, not the end address of the region.
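The gist of the one-line fix, shown diff-style; the surrounding arguments are assumptions, since the message only pins down the 5th parameter:

    - add_phys_mem(mem_map, "mem-region", MEM_AREA_IO_SEC, base, end);
    + add_phys_mem(mem_map, "mem-region", MEM_AREA_IO_SEC, base, end - base);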
Fixes: b8ef8d0b6ff4 ("core: mm: introduce struct memory_map") Signed-off-by: Sungbae Yoo <sungbaey@nvidia.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
# ef0d00c1 | 10-Jul-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: mm: extend temporary dummy memory map
core_init_mmu_map() uses a temporary dummy memory map for the virt_to_phys() and phys_to_virt() conversions to avoid asserting while setting up translation tables before the MMU is enabled. CFG_DYN_CONFIG will need a larger range of memory since translation tables might not be allocated from .nozi memory only. So for CFG_DYN_CONFIG, extend the dummy map to the end of the unused memory range that the boot_mem_*() functions allocate memory from.
Introduce CFG_DYN_CONFIG, enabled by default if CFG_BOOT_MEM is enabled and CFG_WITH_PAGER disabled. CFG_DYN_CONFIG conflicts with CFG_WITH_PAGER since the pager uses a different mechanism for memory allocation.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 7c9b8543 | 16-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: allow partially unmapped MEM_AREA_TEE_RAM_RW
Add special checks in phys_to_virt_tee_ram() to verify that a virtual address is indeed mapped before returning the address if the memory area is MEM_AREA_TEE_RAM_RW, since VCORE_FREE may be unmapped.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 76d6685e | 17-Dec-2024 | Etienne Carriere <etienne.carriere@foss.st.com>
tree-wide: use power-of-2 rounding macros where applicable
Use ROUNDUP2(), ROUNDUP2_OVERFLOW(), ROUNDUP2_DIV() and ROUNDDOWN2() at places where the rounding argument is a variable value and we want to leverage the implementation of these routines optimized for a power-of-2 rounding argument.
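For reference, the power-of-2 specialization these macros exploit is plain bit masking; a sketch of the underlying identity (the real definitions live in OP-TEE's util.h):

    /* Valid only when align is a power of two */
    #define ROUNDDOWN2_SKETCH(v, align)  ((v) & ~((align) - 1))
    #define ROUNDUP2_SKETCH(v, align)    (((v) + (align) - 1) & ~((align) - 1))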
Signed-off-by: Etienne Carriere <etienne.carriere@foss.st.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
# 8fda89c7 | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: merge core_mmu_init_phys_mem() and core_mmu_init_virtualization()
Moves the implementation of core_mmu_init_virtualization() into core_mmu_init_phys_mem().
This simplifies init_primary() in core/arch/arm/kernel/boot.c.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# e712be7a | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: initialize guest physical memory early
Initialize guest physical memory in virt_guest_created() before the first entry into the guest from normal world. This replaces the call to core_mmu_init_phys_mem() in init_tee_runtime().
Remove unused code in core_mmu_init_phys_mem() and the now unused functions core_mmu_get_ta_range() and virt_get_ta_ram().
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# f1284346 | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: mm: allocate temporary memory map array
With CFG_BOOT_MEM enabled, allocate a temporary memory map array using boot_mem_alloc_tmp() instead of using the global static_mmap_regions[]. core_mmu_save_mem_map() is added and called from boot_init_primary_late() before the temporary memory is reused.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# d2e95293 | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: mm,pager: map remaining physical memory
For CFG_WITH_PAGER=y, map the remaining memory following the VCORE_INIT_RO memory to make sure that all physical TEE memory is mapped even if VCORE_INIT_RO doesn't cover it entirely.
This will allow later patches to make use of the temporarily unused memory while booting.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 9c1d818a | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: mm: map memory using requested block size
TEE memory is always supposed to be mapped with 4k pages for maximum flexibility, but can_map_at_level() doesn't check the requested block size for a region, so fix that. However, assign_mem_granularity() assigns smaller-than-necessary block sizes on page-aligned regions, so fix that by only requesting 4k granularity for TEE memory and PGDIR granularity for the rest.
This is needed in later patches where some TEE memory is unmapped.
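A sketch of the granularity rule described above; the predicate is hypothetical, while the two size constants are OP-TEE's usual names:

    static size_t region_granularity(enum teecore_memtypes type)
    {
            if (type_is_tee_memory(type))   /* hypothetical predicate */
                    return SMALL_PAGE_SIZE;         /* 4 KiB pages */

            return CORE_MMU_PGDIR_SIZE;             /* pgdir granularity */
    }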
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 1c1f8b65 | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: mm: unify secure core and TA memory
In configurations where secure core and TA memory is allocated from the same contiguous physical memory block, carve out the memory needed by OP-TEE core and make the rest available as TA memory.
This is needed by later patches where more core memory is allocated as needed from the pool of TA memory.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
# 2f2f69df | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: mm: replace MEM_AREA_TA_RAM
Replace MEM_AREA_TA_RAM with MEM_AREA_SEC_RAM_OVERALL.
All read/write secure memory is covered by MEM_AREA_SEC_RAM_OVERALL, sometimes using an aliased map. But secure read-only or executable core memory is not covered, as that would defeat the purpose of CFG_CORE_RWDATA_NOEXEC.
Since the partition TA memory isn't accessed via MEM_AREA_TA_RAM any longer, don't map it using the partition-specific map.
This is needed by later patches where OP-TEE core and physical TA memory are unified.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>