# 232f1cde | 08-Mar-2025 | Yu-Chien Peter Lin <peter.lin@sifive.com>
core: mm: refactor ASLR mapping for architecture support
To allow adding RISC-V ASLR support, add arch_aslr_base_addr(), which will be used to apply the architecture-specific ASLR base calculation.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Suggested-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Alvin Chang <alvinga@andestech.com>
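For illustration, a minimal sketch of what such a hook can look like; the parameter list and the seed mixing below are assumptions for this sketch, not the actual OP-TEE code:

    #include <types_ext.h>
    #include <util.h>

    /* Hypothetical architecture hook: derive a randomized core base
     * address from a boot-time seed (names and math are assumptions). */
    static vaddr_t arch_aslr_base_addr(vaddr_t start_addr, uint64_t seed,
                                       unsigned int iteration)
    {
            /* Mix the seed differently on each placement retry. */
            uint64_t mix = seed ^ ((uint64_t)iteration << 56);

            /* Keep the base 2 MiB aligned within a 4 GiB window. */
            return start_addr + (mix & GENMASK_64(31, 21));
    }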

# 21773c96 | 02-Mar-2024 | Etienne Carriere <etienne.carriere@foss.st.com>
core: arm: mm: use thread_unmask_exceptions() where applicable
Change cache_op_outer() to use thread_unmask_exceptions() instead of thread_set_exceptions(), since the function is unmasking exceptions it previously masked with thread_set_exceptions(). This change makes the implementation more consistent.
No functional change.
Reviewed-by: Sumit Garg <sumit.garg@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Etienne Carriere <etienne.carriere@foss.st.com>
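The pattern in question, sketched with the thread API from <kernel/thread.h> (the function body is illustrative):

    #include <kernel/thread.h>

    static void cache_maintenance_masked(void)
    {
            /* Mask all exceptions, remembering the previous mask state. */
            uint32_t exceptions = thread_mask_exceptions(THREAD_EXCP_ALL);

            /* ... the cache maintenance operation runs masked here ... */

            /*
             * Restore the saved state. thread_unmask_exceptions() states
             * the intent directly instead of passing the saved value to
             * thread_set_exceptions().
             */
            thread_unmask_exceptions(exceptions);
    }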

# 5ca2c365 | 10-Jan-2024 | Clement Faure <clement.faure@nxp.com>
core: remove unnecessary includes
Remove unnecessary includes.
Signed-off-by: Clement Faure <clement.faure@nxp.com>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

# bace0716 | 07-Dec-2023 | Clement Faure <clement.faure@nxp.com>
core: arm: allow cache_op_outer() to operate on non-secure buffers
According to the ARM PL310 documentation, if the operation is specific to the PA, the behavior is the following:
- Secure access: the data in the cache is only affected by the operation if it is secure.
- Non-secure access: the data in the cache is only affected by the operation if it is non-secure.
Depending on the buffer location, use the secure or non-secure PL310 base address to perform the physical-address-based cache operation on the buffer.
Link: https://developer.arm.com/documentation/ddi0246/a/programmer-s-model/register-descriptions/register-7--cache-maintenance-operations
Signed-off-by: Clement Faure <clement.faure@nxp.com>
Signed-off-by: Cedric Neveux <cedric.neveux@nxp.com>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
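A sketch of the base selection this describes; pl310_nsbase() is an assumed name for the non-secure alias accessor:

    #include <mm/core_memprot.h>

    /* Platform-provided accessors for the two PL310 register aliases. */
    vaddr_t pl310_base(void);
    vaddr_t pl310_nsbase(void);     /* assumed name */

    /* Sketch: pick the PL310 alias matching the buffer's security state. */
    static vaddr_t pl310_base_for(paddr_t pa, size_t len)
    {
            if (tee_pbuf_is_sec(pa, len))
                    return pl310_base();    /* secure alias */

            return pl310_nsbase();          /* non-secure alias */
    }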

# fe16b87b | 08-Jun-2023 | Alvin Chang <alvinga@andestech.com>
core: mm: Rename "mva" to "va" for TLB operations
The terminology "mva" is specific to older ARM architectures that have the FCSE extension. To support multiple architectures, rename "mva" to the common terminology "va". This patch renames "mva" to "va" in the TLB operations for ARM64 and RISC-V. For ARM32, "mva" is kept because it is actually defined in the ARM32 documentation.
Signed-off-by: Alvin Chang <alvinga@andestech.com>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

# d8ba4bae | 08-Feb-2022 | Jens Wiklander <jens.wiklander@linaro.org>
core: split core/arch/arm/mm/core_mmu.c
Splits core/arch/arm/mm/core_mmu.c into one generic and one architecture-specific file.
Acked-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Marouene Boubakri <marouene.boubakri@nxp.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

# 01ef8af4 | 08-Feb-2022 | Jens Wiklander <jens.wiklander@linaro.org>
core: introduce TRUSTED_{S,D}RAM_*
Introduces TRUSTED_{S,D}RAM_*, intended to replace TZ{S,D}RAM_* in the longer term. In this patch we're cleaning up core_mmu.c to make it less architecture dependent.
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Marouene Boubakri <marouene.boubakri@nxp.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

# c02edd30 | 08-Feb-2022 | Jens Wiklander <jens.wiklander@linaro.org>
core: split core_mmu_private.h
Splits core_mmu_private.h into <mm/core_mmu_arch.h> and <mm/core_mmu.h>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Marouene Boubakri <marouene.boubakri@nxp.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

# 9e6889eb | 17-Dec-2021 | Etienne Carriere <etienne.carriere@linaro.org>
core: arm: mmu: fix find_map_by_pa() on areas end addresses
Fix find_map_by_pa() to test the inclusive end address of an area, preventing issues when the exclusive end address (pa + size) overflows the address width and wraps.
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
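The overflow-safe test amounts to comparing against the inclusive end address, roughly:

    #include <stdbool.h>
    #include <mm/core_mmu.h>

    /* Sketch: match pa against a map entry without overflowing paddr_t. */
    static bool pa_is_in_map(const struct tee_mmap_region *map, paddr_t pa)
    {
            if (!map->size)
                    return false;

            /*
             * Compare with the inclusive end (pa + size - 1): for an area
             * reaching the top of the paddr_t range, the exclusive end
             * (pa + size) would wrap to 0 and the test would always fail.
             */
            return pa >= map->pa && pa <= map->pa + map->size - 1;
    }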

# 4f69ab71 | 06-Dec-2021 | Jerome Forissier <jerome@forissier.org>
core: arm: mmu: fix compile time assertion to allow 48-bit VA space
The compile-time assertion on CFG_LPAE_ADDR_SPACE_BITS is inconsistent with the one in <mm/core_mmu.h>. It should allow a 48-bit size.
Signed-off-by: Jerome Forissier <jerome@forissier.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
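The corrected bound, sketched as a C11 static assertion (OP-TEE's own assertion macro may differ):

    #include <assert.h>

    /* Sketch: accept the full LPAE range, up to a 48-bit VA space. */
    static_assert(CFG_LPAE_ADDR_SPACE_BITS >= 32 &&
                  CFG_LPAE_ADDR_SPACE_BITS <= 48,
                  "invalid CFG_LPAE_ADDR_SPACE_BITS");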

# 2380d700 | 27-Aug-2021 | Lionel Debieve <lionel.debieve@foss.st.com>
core: mmu: fix overflow with high address in tee_mm_pool_t
When TA_RAM is defined at the end of the address range, the high address falls outside the paddr_t limits, which results in a 0 address being used. The size must be used rather than the high address to avoid this overflow issue. Update the corresponding files due to the API modification.
Signed-off-by: Lionel Debieve <lionel.debieve@foss.st.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
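The overflow is plain unsigned arithmetic; comparing against the size, as the commit does, avoids it. A self-contained illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t paddr_t;   /* 32-bit physical addresses, for illustration */

    /*
     * With lo = 0xF0000000 and size = 0x10000000 the pool ends exactly at
     * the top of the 32-bit range: lo + size wraps to 0, so any check built
     * on the high address fails. Subtracting first avoids the overflow.
     */
    static bool addr_is_in_pool(paddr_t lo, size_t size, paddr_t addr)
    {
            return addr >= lo && (addr - lo) < size;
    }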

# 16dfecc2 | 28-Oct-2021 | Jens Wiklander <jens.wiklander@linaro.org>
core: fix ASLR problem with short-descriptor table mappings
With short-descriptor table mappings, that is, without LPAE, the user va range is defined at the lowest addresses. Depending on the seed supplied, this could conflict with the chosen base address for core mappings. Add a check early in assign_mem_va() to avoid such conflicts.
Without this patch there's a risk of occasional panics like:

    E/TC:0 0 Panic 'issue in linear address space' at core/arch/arm/mm/core_mmu.c:2147 <check_pa_matches_va>
    E/TC:0 0 TEE load address @ 0xa34000
    E/TC:0 0 Call stack:
    E/TC:0 0  0x00a3a901
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

# 9e788d37 | 07-Sep-2021 | Jens Wiklander <jens.wiklander@linaro.org>
core: virt: check pa at end of check_pa_matches_va()
Prior to this patch, check_pa_matches_va() skipped the final catch-all check on the physical address. It should be possible to perform this check with virtualization enabled, so enable it for virtualization too.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

# a94111b9 | 31-Aug-2021 | Jens Wiklander <jens.wiklander@linaro.org>
core: virtualization.h: add dummy static inline functions
Adds dummy static inline functions to replace the normal virt_*() functions in virtualization.h when CFG_VIRTUALIZATION is not configured.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
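The pattern, sketched for a single function (the real set of functions and their return values live in <kernel/virtualization.h>):

    #if defined(CFG_VIRTUALIZATION)
    TEE_Result virt_guest_created(uint16_t guest_id);
    #else
    /* Dummy: lets callers drop their own #ifdef CFG_VIRTUALIZATION guards. */
    static inline TEE_Result virt_guest_created(uint16_t guest_id __unused)
    {
            return TEE_ERROR_NOT_SUPPORTED;
    }
    #endif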

# 0187e477 | 07-Jun-2021 | Izik Dubnov <izik@amazon.com>
core: mmu: replace "1 << x" with "BIT64(x)"
"1" instead of "1ULL" caused issues with calculations when address width is higher than 32 bits. Uses BIT64() instead of explicit "1ULL".
Signed-off-by: Izik Dubnov <izik@amazon.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
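The macros, as defined in OP-TEE's <util.h>, and the failure mode they avoid:

    #include <stdint.h>

    #define BIT32(nr)  (UINT32_C(1) << (nr))
    #define BIT64(nr)  (UINT64_C(1) << (nr))

    /*
     * "1 << 40" shifts a plain (32-bit) int and is undefined behavior;
     * BIT64(40) yields the intended 0x10000000000.
     */
    static const uint64_t va_size = BIT64(40);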

# 0d206ea0 | 07-Jun-2021 | Izik Dubnov <izik@amazon.com>
core: lpae: use "base table" naming instead of "l1 table"
This is a preparation for supporting a base table which is not level 1 (i.e. supporting level 0). It tries not to change anything functional; it is rather just a renaming. The "base table" terminology is taken from TF-A.
- Renamed CORE_MMU_L1_TBL_OFFSET -> CORE_MMU_BASE_TABLE_OFFSET
- Added CORE_MMU_BASE_TABLE_LEVEL instead of the hard-coded "1"
- Added CORE_MMU_BASE_TABLE_SHIFT instead of the hard-coded "30"
A few new defines were copied from TF-A's xlat_tables_def.h, like the existing XLAT-related defines.
Signed-off-by: Izik Dubnov <izik@amazon.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>

# c2e4eb43 | 23-May-2021 | Anton Rybakov <a.rybakov@omp.ru>
core_mmu: fix phys_to_virt() to check length
The phys_to_virt() function without a length parameter cannot always find the correct mapping for a requested physical address. This is because a physical address can be mapped at the same time in different virtual regions with different lengths, so the first found region which contains the requested physical address may not have enough mapped data. This is fixed by adding a length parameter to the phys_to_virt() function. The length parameter can be set to 1 if the caller knows that the requested (pa + len) doesn't cross a mapping granule boundary.
The core_mmu_get_va() and io_pa_or_va() functions now take a length parameter too, as they are based on phys_to_virt() when the MMU is enabled.
Signed-off-by: Anton Rybakov <a.rybakov@omp.ru>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (stm32mp1-157C_DK2)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6dlsabreauto)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6dlsabresd)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6qpsabreauto)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6sllevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6ulevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6ullevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6ulzevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx7dsabresd)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx7ulpevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mmevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mnevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mqevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mpevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8qmmek)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8qxpmek)
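A usage sketch against the new prototype, void *phys_to_virt(paddr_t pa, enum teecore_memtypes m, size_t len):

    #include <io.h>
    #include <mm/core_memprot.h>

    static uint32_t read_secure_reg(paddr_t pa)
    {
            /* Ask for exactly the four bytes that will be accessed. */
            uint32_t *va = phys_to_virt(pa, MEM_AREA_IO_SEC, sizeof(uint32_t));

            if (!va)
                    return 0;

            return io_read32((vaddr_t)va);
    }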

# 6e733a8b | 18-Aug-2021 | Jelle Sels <jelle.sels@arm.com>
core: rename TA_VASPACE to TS_VASPACE
The TA_VASPACE memory will be used by both TAs and SPs. Rename it to TS_VASPACE so it is clearer that it can be used by both.
Signed-off-by: Jelle Sels <jelle.sels@arm.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>

# b715a420 | 09-Jul-2021 | Anton Rybakov <a.rybakov@omp.ru>
mm: fix mobj split by adding core_mmu_find_mapping_exclusive() helper
Fixes: ff01e2452169 ("mm: split mobj_tee_ram onto rw/rx parts")
This fixes mobj splitting into RX/RW parts. Currently, the split can be done incorrectly if the RX and RW regions are not mapped contiguously. The added helper core_mmu_find_mapping_exclusive() allows finding a unique mapping for a specified type and length independently of their order, so the RX/RW regions for mobjs are determined correctly.
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Anton Rybakov <a.rybakov@omp.ru>
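The helper's contract, as sketched from the description above:

    #include <mm/core_mmu.h>

    /*
     * Return the single mapping of 'type' covering at least 'len' bytes,
     * or NULL if no such mapping exists or if the type is mapped more
     * than once - which is what makes the RX/RW split order-independent.
     */
    void *core_mmu_find_mapping_exclusive(enum teecore_memtypes type,
                                          size_t len);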

# ff902aaf | 27-Jul-2021 | Jens Wiklander <jens.wiklander@linaro.org>
core: add new init and nexus memory types
Adds the new memory types MEM_AREA_INIT_RAM_RO, MEM_AREA_INIT_RAM_RX and MEM_AREA_NEX_RAM_RO to make sure that the memory types MEM_AREA_TEE_RAM_RX, MEM_AREA_TEE_RAM_RO and MEM_AREA_TEE_RAM_RW are used only once. This is needed to uniquely identify those memory areas in mobj_init() and mobj_phys_init().
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Anton Rybakov <a.rybakov@omp.ru>

# bc9618c0 | 17-May-2021 | Anton Rybakov <a.rybakov@omp.ru>
core_mmu: fix implicit behavior of core_mmu_add_mapping()
In core_mmu_add_mapping() the requested physical address is rounded up/down to the granule size (0x100000), which leads to establishing virtual mappings with overlapping physical counterparts. If two virtual mappings overlap due to such rounding, then a following phys_to_virt() can implicitly return a virtual address from an unexpected mapping. This patch fixes such behavior by returning the virtual address of the newly established mapping.
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Anton Rybakov <a.rybakov@omp.ru>
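The returned address then accounts for the rounding, along these lines:

    #include <mm/core_mmu.h>
    #include <util.h>

    /*
     * Sketch: the mapping itself is established on granule boundaries,
     * but the value handed back is the va of the pa that was asked for,
     * not the va of the rounded-down base.
     */
    static void *va_of_requested_pa(vaddr_t map_va, paddr_t requested_pa)
    {
            paddr_t base = ROUNDDOWN(requested_pa, CORE_MMU_PGDIR_SIZE);

            return (void *)(map_va + (requested_pa - base));
    }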

# a808f49e | 30-Apr-2021 | Jens Wiklander <jens.wiklander@linaro.org>
core: core_mmu.[ch]: use U() for unsigned constants
Updates with the U() macro as described in the recently updated coding guidelines.
Acked-by: Ruchika Gupta <ruchika.gupta@linaro.org>
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
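The macro itself, from <util.h>:

    /* Paste an unsigned suffix onto an integer constant. */
    #define U(v)    v##U

    /* e.g. a mask written as U(0xfff) rather than plain 0xfff, so the
     * constant is unsigned and shifts/comparisons behave as intended. */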

# 5bf80bb4 | 26-Mar-2021 | Sughosh Ganu <sughosh.ganu@linaro.org>
core: mm: Use nexus memory allocation APIs in carve_out_phys_mem()
During discovery of the non-secure memory, the memory attributes like address and size are stored as part of the core_mmu_phys_mem structure. Memory for this structure is allocated on the nexus heap area. Subsequently, when memory for this structure is reallocated, this is done using the plain realloc call. Use the nex_realloc API for the reallocation.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
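A usage sketch; the include path for the nex_*() allocator family is an assumption here:

    #include <malloc.h>         /* assumed home of the nex_*() prototypes */
    #include <mm/core_mmu.h>

    /* Sketch: grow a nexus-heap array with nex_realloc(), not realloc(),
     * so the memory stays on the nexus heap across reallocation. */
    static struct core_mmu_phys_mem *grow_mem_array(struct core_mmu_phys_mem *mem,
                                                    size_t nelems)
    {
            return nex_realloc(mem, sizeof(*mem) * nelems);
    }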

# 37b2459d | 16-Mar-2021 | Etienne Carriere <etienne.carriere@linaro.org>
core: arm: mobj: some mobjs may have no physical address
Change mobj_with_fobj_get_pa() to return TEE_ERROR_NOT_SUPPORTED when a virtual memory address has no assigned physical address. This can occur when the related memory is pageable and the pager is enabled. This is the only memory object for which the physical address range of the object is volatile, because it is under pager control.
With this change, mobj_get_pa() now can return TEE_ERROR_NOT_SUPPORTED for mapped addresses. Only check_pa_matches_va() must be updated, all other calls to mobj_get_pa() already handle the return code values they need to.
Update check_pa_matches_va() to not panic when vm_va2pa() returns this code: the virtual address can't be converted because the effective physical address of the memory is volatile when the target memory is paged and the pager is enabled.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>

# d6ad67f6 | 11-Mar-2021 | Etienne Carriere <etienne.carriere@linaro.org>
core: mm: change vm_pa2va() to return a virtual address
Change vm_pa2va() to return the target virtual address, or NULL if the physical address cannot be resolved, which can happen when the pager is enabled and the target physical page belongs to the pager page pool. This change makes the vm_pa2va() helper function simpler; its only caller doesn't differentiate error return codes anyway.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
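The caller side then reduces to a NULL check; a sketch (the exact prototype is an assumption):

    #include <stdbool.h>
    #include <types_ext.h>

    struct user_mode_ctx;

    /* Assumed shape of the helper after this change. */
    void *vm_pa2va(const struct user_mode_ctx *uctx, paddr_t pa, size_t len);

    static bool pa_is_resolvable(const struct user_mode_ctx *uctx, paddr_t pa)
    {
            /* NULL means the pa has no va, e.g. a pager-owned page. */
            return vm_pa2va(uctx, pa, 1) != NULL;
    }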