04e46975 | 16-Dec-2024 | Etienne Carriere <etienne.carriere@foss.st.com>

tree-wide: use ROUNDUP_DIV() where applicable

Use ROUNDUP_DIV() instead of ROUNDUP(..., size) / size where applicable.

Signed-off-by: Etienne Carriere <etienne.carriere@foss.st.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
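
As an illustration of the transform this commit describes (a sketch only: the macro bodies below use plain arithmetic, the real definitions in OP-TEE's util.h may be written differently, and len, nr_pages and SMALL_PAGE_SIZE just stand in for a typical call site):

    /* Illustrative definitions, not copied from util.h */
    #define ROUNDUP(x, y)     ((((x) + (y) - 1) / (y)) * (y))
    #define ROUNDUP_DIV(x, y) (((x) + (y) - 1) / (y))

    /* Before: round up to a multiple of the size, then divide back down */
    nr_pages = ROUNDUP(len, SMALL_PAGE_SIZE) / SMALL_PAGE_SIZE;

    /* After: one macro states the intent directly */
    nr_pages = ROUNDUP_DIV(len, SMALL_PAGE_SIZE);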

2f2f69df | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>

core: mm: replace MEM_AREA_TA_RAM

Replace MEM_AREA_TA_RAM with MEM_AREA_SEC_RAM_OVERALL.

All read/write secure memory is covered by MEM_AREA_SEC_RAM_OVERALL, sometimes using an aliased map. But secure read-only or executable core memory is not covered, as that would defeat the purpose of CFG_CORE_RWDATA_NOEXEC.

Since the partition TA memory isn't accessed via MEM_AREA_TA_RAM any longer, don't map it using the partition-specific map.

This is needed later, when OP-TEE core and physical TA memory can be unified.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

de19cacb | 08-May-2024 | Jens Wiklander <jens.wiklander@linaro.org>

core: replace tee_mm_sec_ddr with phys_mem functions

Replace the tee_mm_sec_ddr mm pool with the phys_mem functions. This doesn't change the behaviour.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

c79fb6d4 | 11-Apr-2023 | Jens Wiklander <jens.wiklander@linaro.org>

core: rename load_offset in struct core_mmu_config

Renames the field load_offset in struct core_mmu_config to the more accurate name map_offset.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>

a0e8ffe9 | 04-Apr-2022 | Jens Wiklander <jens.wiklander@linaro.org>

core: add support for MTE

Adds support for the Armv8.5-A Memory Tagging Extension with CFG_MEMTAG=y.

A memtag.h API is introduced to handle this extension. If CFG_MEMTAG=n the API doesn't add any overhead and the behaviour is unchanged. With CFG_MEMTAG=y a check is performed to see if the platform can support MTE and the API is dynamically configured accordingly. This means that it's safe to have CFG_MEMTAG=y even for platforms not supporting MTE. There will be some minimal overhead then, but likely not noticeable.

An entry is also added in the TEE_PROPSET_TEE_IMPLEMENTATION for a u32 property "org.trustedfirmware.optee.cpu.feat_memtag_implemented". The property is set to a non-zero value only if CFG_CORE_MEMTAG is configured and the underlying CPU supports FEAT_MTE.

This commit still only uses the default tag with the value 0 resulting in unchanged pointers when accessing memory. However, all plumbing is in place allowing for instance tagging of the heap in a later commit.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
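
A minimal sketch of the gating pattern described above, assuming hypothetical names (probe_memtag() and cpu_supports_feat_mte() are invented here; they are not the actual memtag.h API):

    /* Compile-time gate plus run-time CPU probe: with CFG_MEMTAG=n the
     * condition is constant-false and the code compiles away; with
     * CFG_MEMTAG=y the feature is only used if the CPU has FEAT_MTE. */
    static bool memtag_enabled;

    static TEE_Result probe_memtag(void)
    {
        if (IS_ENABLED(CFG_MEMTAG) && cpu_supports_feat_mte())
            memtag_enabled = true;
        return TEE_SUCCESS;
    }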

6105aa86 | 12-Apr-2022 | Jens Wiklander <jens.wiklander@linaro.org>

core: map TA memory using TEE_MATTR_MEM_TYPE_TAGGED

Maps TA memory using TEE_MATTR_MEM_TYPE_TAGGED, which results in tagged cached memory if the system has it enabled.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

39e8c200 | 01-Feb-2022 | Jerome Forissier <jerome@forissier.org>

core: tag ops structures with __relrodata_unpaged

Global structures currently tagged with __rodata_unpaged need to use __relrodata_unpaged instead because they contain pointers which are subject to relocation when CFG_CORE_ASLR=y. Doing so moves them out of .rodata, which will now stay unmodified even with ASLR turned on.

Signed-off-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Sumit Garg <sumit.garg@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
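
The kind of object affected, sketched below: a const ops table of function pointers. With CFG_CORE_ASLR=y those pointers are patched at boot, so the object cannot live in a truly read-only section (the variable and field names here are illustrative):

    /* Function pointers get relocated under ASLR, so this must be placed
     * in relro data rather than .rodata */
    static const struct fobj_ops ops_example
            __relrodata_unpaged("ops_example") = {
        .free = example_free,
        .load_page = example_load_page,
    };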

2380d700 | 27-Aug-2021 | Lionel Debieve <lionel.debieve@foss.st.com>

core: mmu: fix overflow with high address in tee_mm_pool_t

When TA_RAM is defined at the end of the address range, the high address falls outside the paddr_t limits, which results in a 0 address being used. The size must be used rather than the high address to avoid this overflow issue. Update the corresponding files due to the API modification.

Signed-off-by: Lionel Debieve <lionel.debieve@foss.st.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
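
The overflow is easy to see with a 32-bit paddr_t and a TA_RAM region ending exactly at the top of the address space (the addresses below are made up for illustration):

    paddr_t lo = 0xE0000000;    /* start of the region */
    size_t  sz = 0x20000000;    /* last 512 MiB of a 32-bit space */

    paddr_t hi = lo + sz;       /* 0x100000000 truncates to 0 */

Storing (lo, sz) instead of (lo, hi) sidesteps the wraparound: the end of the pool is only ever computed transiently, where the overflow can be checked for.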

be501eb1 | 05-Oct-2021 | Jorge Ramirez-Ortiz <jorge@foundries.io>

util: rename ALIGNMENT_IS_OK to IS_ALIGNED_WITH_TYPE

Implement the renamed macro using the IS_ALIGNED definition.

Signed-off-by: Jorge Ramirez-Ortiz <jorge@foundries.io>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
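
The relationship described above, sketched (close to, but not necessarily identical to, the definitions in util.h):

    /* Valid for power-of-two alignments */
    #define IS_ALIGNED(x, a)           (((x) & ((a) - 1)) == 0)

    /* The renamed macro checks a pointer against its type's alignment */
    #define IS_ALIGNED_WITH_TYPE(x, t) \
            IS_ALIGNED((uintptr_t)(x), __alignof__(t))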

c2e4eb43 | 23-May-2021 | Anton Rybakov <a.rybakov@omp.ru>

core_mmu: fix phys_to_virt() to check length

The phys_to_virt() function without a length parameter doesn't always have the ability to find the correct mapping for the requested physical address. This is because a physical address can be mapped at the same time in different virtual regions with different lengths, so the first found region which contains the requested physical address possibly doesn't have enough mapped data. This is fixed by adding a length parameter to the phys_to_virt() function. The length parameter can be set to 1 if the caller knows that the requested (pa + len) doesn't cross a mapping granule boundary.

The core_mmu_get_va() and io_pa_or_va() functions now take a length parameter too, as they are based on phys_to_virt() when the MMU is enabled.

Signed-off-by: Anton Rybakov <a.rybakov@omp.ru>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (stm32mp1-157C_DK2)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6dlsabreauto)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6dlsabresd)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6qpsabreauto)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6sllevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6ulevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6ullevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6ulzevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx7dsabresd)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx7ulpevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mmevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mnevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mqevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mpevk)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8qmmek)
Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8qxpmek)
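
The API change at a typical call site, sketched (the memory area and variable names are illustrative):

    /* Before: the first region containing pa wins, however short it is */
    va = (vaddr_t)phys_to_virt(pa, MEM_AREA_IO_SEC);

    /* After: the mapping must cover [pa, pa + len); len can be 1 when
     * pa + len is known not to cross a mapping granule boundary */
    va = (vaddr_t)phys_to_virt(pa, MEM_AREA_IO_SEC, len);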

00361c18 | 12-May-2021 | Jens Wiklander <jens.wiklander@linaro.org>

core: make __rodata_unpaged() symbols __weak

Makes the __rodata_unpaged tagged symbols __weak and non-static in order to be overridden in core/arch/arm/kernel/link_dummies_paged.c. This makes sure that these symbols don't bring in further symbols in the unpaged section.

Reviewed-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

27c64925 | 12-May-2021 | Jens Wiklander <jens.wiklander@linaro.org>

core: use separate sections for each __rodata_unpaged variable

Adds a mandatory argument to the macro __rodata_unpaged() to take the name of the variable to put in the unpaged rodata section. This will result in separate sections for each such variable and make it easier to debug the pruning of the dependency tree for unpaged sections.

Reviewed-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
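
What the new mandatory argument looks like at a use site (the variable name and the resulting section name are invented for illustration):

    /* Before: all such variables shared one unpaged rodata section */
    static const uint32_t magic __rodata_unpaged = 0x4f505445;

    /* After: "magic" lands in its own section (something like
     * .rodata.__unpaged.magic), so pruning decisions are visible per
     * variable */
    static const uint32_t magic __rodata_unpaged("magic") = 0x4f505445;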

d5ad7ccf | 10-Jan-2021 | Jens Wiklander <jens.wiklander@linaro.org>

core: rename struct tee_pager_area to vm_paged_region

Renames struct tee_pager_area to struct vm_paged_region and moves it next to the declaration of struct vm_region. Since areas are now called paged regions, or just regions, functions, variables and struct members are also renamed accordingly.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

b757e307 | 19-Mar-2021 | Jens Wiklander <jens.wiklander@linaro.org>

core: introduce CFG_CORE_PAGE_TAG_AND_IV

Introduces CFG_CORE_PAGE_TAG_AND_IV, which defaults to enabled if TA paging is enabled. It can be used to disable tag and IV paging for paged read-write pages.

Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

aad1cf6b | 25-Jan-2021 | Jens Wiklander <jens.wiklander@linaro.org>

core: fobj_rw_paged_alloc() store tags and IVs in paged area

fobj_rw_paged_alloc() is updated to store tags and IVs in a designated paged area instead of storing them in the heap. This avoids large heap allocations which also would suffer from a fragmented heap.

The previous ops_rw_paged and struct fobj_rwp are now replaced by ops_rwp_unpaged_iv and struct fobj_rwp_unpaged_iv respectively. These are now only used to support the area where other tags and IVs are stored.

A new ops_rwp_paged_iv and struct fobj_rwp_paged_iv are added for using the designated paged area.

A fobj based on the ops_rwp_unpaged_iv ops is allocated and registered with the pager via a callback registered with driver_init_late().

This effectively enables paging of IV and tags for pages mapping a TA.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

cfde90a6 | 05-Jun-2020 | Jens Wiklander <jens.wiklander@linaro.org>

core: call fobj_generate_authenc_key() via initcalls

Calls fobj_generate_authenc_key() via initcalls instead of a direct call in init_teecore().

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
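
The initcall pattern referred to, sketched (the wrapper function and the choice of initcall level are assumptions, not the exact code):

    /* Registered for the boot sequence instead of being called directly
     * from init_teecore() */
    static TEE_Result init_fobj_key(void)
    {
        return fobj_generate_authenc_key();
    }
    driver_init(init_fobj_key);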

65401337 | 07-Jun-2020 | Jens Wiklander <jens.wiklander@linaro.org>

core: remove generic_ from generic_boot

Now that the CFG_GENERIC_BOOT configuration flag has been removed, also remove the "generic_" prefix from the names of the related files and within them.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

d49bc745 | 08-Jun-2020 | Jens Wiklander <jens.wiklander@linaro.org>

core: fix ops_sec_mem in core/mm/fobj.c

Adds missing const attribute to ops_sec_mem in core/mm/fobj.c.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

3639b55f | 04-May-2020 | Jerome Forissier <jerome@forissier.org>

core: rename KEEP_INIT() and KEEP_PAGER()

The KEEP_INIT() and KEEP_PAGER() macros are quite often used in C files immediately after the definition of a function or a structure without a blank line in between. This style mimics what the Linux kernel does for a similar use case: EXPORT_SYMBOL().

Unfortunately, the checkpatch.pl tool expects a blank line after structure and function definitions, except for a few special cases such as EXPORT_SYMBOL(). As a result we often get unwanted warnings when we use KEEP_INIT() and KEEP_PAGER(). Among the exceptions are all words starting with DECLARE_ or DEFINE_, so by renaming our macros we could avoid the checkpatch warnings.

This commit renames KEEP_INIT() and KEEP_PAGER() to DECLARE_KEEP_INIT() and DECLARE_KEEP_PAGER(), respectively. The assembler macros are also renamed for consistency. No functional change is expected.

Signed-off-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
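
The rename at a use site (the function below is invented for illustration):

    static void handle_fault(struct abort_info *ai) { /* ... */ }

    /* Before: checkpatch warned about the missing blank line */
    KEEP_PAGER(handle_fault);

    /* After: DECLARE_* is on checkpatch's exception list, like
     * EXPORT_SYMBOL() */
    DECLARE_KEEP_PAGER(handle_fault);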

7395539f | 30-Mar-2020 | Jens Wiklander <jens.wiklander@linaro.org>

core: fobj.c: use crypto_aes_expand_enc_key()

fobj_generate_authenc_key() now uses crypto_aes_expand_enc_key() to prepare the key used for paging.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

c6744caa | 22-Nov-2019 | Jens Wiklander <jens.wiklander@linaro.org>

core: add fobj_ro_reloc_paged_alloc()

Adds a new type of fobj, struct fobj_ro_reloc_paged, which is created with fobj_ro_reloc_paged_alloc(). It's like struct fobj_rop but with support for relocation too.

Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

b83c0d5f | 07-Feb-2019 | Jens Wiklander <jens.wiklander@linaro.org>

core: pager: record fobj in pmem instead of area

Records the fobj backing the physical page represented by a pmem instead of the struct tee_pager_area which was used prior to this patch. Each fobj can be used by several unrelated areas in the end, allowing real shared memory between multiple user contexts.

Reference counting for the page tables sees increased activity since entries which are hidden/unhidden also decrease/increase the count. This is because there's no difference between unhiding a pmem and just mapping it again in another page table.

The memory sharing is not fully taken advantage of in this patch.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

fbcaa411 | 07-Feb-2019 | Jens Wiklander <jens.wiklander@linaro.org>

core: add fobj_sec_mem_alloc()

Adds fobj_sec_mem_alloc(), which allocates physical memory from tee_mm_sec_ddr, to be used as TA memory.

Support is added in the MOBJ of the with_fobj type to handle this new kind of fobj.

A fobj_ta_mem_alloc() macro is added to use fobj_rw_paged_alloc() if paging of user TAs is enabled, or else fobj_sec_mem_alloc().

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
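
A sketch of the selection described above (CFG_PAGED_USER_TA is the config that enables paging of user TAs; the exact macro body may differ):

    /* Pick the fobj flavour for TA memory at compile time */
    #ifdef CFG_PAGED_USER_TA
    #define fobj_ta_mem_alloc(num_pages) fobj_rw_paged_alloc(num_pages)
    #else
    #define fobj_ta_mem_alloc(num_pages) fobj_sec_mem_alloc(num_pages)
    #endif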

ee546289 | 07-Feb-2019 | Jens Wiklander <jens.wiklander@linaro.org>

core: add a file object interface

Adds a file object interface which is an abstraction of the storage part in a struct tee_pager_area. This adds no new features, just moves some code from tee_pager.c into fobj.c.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>