| d8158fea | 14-Feb-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: remove references to OPTEE_SMC_SHM_CACHED
Removes references to OPTEE_SMC_SHM_CACHED in architecture-independent code; the references are replaced by TEE_MATTR_CACHE_CACHED, which is more accurate.
Acked-by: Marouene Boubakri <marouene.boubakri@nxp.com> Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| db01e12d | 14-Feb-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: vm.c: don't include sm/optee_smc.h
sm/optee_smc.h isn't needed in this file any longer so remove the include statement.
Acked-by: Marouene Boubakri <marouene.boubakri@nxp.com> Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| fc5e0894 | 31-Dec-2021 |
Marouene Boubakri <marouene.boubakri@nxp.com> |
core: mm: move tee_mm.c to core/mm
Move tee_mm.c from core/arch/arm/mm to core/mm to reuse it with new architectures.
Signed-off-by: Marouene Boubakri <marouene.boubakri@nxp.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| d8ba4bae | 08-Feb-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: split core/arch/arm/mm/core_mmu.c
Splits core/arch/arm/mm/core_mmu.c into one generic and one architecture specific file.
Acked-by: Jerome Forissier <jerome@forissier.org> Acked-by: Marouene Boubakri <marouene.boubakri@nxp.com> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 8f2c0249 | 28-Dec-2021 |
Marouene Boubakri <marouene.boubakri@nxp.com> |
core: mm: vm.c: remove arm.h and include config.h
Removes arm.h from the include list as it is not used, and explicitly adds config.h for the IS_ENABLED() macro.
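For illustration, a minimal sketch of the kind of code that relies on config.h; the CFG_ option used here is only an example, not necessarily one referenced in vm.c:

    #include <config.h>        /* provides IS_ENABLED() */
    #include <stdbool.h>

    static bool pager_enabled(void)
    {
            /* IS_ENABLED(CFG_FOO) evaluates to 1 only when CFG_FOO is set to y */
            return IS_ENABLED(CFG_WITH_PAGER);
    }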
Signed-off-by: Marouene Boubakri <marouene.boubakri@nxp.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jerome Forissier <jerome@forissier.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 9c4aaf67 | 11-Jan-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: make mobj_get_va() more secure
Adds a length parameter to allow mobj_get_va() to check that the entire va range requested is available.
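As an illustration, assuming the extended prototype is void *mobj_get_va(struct mobj *mobj, size_t offset, size_t len), a caller now passes the size of the intended access so the whole range is validated rather than just its first byte:

    /* Sketch only: offs and sz come from the caller's request */
    void *va = mobj_get_va(mobj, offs, sz);

    if (!va)        /* NULL if [offs, offs + sz) isn't fully covered by the mobj */
            return TEE_ERROR_BAD_PARAMETERS;
    memcpy(va, src, sz);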
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 68c6ad9a | 09-Sep-2021 |
Jelle Sels <jelle.sels@arm.com> |
core: Add vm_get_mobj
Return the mobj of a va.
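A rough usage sketch follows; the prototype and output parameters shown here are assumptions for illustration, not taken from the patch:

    /* Assumed signature: look up the mobj backing a user VA in this context */
    struct mobj *m = vm_get_mobj(uctx, va, &len, &prot, &offs);

    if (!m)
            return TEE_ERROR_ACCESS_DENIED;   /* va is not mapped in this context */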
Signed-off-by: Jelle Sels <jelle.sels@arm.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| c75641dd | 02-Nov-2021 |
Ruchika Gupta <ruchika.gupta@linaro.org> |
core: mm : Enable GP bit for kernel mapping for userspace
Mark the kernel pages mapped in userspace as guarded.
Signed-off-by: Ruchika Gupta <ruchika.gupta@linaro.org> Reviewed-by: Jerome Forissier <jerome@forissier.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 2380d700 | 27-Aug-2021 |
Lionel Debieve <lionel.debieve@foss.st.com> |
core: mmu: fix overflow with high address in tee_mm_pool_t
When TA_RAM is defined at the end of the address range, the high address falls outside the paddr_t limits, which ends up with a 0 address being used. The size must be used rather than the high address to avoid this overflow issue. Update the corresponding files for the API modification.
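The arithmetic behind the overflow, with illustrative numbers and a 32-bit paddr_t assumed:

    paddr_t lo = 0xFFE00000;    /* TA_RAM placed at the very top of the range */
    paddr_t sz = 0x00200000;
    paddr_t hi = lo + sz;       /* 0x100000000 wraps to 0 in a 32-bit paddr_t */

Passing the size to the pool initialization instead of the wrapped high address (roughly tee_mm_init(pool, lo, sz, ...) rather than tee_mm_init(pool, lo, hi, ...)) sidesteps the truncation.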
Signed-off-by: Lionel Debieve <lionel.debieve@foss.st.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| be501eb1 | 05-Oct-2021 |
Jorge Ramirez-Ortiz <jorge@foundries.io> |
util: rename ALIGNMENT_IS_OK to IS_ALIGNED_WITH_TYPE
Implement the renamed macro using the IS_ALIGNED definition.
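A sketch of the relationship between the two macros, shown only for illustration rather than as a verbatim copy of the header:

    #define IS_ALIGNED(x, a)              (((x) & ((a) - 1)) == 0)
    #define IS_ALIGNED_WITH_TYPE(x, type) IS_ALIGNED((uintptr_t)(x), __alignof__(type))

    /* Example: reject a buffer pointer that isn't suitably aligned for uint32_t */
    if (!IS_ALIGNED_WITH_TYPE(buf, uint32_t))
            return TEE_ERROR_BAD_PARAMETERS;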
Signed-off-by: Jorge Ramirez-Ortiz <jorge@foundries.io> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| c2e4eb43 | 23-May-2021 |
Anton Rybakov <a.rybakov@omp.ru> |
core_mmu: fix phys_to_virt() to check length
Without a length parameter, phys_to_virt() cannot always find the correct mapping for a requested physical address. This is because a physical address can be mapped at the same time in different virtual regions of different lengths, so the first region found that contains the requested physical address may not have enough mapped data. This is fixed by adding a length parameter to phys_to_virt(). The length parameter can be set to 1 if the caller knows that the requested (pa + len) doesn't cross a mapping granule boundary.
The core_mmu_get_va() and io_pa_or_va() functions now take a length parameter too, as they are based on phys_to_virt() when the MMU is enabled.
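A hedged sketch of the updated call, assuming the prototype is phys_to_virt(pa, type, len); the physical address and size names used here are illustrative:

    /* The lookup must cover the whole register block, not only its first byte */
    vaddr_t va = (vaddr_t)phys_to_virt(console_pa, MEM_AREA_IO_SEC, CONSOLE_REG_SIZE);

    if (!va)
            panic();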
Signed-off-by: Anton Rybakov <a.rybakov@omp.ru> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (stm32mp1-157C_DK2) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6dlsabreauto) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6dlsabresd) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6qpsabreauto) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6sllevk) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6ulevk) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6ullevk) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx6ulzevk) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx7dsabresd) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx7ulpevk) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mmevk) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mnevk) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mqevk) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8mpevk) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8qmmek) Tested-by: Clement Faure <clement.faure@nxp.com> (imx-mx8qxpmek)
|
| b715a420 | 09-Jul-2021 |
Anton Rybakov <a.rybakov@omp.ru> |
mm: fix mobj split by adding core_mmu_find_mapping_exclusive() helper
Fixes: ff01e2452169 ("mm: split mobj_tee_ram onto rw/rx parts")
This fixes mobj splitting into RX/RW parts. Previously the split could be done incorrectly if the RX and RW regions were not mapped contiguously. The added helper core_mmu_find_mapping_exclusive() finds a unique mapping for the specified type and length independently of the regions' order, so the RX/RW regions for the mobjs are now determined correctly.
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Anton Rybakov <a.rybakov@omp.ru>
|
| fbbf8944 | 13-Jul-2021 |
ZheTing <ztliu2652.cs@gmail.com> |
core: mm: remove redundant mobj_put() in vm_map_pad()
When mobj_get_cattr() fails, vm_map_pad() doesn't need to call mobj_put(): that call is expected to balance the mobj_get() which is made only after mobj_get_cattr() succeeds. The issue was introduced in release 3.8.0 with struct mobj reference counting.
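A minimal sketch of the intended ordering (variable names illustrative); the point is that mobj_put() only pairs with a mobj_get() that has actually been made:

    res = mobj_get_cattr(mobj, &cattr);
    if (res)
            return res;     /* no mobj_get() yet, so no mobj_put() on this path */

    mobj_get(mobj);         /* reference taken only after mobj_get_cattr() succeeds */
    /* ... any later error path must balance it with mobj_put(mobj) ... */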
Signed-off-by: Gavin Liu <Gavin.Liu@mediatek.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| ff01e245 | 24-Jun-2021 |
Anton Rybakov <a.rybakov@omp.ru> |
mm: split mobj_tee_ram onto rw/rx parts
Currently the mobj_tee_ram memory abstraction contains both the TEE_RAM_RX and TEE_RAM_RW regions joined together. This patch splits it into mobj_tee_ram_rx and mobj_tee_ram_rw to manage the RX/RW memory objects separately.
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Anton Rybakov <a.rybakov@omp.ru>
|
| 00361c18 | 12-May-2021 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: make __rodata_unpaged() symbols __weak
Makes the __rodata_unpaged tagged symbols __weak and non-static in order to be overridden in core/arch/arm/kernel/link_dummies_paged.c. This makes sure that these symbols don't bring in further symbols in the unpaged section.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 27c64925 | 12-May-2021 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: use separate sections for each __rodata_unpaged variable
Adds a mandatory argument to the macro __rodata_unpaged() to take the name of the variable to put in the unpaged rodata section. This will result in separate sections for each such variable and make it easier to debug the pruning of the dependency tree for unpaged sections.
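A usage sketch under the new form of the macro; the variable shown here is hypothetical:

    /* Each tagged variable now ends up in its own unpaged rodata section */
    static const uint8_t crc_table[64] __rodata_unpaged("crc_table") = { 0 };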
Reviewed-by: Jerome Forissier <jerome@forissier.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| afb4ad9f | 18-May-2021 |
Jerome Forissier <jerome@forissier.org> |
core: pager: fix compiler warning with Clang
Function rwp_unpaged_iv_free() is reduced to a call to panic() when CFG_WITH_PAGER=y and CFG_CORE_PAGE_TAG_AND_IV=y. In this case, Clang 12 suggests a noreturn attribute:
$ make -s CFG_WITH_PAGER=y COMPILER=clang
core/mm/fobj.c:322:1: warning: function 'rwp_unpaged_iv_free' could be declared with attribute 'noreturn' [-Wmissing-noreturn]
{
^
1 warning generated.
However the attribute cannot be applied since it would be inappropriate when CFG_CORE_PAGE_TAG_AND_IV != y. Therefore, disable the warning for the file core/mm/fobj.c when the problematic configuration is enabled.
Signed-off-by: Jerome Forissier <jerome@forissier.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| d5ad7ccf | 10-Jan-2021 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: rename struct tee_pager_area to vm_paged_region
Renames struct tee_pager_area to struct vm_paged_region and moves it next to the declaration of struct vm_region. Since areas are now called paged regions, or just regions, functions, variables and struct members are also renamed accordingly.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 185b4595 | 02-Apr-2021 |
Marouene Boubakri <marouene.boubakri@nxp.com> |
core: mm: move mobj.c to core/mm
mobj is abstract and it is used by many sources which are not architecture-specific such as core/kernel, core/pta and core/tee. Therefore, move mobj.c to core/mm and its corresponding header file mobj.h to core/include/mm.
Signed-off-by: Marouene Boubakri <marouene.boubakri@nxp.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 42bb9a86 | 26-Mar-2021 |
Jerome Forissier <jerome@forissier.org> |
core: mm: fix infinite loop in vm_pa2va()
Commit d6ad67f674e5 ("core: mm: change vm_pa2va() to return a virtual address") moved the call to mobj_get_pa() up in a 'for' loop and added a 'continue' statement. Moving it was wrong because at this point 'size' is not yet updated which causes an infinite loop when the PA is not found.
Move the call back to its original location but keep the 'continue' which looks correct.
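A reduced, hypothetical sketch of the pattern (not the actual loop body): the cursor advance depends on 'size', so 'size' has to be refreshed before any 'continue' can be taken:

    for (va = begin; va < end; va += size) {
            size = region_size_at(va);      /* hypothetical helper; must run before 'continue' */
            if (!pa_matches(va, pa))        /* hypothetical check standing in for mobj_get_pa() */
                    continue;               /* safe: 'size' is already up to date */
            return (void *)va;
    }
    return NULL;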
Fixes: d6ad67f674e5 ("core: mm: change vm_pa2va() to return a virtual address") Signed-off-by: Jerome Forissier <jerome@forissier.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| d6ad67f6 | 11-Mar-2021 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: mm: change vm_pa2va() to return a virtual address
Change vm_pa2va() to return the target virtual address, or NULL if the physical address cannot be resolved, which can happen when the pager is enabled and the target physical page belongs to the pager page pool. This change makes the vm_pa2va() helper function simpler, and its only caller doesn't differentiate error return codes anyway.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome@forissier.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b757e307 | 19-Mar-2021 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: introduce CFG_CORE_PAGE_TAG_AND_IV
Introduces CFG_CORE_PAGE_TAG_AND_IV, which defaults to enabled if TA paging is enabled. It can be used to disable tag and IV paging for paged read-write pages.
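For example, a build that keeps TA paging but opts out of tag/IV paging could be configured roughly like this (platform-specific flags omitted):

    $ make CFG_WITH_PAGER=y CFG_PAGED_USER_TA=y CFG_CORE_PAGE_TAG_AND_IV=n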
Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| aad1cf6b | 25-Jan-2021 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fobj_rw_paged_alloc() store tags and IVs in paged area
fobj_rw_paged_alloc() is updated to store tags and IVs in a designated paged area instead of storing them in the heap. This avoids large heap allocations, which would also suffer from a fragmented heap.
The previous ops_rw_paged and struct fobj_rwp are now replaced by ops_rwp_unpaged_iv and struct fobj_rwp_unpaged_iv respectively. These are now only used to support the area where other tags and IVs are stored.
A new ops_rwp_paged_iv and struct fobj_rwp_paged_iv are added for using the designated paged area.
A fobj based on the ops_rwp_unpaged_iv ops is allocated and registered with the pager via a callback registered with driver_init_late().
This effectively enables paging of IV and tags for pages mapping a TA.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| a5a72f28 | 05-Feb-2021 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: clear user mappings from tables when removed
When a user mapping is removed, clear it immediately from active or cached translation tables.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 89c9728d | 19-Oct-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: replace tee_mmu prefix with vm
Replaces the tee_mmu prefix with vm. tee_mmu.h is renamed to vm.h and core/arch/arm/mm/tee_mmu.c is moved to core/mm/vm.c. Public functions belonging to these files are renamed with a vm prefix.
Introduces: vm_map_param(), vm_clean_param(), vm_buf_is_inside_private(), vm_buf_intersects_private(), vm_buf_to_mboj_offs(), vm_buf_is_inside_um_private(), vm_buf_intersects_um_private(), vm_add_rwmem(), vm_rem_rwmem(), vm_va2pa(), vm_pa2va(), vm_check_access_rights(), vm_set_ctx() replacing their tee_mmu_*() counterpart.
Acked-by: Joakim Bech <joakim.bech@linaro.org> Acked-by: Jerome Forissier <jerome@forissier.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|