| bfdeae23 | 23-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: pgt: support preallocated translation tables for S-EL0
With CFG_CORE_PREALLOC_EL0_TBLS=y translation tables are allocated for a user space context at the time the mapping is added as a struct vm_region. The translation tables will be kept available for the S-EL0 context as long as the mappings are unchanged.
Secure Partitions (SPs) can depend on translation tables always being available and avoid having to wait for translation tables.
Memory for the translation tables is allocated from the same memory as used for TAs and SPs. The number of available translation tables is limited by the amount of TA/SP memory available.
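The mechanism can be illustrated with a minimal C sketch; the helpers num_tables_needed() and pgt_cache_alloc_tables() and the exact field layout are hypothetical, not the actual OP-TEE API:

    /*
     * Hypothetical sketch: with CFG_CORE_PREALLOC_EL0_TBLS=y, reserve the
     * translation tables a region needs already when the region is added,
     * so the S-EL0 context never waits for tables at run time.
     */
    static TEE_Result add_region(struct user_mode_ctx *uctx,
                                 struct vm_region *reg)
    {
        /* Last-level tables needed to cover [va, va + size) */
        size_t count = num_tables_needed(reg->va, reg->size);

        /* Taken from the same pool as TA/SP memory, so it can fail */
        if (!pgt_cache_alloc_tables(&uctx->pgt_cache, count))
            return TEE_ERROR_OUT_OF_MEMORY;

        TAILQ_INSERT_TAIL(&uctx->vm_info.regions, reg, link);
        return TEE_SUCCESS;
    }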
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| d6e33310 | 22-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: pgt: rename to pgt_put_all() and pgt_get_all()
The two functions pgt_free() and pgt_alloc() have names that don't match well what they do, so rename them.
pgt_free() to pgt_put_all(): This matches better how page tables are managed, since pgt_put_all() doesn't free the tables; they are just put on a cache list from which they can later be freed or re-allocated.
pgt_alloc() to pgt_get_all(): pgt_get_all() may not actually allocate a new table if one can be found on the cache list.
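The semantics behind the new names can be sketched as follows; the free-list name and link field are assumptions:

    /* pgt_put_all(): nothing is freed, tables go back on a cache list */
    static void put_all_sketch(struct pgt_cache *cache)
    {
        struct pgt *p = NULL;

        while ((p = SLIST_FIRST(cache))) {
            SLIST_REMOVE_HEAD(cache, link);
            SLIST_INSERT_HEAD(&pgt_free_list, p, link);
        }
    }

    /* pgt_get_all(): reuse cached tables before allocating new ones */
    static struct pgt *get_one_sketch(void)
    {
        struct pgt *p = SLIST_FIRST(&pgt_free_list);

        if (p)
            SLIST_REMOVE_HEAD(&pgt_free_list, link);
        return p;
    }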
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b1df82f1 | 08-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: use set_um_region() to update translation tables
Adds an internal function in core/mm/vm.c which is called when translation tables need to be updated.
With a cache for recently used translation tables, core_mmu_populate_user_map() will only update translation tables which are new and not yet populated.
Each user space context has a linked list of struct vm_region describing the logical memory map. To ensure that this logical memory map is kept in sync with the translation tables in use, set_um_region() must be used to copy the content of a struct vm_region into translation tables as needed.
If the current context is updated then the pgts currently in use are updated. However, if the context isn't current then the cached tables are updated instead. When cached tables are updated, some of the needed translation tables may actually be missing. This is ignored at this stage and later taken care of by core_mmu_populate_user_map(), since those tables will be new and have the "populated" entry set to false. Once core_mmu_populate_user_map() has initialized the tables, "populated" is set to true for each table.
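A hedged sketch of the "populated" handshake (the field and the copy helper are assumptions):

    /* Only fill in tables that set_um_region() could not update earlier */
    static void populate_sketch(struct pgt_cache *cache,
                                struct user_mode_ctx *uctx)
    {
        struct pgt *p = NULL;

        SLIST_FOREACH(p, cache, link) {
            if (p->populated)
                continue; /* already kept in sync by set_um_region() */
            copy_regions_into_table(uctx, p); /* hypothetical helper */
            p->populated = true;
        }
    }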
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b7acc3c9 | 08-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: call pgt_flush_ctx() from vm_info_final()
Moves the call to pgt_flush_ctx() into vm_info_final() from destroy_context() and tee_ta_init_user_ta_session().
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| f5154eb3 | 08-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: call tee_pager_rem_um_regions() from vm_info_final()
Moves the call to tee_pager_rem_um_regions() into vm_info_final() from free_utc() and stmm_ctx_destroy().
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| e17e7a56 | 07-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: move pgt_cache to struct user_mode_ctx
Moves pgt_cache from struct thread_specific_data to struct user_mode_ctx.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 60d3fc69 | 08-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: initialize struct user_mode_ctx with vm_info_init()
Broadens the scope of vm_info_init() to initialize the entire struct user_mode_ctx.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 237029d3 | 06-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: remove save_ctx parameter from pgt_free()
Prior to this patch pgt_free() took a save_ctx parameter which was only used if paging of TAs was enabled, and in that case the parameter was always true. So simplify the logic by removing this parameter and, where it was used internally, always behave as if save_ctx was true. This means that pgts used for paging will always first be pushed to the cache list to later be reclaimed by other means.
This patch does not change the de facto behaviour.
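In terms of the function signature the change amounts to this sketch:

    /* Before: save_ctx only had an effect with paging of TAs enabled */
    void pgt_free(struct pgt_cache *pgt_cache, bool save_ctx);

    /* After: internally always behaves as if save_ctx was true */
    void pgt_free(struct pgt_cache *pgt_cache);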
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7d716171 | 25-Jul-2022 |
Ming-Jen Chang <ming-jen.chang@mediatek.com> |
core: Avoid tee_ram_va equals 0 when CFG_CORE_ASLR is set
OP-TEE OS uses 0 as an invalid VA, and tee_ram_va might equal 0 when CFG_CORE_ASLR=y. If tee_ram_va is 0, return directly to avoid it.
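A minimal sketch of the guard; the surrounding function and variable scope are assumed:

    /* 0 doubles as the invalid-VA marker, so never accept it */
    if (!tee_ram_va)
        return;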
Signed-off-by: Ming-Jen Chang <ming-jen.chang@mediatek.com> Signed-off-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b212ad1d | 30-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: pager: fix get_linear_map_end()
With paging enabled there is an unpaged portion of OP-TEE which ends at the address returned by get_linear_map_end(). Without ASLR enabled this is both a virtual and a physical address. However, with ASLR enabled it's important to keep these addresses apart, so add get_linear_map_end_va() and get_linear_map_end_pa() and use the right function in phys_to_virt_tee_ram() and is_unpaged().
This fixes occasional errors like:
E/TC:0 0 Panic 'can't find mmu tables' at core/arch/arm/mm/tee_pager.c:549 <tee_pager_early_init>
E/TC:0 0 TEE load address @ 0x50b9000
E/TC:0 0 Call stack:
E/TC:0 0  0x050bf144
with paging and ASLR enabled.
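A hedged sketch of the split; the symbol names and offset handling are hypothetical, but with ASLR the VA differs from the PA by the randomized load offset:

    static vaddr_t get_linear_map_end_va(void)
    {
        /* end of the unpaged, linearly mapped part, as a VA */
        return linear_map_end_va; /* hypothetical variable */
    }

    static paddr_t get_linear_map_end_pa(void)
    {
        /* with ASLR, VA = PA + load offset, so subtract it back */
        return get_linear_map_end_va() - load_offset;
    }

Presumably phys_to_virt_tee_ram() compares against the PA end while is_unpaged() compares against the VA end.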
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (vexpress-qemu_armv8a) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| eb108a04 | 30-May-2022 |
Olivier Masse <olivier.masse@nxp.com> |
core: Define SDP in embedded DTB
Allow definition of the Secure Data Path memory region in an embedded DTB. There is no memory intersection checking for such an SDP area, as the embedded DTB is not available during init of the TEE core memory mapping.
Comply with reserved memory bindings Linux documentation file: Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
Documented in: Documentation/devicetree/bindings/reserved-memory/linaro,secure-heap.yaml
Signed-off-by: Olivier Masse <olivier.masse@nxp.com> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
|
| 505c8fc4 | 07-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: handle large holes in S-EL0 map
Prior to this patch it was assumed that the memory map of a user mode context had no holes or only very small holes. This leads to higher pressure on the translation tables than necessary.
So fix this by not allocating translation tables for holes in the memory map of a user mode context where possible.
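The saving can be sketched like this; CORE_MMU_PGDIR_SIZE, ROUNDDOWN() and ROUNDUP() exist in OP-TEE, while the loop itself is a hypothetical simplification:

    /* Count tables per region rather than per contiguous VA span, so a
     * large hole between two regions costs no translation tables. */
    static size_t tables_needed(struct vm_info *vi)
    {
        struct vm_region *r = NULL;
        vaddr_t covered = 0; /* end of the span already counted */
        size_t n = 0;

        TAILQ_FOREACH(r, &vi->regions, link) {
            vaddr_t b = ROUNDDOWN(r->va, CORE_MMU_PGDIR_SIZE);
            vaddr_t e = ROUNDUP(r->va + r->size, CORE_MMU_PGDIR_SIZE);

            if (b < covered)
                b = covered; /* table shared with the previous region */
            n += (e - b) / CORE_MMU_PGDIR_SIZE;
            covered = e;
        }
        return n;
    }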
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 32a13944 | 10-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix error handling in vm_remap()
When remap tries to change the virtual address of a mapping and fails in the middle of the process, it has to undo the changes in order to restore the previous state before returning an error.
This fix addresses a corner case where the number of needed translation tables for the new map has been increased and hits a limit, so the remap request must be denied.
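The undo pattern, as a hedged sketch with hypothetical helpers:

    /* Move the mapping; on failure restore the old VA before erroring out */
    res = map_regions_at(new_va, regs);
    if (res) {
        /* e.g. the new layout needs more translation tables than allowed */
        unmap_regions_at(new_va, regs);
        if (map_regions_at(old_va, regs))
            panic(); /* restoring a known-good map must not fail */
        return res;
    }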
Fixes: 7d2b71d6d30f ("core: vm_set_prot() and friends works across VM regions") Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 66257dc2 | 08-Jun-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: deprecate vm_add_rwmem() and vm_rem_rwmem()
Deprecates vm_add_rwmem() and vm_rem_rwmem(); they should only be called from mobj_seccpy_shm_alloc() and mobj_seccpy_shm_free().
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| a0e8ffe9 | 04-Apr-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add support for MTE
Adds support for the Armv8.5-A Memory Tagging Extension with CFG_MEMTAG=y.
A memtag.h API is introduced to handle this extension. If CFG_MEMTAG=n the API doesn't add any overhead and the behaviour is unchanged. With CFG_MEMTAG=y a check is performed to see if the platform can support MTE and the API is dynamically configured accordingly. This means that it's safe to have CFG_MEMTAG=y even for platforms not supporting MTE. There will be some minimal overhead then, but likely not noticeable.
An entry is also added in the TEE_PROPSET_TEE_IMPLEMENTATION for a u32 property "org.trustedfirmware.optee.cpu.feat_memtag_implemented". The property is set to a non-zero value only if CFG_CORE_MEMTAG is configured and the underlying CPU supports FEAT_MTE.
This commit still only uses the default tag with the value 0 resulting in unchanged pointers when accessing memory. However, all plumbing is in place allowing for instance tagging of the heap in a later commit.
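Run-time detection can be sketched as below. The field position is architectural (ID_AA64PFR1_EL1 bits [11:8] report FEAT_MTE, where 2 or higher means full MTE with allocation tags); the helper name is hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    static bool cpu_supports_memtag(void)
    {
        uint64_t pfr1 = 0;

        asm volatile("mrs %0, id_aa64pfr1_el1" : "=r"(pfr1));
        return ((pfr1 >> 8) & 0xf) >= 2;
    }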
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 6105aa86 | 12-Apr-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: map TA memory using TEE_MATTR_MEM_TYPE_TAGGED
Maps TA memory using TEE_MATTR_MEM_TYPE_TAGGED, which results in tagged cached memory if the system has it enabled.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7c3ab774 | 04-Apr-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mm: add TEE_MATTR_MEM_TYPE_TAGGED
Adds TEE_MATTR_MEM_TYPE_TAGGED used to map tagged memory as defined in Armv8.5-A Memory Tagging Extension (MTE).
All OP-TEE core memory should be mapped as tagged memory when supported.
Memory potentially shared with non-secure world or other firmware should not be mapped as tagged since we don't have control over the tags then.
Mappings using TEE_MATTR_MEM_TYPE_TAGGED are replaced by TEE_MATTR_MEM_TYPE_CACHED if MTE isn't supported or configured.
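A minimal sketch of the fallback; the predicate name is an assumption:

    /* Degrade tagged mappings to plain cached memory when MTE is absent */
    uint32_t mem_type = TEE_MATTR_MEM_TYPE_TAGGED;

    if (!memtag_is_enabled()) /* hypothetical predicate */
        mem_type = TEE_MATTR_MEM_TYPE_CACHED;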
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 8afe7a7c | 11-Apr-2022 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: rename mobj_get_cattr() to mobj_get_mem_type()
Renames mobj_get_cattr() to mobj_get_mem_type(). The mobj operation get_cattr() is also renamed to get_mem_type().
This commit is only about renaming cattr to mem_type, no changes in behaviour.
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| bf31bf10 | 22-Mar-2022 |
Imre Kis <imre.kis@arm.com> |
core: Enable mapping DT from secure memory
Add the CFG_MAP_EXT_DT_SECURE option to enable mapping the device tree from secure memory. As the device tree in secure memory would only have the event log address in secure memory, the property name is changed from tpm_event_log_sm_addr to the standard tpm_event_log_addr when CFG_MAP_EXT_DT_SECURE is enabled.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Imre Kis <imre.kis@arm.com>
|
| 3aaf25d2 | 10-Mar-2022 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: mm: fix core virtual address range constraint in lpae
Changes the strategy for setting core virtual memory addresses in case the pager is enabled (CFG_WITH_PAGER=y) with LPAE (CFG_WITH_LPAE=y). In this configuration the virtual memory addresses are expected to fit in a single base translation table in order to save 4kB translation pages. This change makes the core fall back to the generic layout, possibly spreading virtual addresses over several base translation tables, if the virtual memory addresses do not fit in the optimized address range preferred for that configuration.
Fixes: https://github.com/OP-TEE/optee_os/issues/5201 Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| e31a75b3 | 15-Mar-2022 |
Lejia Zhang <zhanlej@gmail.com> |
core: mm: fix mobj_shm_ops support .get_cattr()
ftrace uses static shared memory, which returns an object of type mobj_shm_ops, but the get_cattr function is not implemented in mobj_shm_ops. This causes ftrace to not work properly.
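A hedged sketch of the missing callback; static SHM is normal cached memory, though the exact ops wiring may differ:

    static TEE_Result shm_get_cattr(struct mobj *mobj __unused,
                                    uint32_t *cattr)
    {
        if (!cattr)
            return TEE_ERROR_GENERIC;
        *cattr = TEE_MATTR_CACHE_CACHED;
        return TEE_SUCCESS;
    }

    /* ...and hook it up in the ops table: .get_cattr = shm_get_cattr, */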
Signed-off-by: Lejia Zhang <zhanlej@gmail.com> Suggested-by: Sumit Garg <sumit.garg@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 39e8c200 | 01-Feb-2022 |
Jerome Forissier <jerome@forissier.org> |
core: tag ops structures with __relrodata_unpaged
Global structures currently tagged with __rodata_unpaged need to use __relrodata_unpaged instead because they contain pointers which are subject to relocation when CFG_CORE_ASLR=y. Doing so moves them out of .rodata which will now stay unmodified even with ASLR turned on.
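Usage, sketched with an assumed macro form and struct contents:

    /* Function pointers are relocated under ASLR, so the structure must
     * live in .data.rel.ro rather than .rodata */
    static const struct mobj_ops shm_ops __relrodata_unpaged = {
        .get_va = shm_get_va,
        .get_pa = shm_get_pa,
        .free = shm_free,
    };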
Signed-off-by: Jerome Forissier <jerome@forissier.org> Acked-by: Sumit Garg <sumit.garg@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 33d42c6e | 01-Mar-2022 |
Jelle Sels <jelle.sels@arm.com> |
core: Add support for DEVICE_nGnRnE
Currently OP-TEE only allows non-cached memory to be mapped as ATTR_DEVICE_nGnRE/Device. This patch adds support for ATTR_DEVICE_nGnRnE/Strongly-ordered.
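The distinction corresponds to the architectural MAIR encodings; the macro names below are illustrative, the values come from the Arm ARM:

    #define MAIR_ATTR_DEVICE_nGnRnE 0x00 /* no early write ack (Strongly-ordered) */
    #define MAIR_ATTR_DEVICE_nGnRE  0x04 /* early write ack allowed */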
Signed-off-by: Jelle Sels <jelle.sels@arm.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| f950bedc | 01-Mar-2022 |
Jelle Sels <jelle.sels@arm.com> |
core: Add mattr_is_cached()
mattr_is_cached() can be used to determine if the mattr is cached or not.
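A minimal sketch of such a predicate, assuming the memory type is a masked bitfield within the attribute word:

    static inline bool mattr_is_cached(uint32_t mattr)
    {
        return (mattr & TEE_MATTR_MEM_TYPE_MASK) ==
               SHIFT_U32(TEE_MATTR_MEM_TYPE_CACHED,
                         TEE_MATTR_MEM_TYPE_SHIFT);
    }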
Signed-off-by: Jelle Sels <jelle.sels@arm.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| 8b427282 | 01-Mar-2022 |
Jelle Sels <jelle.sels@arm.com> |
core: change TEE_MATTR_CACHE_ to TEE_MATTR_MEM_TYPE_
Some extra memory types will be added. This patch renames all TEE_MATTR_CACHE_ defines to TEE_MATTR_MEM_TYPE_. This will make the next patches easier to understand.
Signed-off-by: Jelle Sels <jelle.sels@arm.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|