| 8c6a8aff | 25-Oct-2017 |
wellsleep <wellsleeplz@gmail.com> |
Fix comment in tee_ree_fs.c
Signed-off-by: Liu Zheng <wellsleeplz@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| e4a1f581 | 23-Oct-2017 |
Sumit Garg <sumit.garg@nxp.com> |
entry_std.c: Initialize num_params to fix gcc warning
Signed-off-by: Sumit Garg <sumit.garg@nxp.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| b6449075 | 19-Oct-2017 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
thread.c: free rpc arg mobj during cache disabling
The mobj containing memory for RPC arguments was not deleted when the client disabled the argument cache, which would lead to a resource leak.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b45ff691 | 09-Oct-2017 |
Jerome Forissier <jerome.forissier@linaro.org> |
hikey, hikey960: enable dynamic shared memory
Enables dynamic shared memory by registering the non-secure memory range in plat-hikey/main.c.
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 9a85cc01 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: add v2p/p2v tests in embedded tests
Use the invocation test pseudo TA to test virt_to_phys and phys_to_virt conversions over TA memory reference parameters.
Conversions are done in MEM_AREA_TA_VASPACE memory when the pTA client is a TA. Otherwise the pTA client is in the non-secure world and the memref parameters are mapped straight into the TEE core. Conversions are tried in static SHM, SDP memory and dynamic SHM.
Several configurations other than pager can make phys_to_virt() fail to find an existing valid virtual address. When that happens, do not report an error to the client.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey) Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemus, b2260)
|
| 38830287 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:mmu: privileged land pa2va is not supported in dynamic SHM
The implementation currently does not support finding a mapped virtual memory address in the dynamic SHM range from a physical address.
This change prevents phys_to_virt() from producing a faulty virtual address when dealing with dynamic SHM virtual address range.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 0d866655 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:debug: add verbosity when pa/va do not match
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| 42d91b4b | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:mmu: fix userland pa2va conversion
When dealing with a memory object that is physically granulated, looking for a matching physical page requires testing each granule of the memory object.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| bbed97b6 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:mmu: fix userland va2pa conversion
This change ensures that the in-granule offset of the target address being converted is not added twice when computing the physical page address from the memory object reference.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| def98e21 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:unwind: check user context on stack print of panicked TAs
This change checks that the userland context pointer is valid before reading its content.
Note that this change only lowers the chance of a malformed TA being able to crash the core or access core memory using a crafted context reference. Since the stack unwind process executes from kernel land, a real fix would require each unwind step to verify its memory references before going further back in the execution history.
Therefore this change does not fix the vulnerability of current TA stack unwind process against core/TA isolation.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
|
| f98151a6 | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: map PTA registered shared memory late
Normal registered dynamic shared memory objects are not mapped into the OP-TEE OS memory space as that memory is normally only used by normal (user) TAs.
If a Pseudo TA is invoked from a user TA it will use the mapping already activated for the user TA and can easily access everything the user TA can access, including buffers passed in parameters for the user TA.
However, if a Pseudo TA is invoked directly from a non-secure client there is no user TA mapping to share; instead, memory buffers passed in parameters have to be mapped directly.
With this patch, registered shared memory buffers passed from a non-secure client are mapped if needed before invoking the Pseudo TA.
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_virt/armv8, b2260) Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 430dcbd8 | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: reimplement mobj_mapped_shm_alloc()
Now that normal registered shared memory (created with mobj_reg_shm_alloc()) can be mapped, the MOBJ type struct mobj_mapped_shm is redundant.
With this patch mobj_mapped_shm_alloc() is reimplemented using mobj_reg_shm_alloc() and mobj_reg_shm_map().
struct mobj_mapped_shm and all associated functions and variables are removed.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 071e7029 | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add mobj_reg_shm_{,un}map()
Adds mobj_reg_shm_map() and mobj_reg_shm_unmap() operating on MOBJs created with mobj_reg_shm_alloc(), also known as registered shared memory.
mobj_reg_shm_alloc() maps the described shared memory into OP-TEE OS memory space, mobj_reg_shm_unmap() unmaps the same memory again.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 5c7a19bb | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mobj: remove double physical offset
Removes the double bookkeeping of the physical offset into the first physical page of a MOBJ. Now all the different offsets are needed to calculate the final offset.
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU) Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey) Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| a71af55e | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mobj: add mobj_get_phys_offs()
Adds mobj_get_phys_offs() which returns the physical offset into the first physical page/section of a MOBJ.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 8ae8c738 | 13-Oct-2017 |
Kevin Peng <kevinp@marvell.com> |
Add Marvell platform with initial support for ARMADA A7K & A8K
Only tested in 64-bit mode with default configurations:
1. Build command: make PLATFORM=marvell-armada7080 CFG_ARM64_core=y
2. Passed xtest
Signed-off-by: Kevin Peng <kevinp@marvell.com> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| ae9fdf98 | 11-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
plat-stm: support registered shm buffers
CFG_DDR_SECURE_BASE/_SIZE can be used to define the DDR range reserved for the secure side. This can be larger than the TEETZ reserved memory. If CFG_DDR_SECURE_BASE/_SIZE is defined, plat-stm registers the non-secure external memory to support dynamic SHM registration.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
|
| ae194216 | 12-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:sdp: fix SDP test pseudo-TA against dynamic shm
Physical memory typed CORE_MEM_NSEC_SHM belongs to the default contiguous SHM memory. With dynamic SHM, non-secure memory references can now be outside the default NSEC_SHM, hence check non-secure references using the CORE_MEM_NON_SEC type instead of CORE_MEM_NSEC_SHM.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| c5d84b72 | 10-Oct-2017 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
plat-rcar: add non-secure DDR configuration
This patch adds non-secure DDR ranges for salvator-h3 and salvator-m3 boards.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| d7269ccc | 10-Oct-2017 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
plat-rcar: add initial support for salvator-m3 board
Prior to this patch OP-TEE was only able to run on the salvator-h3 board (it worked on salvator-m3 too, but only by coincidence).
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b369a932 | 12-Oct-2017 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
plat-rcar: force CFG_CORE_LARGE_PHYS_ADDR
On the RCAR3 platform most of the DRAM is mapped above 4GB, so LPE must be enabled for 32-bit builds.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| ae841edf | 12-Oct-2017 |
Jerome Forissier <jerome.forissier@linaro.org> |
pager: allow TA unwind when cause of unwind is not abort
It is perfectly safe to run the call stack unwinding code on a paged TA as long as we're not processing an abort. Adjust __abort_print() accordingly. Prior to this patch, the call stack was missing from TA panics if pager was enabled.
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| 785be2ee | 11-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
plat-vexpress: juno: add missing DRAM1
Defines missing DRAM1 base 0x880000000 size 0x180000000 for Juno.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Juno) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 3ff067c4 | 05-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
plat-vexpress: fvp: add missing DRAM1
Defines missing DRAM1 base 0x880000000 size 0xa00000000 for FVP.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (FVP) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| cbe4eaec | 05-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add register_phys_mem_ul()
Adds register_phys_mem_ul() which must be used (for compatibility with CFG_CORE_LARGE_PHYS_ADDR=y) when the input address and size are based on symbols generated in the link script.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|