| 5c1c14ad | 03-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: arm: kern.ld.S: put constructors in init
Makes sure that constructor functions are in the init section to be available during initialization of OP-TEE.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
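The idea behind this change can be sketched with a hypothetical linker-script fragment; the section and symbol names below are illustrative, not the actual kern.ld.S contents:

```ld
/* Illustrative fragment: keep the constructor pointer tables inside the
 * init area so they are mapped while OP-TEE initializes. KEEP() stops
 * --gc-sections from discarding the otherwise-unreferenced arrays. */
.text_init : {
	__ctor_list = .;
	KEEP(*(.ctors .init_array))
	__ctor_end = .;
}
```

Init code can then walk the pointers between `__ctor_list` and `__ctor_end` and call each constructor in turn.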
|
| 127b5e99 | 03-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add MEM_AREA_TEE_ASAN
Adds MEM_AREA_TEE_ASAN which is used when pager is enabled to map the memory used by the address sanitizer if enabled.
Currently this only works in configurations with the pager where emulated SRAM is used.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 58cd4887 | 03-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: pager: bugfix set_alias_area()
Fixes set_alias_area() to only take the supplied area; prior to this, the final page would have been included too.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 36a063ef | 03-Nov-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
Replace struct prng_ops with function interface
Adds crypto_rng_add_entropy() and crypto_rng_read() replacing struct prng_ops in crypto_ops.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 486754e8 | 08-Nov-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: arm32: reset_secondary() set reset vector
Sets reset vector in reset_secondary() to trap unexpected exceptions.
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU v7/v8) Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 64113fca | 02-Nov-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: arm32: replace _start with reset() function
Renames _start to reset_vect_table and renames reset() to _start() in order to avoid pulling in too much unpaged code via reset_secondary()/cpu_on_handler().
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 8473540d | 02-Nov-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
Keep assembly functions in separate sections
To get a more fine-grained selection of which area (init, paged, unpaged) an assembly function is assigned to, do the equivalent of -ffunction-sections, but in assembly.
Some functions have to be at specific places in the binary for a successful boot; the link script is updated accordingly.
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
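A hypothetical GNU as macro shows the technique; the macro and function names are illustrative, not OP-TEE's actual ones:

```asm
/* Illustrative GNU as macro: give each assembly function its own
 * .text.<name> section, mirroring -ffunction-sections for C, so the
 * linker script can assign it to the init, paged or unpaged area. */
.macro MY_FUNC name
	.section .text.\name, "ax", %progbits
	.global \name
	.type \name, %function
\name:
.endm

MY_FUNC plat_helper
	bx	lr	/* ARM32 return; body is just a placeholder */
```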
|
| eb7b47bb | 08-Nov-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: arm32: thread_set_und_sp(): correct end tag
Sets correct end tag for thread_set_und_sp()
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 789e38a6 | 06-Nov-2017 |
Zeng Tao <prime.zeng@hisilicon.com> |
core: arm: psci: pass nsec ctx to system_suspend
In commit 732fc43 (core: arm: psci: pass nsec ctx to psci) we did this job, but forgot to follow it in the later commit 1d40eb8 (core: arm: sm: add PSCI system suspend); fix that in this patch.
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Zeng Tao <prime.zeng@hisilicon.com>
|
| 639e5b83 | 26-Oct-2017 |
Joakim Bech <joakim.bech@linaro.org> |
pta: change DMSG to FMSG for invoke in pta/SDP
When running the default configuration SDP spams a lot: DEBUG: [0x0] TEE-CORE:invoke_command:338: command entry point for pseudo ta "invoke_tests.pta" ...
By changing from DMSG to FMSG this will not flood the console anymore.
Signed-off-by: Joakim Bech <joakim.bech@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| e4a1f581 | 23-Oct-2017 |
Sumit Garg <sumit.garg@nxp.com> |
entry_std.c: Initialize num_params to fix gcc warning
Signed-off-by: Sumit Garg <sumit.garg@nxp.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| b6449075 | 19-Oct-2017 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
thread.c: free rpc arg mobj during cache disabling
The mobj containing memory for RPC arguments was not deleted when the client disabled the argument cache, which would lead to a resource leak.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b45ff691 | 09-Oct-2017 |
Jerome Forissier <jerome.forissier@linaro.org> |
hikey, hikey960: enable dynamic shared memory
Enables dynamic shared memory by registering the non-secure memory range in plat-hikey/main.c.
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 9a85cc01 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: add v2p/p2v tests in embedded tests
Use the invocation test pseudo TA to test virt_to_phys and phys_to_virt conversions over TA memory reference parameters.
Convert in MEM_AREA_TA_VASPACE memory when the pTA client is a TA. Otherwise it means the pTA client is in the non-secure world and the memref parameters are mapped straight into the TEE core. Try in the static SHM, SDP memory and in the dynamic SHM.
Several configurations aside from the pager can make phys_to_virt() fail to find an existing valid virtual address; when so, do not report an error to the client.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey) Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemus, b2260)
|
| 38830287 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:mmu: privileged land pa2va is not supported in dynamic SHM
Implementation currently does not support finding a mapped virtual memory address in the dynamic SHM range from a physical address.
This change prevents phys_to_virt() from producing a faulty virtual address when dealing with dynamic SHM virtual address range.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 0d866655 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:debug: add verbosity when pa/va do not match
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| 42d91b4b | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:mmu: fix userland pa2va conversion
When dealing with a memory object that is physically granulated, looking for a matching physical page requires testing each granule of the memory object.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| bbed97b6 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:mmu: fix userland va2pa conversion
This change takes care that the in-granule offset of the target address to be converted is not added twice when computing the physical page address based on the memory object reference.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| def98e21 | 17-Oct-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core:unwind: check user context on stack print of panicked TAs
This change checks that the userland context pointer is valid before reading its content.
Note that this change only lowers the chance of a malformed TA being able to crash the core or access core memory using a crafted context reference. Since the stack unwind process is executed from kernel land, a real fix could require each stack unwind step to verify the memory references before going further in the execution history.
Therefore this change does not fix the vulnerability of current TA stack unwind process against core/TA isolation.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
|
| f98151a6 | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: map PTA registered shared memory late
Normal registered dynamic shared memory objects are not mapped into OP-TEE OS memory space, as that memory is normally only used by normal (user) TAs.
If a Pseudo TA is invoked from a user TA it will use the mapping already activated for the user TA and can easily access everything the user TA can access, including buffers passed in parameters for the user TA.
However, if a Pseudo TA is invoked directly from a non-secure client there is no user TA mapping to share; instead, memory buffers passed in parameters have to be mapped directly.
With this patch, registered shared memory buffers passed from a non-secure client are mapped if needed before invoking the Pseudo TA.
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_virt/armv8, b2260) Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 430dcbd8 | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: reimplement mobj_mapped_shm_alloc()
Now that normal registered shared memory (created with mobj_reg_shm_alloc()) can be mapped the MOBJ type struct mobj_mapped_shm is redundant.
With this patch mobj_mapped_shm_alloc() is reimplemented using mobj_reg_shm_alloc() and mobj_reg_shm_map().
struct mobj_mapped_shm and all associated functions and variables are removed.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 071e7029 | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add mobj_reg_shm_{,un}map()
Adds mobj_reg_shm_map() and mobj_reg_shm_unmap() operating on MOBJs created with mobj_reg_shm_alloc(), also known as registered shared memory.
mobj_reg_shm_map() maps the described shared memory into OP-TEE OS memory space; mobj_reg_shm_unmap() unmaps the same memory again.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 5c7a19bb | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mobj: remove double physical offset
Removes the double bookkeeping of physical offset into first physical page of a MOBJ. Now all the different offsets are needed to calculate the final offset.
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU) Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey) Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| a71af55e | 16-Oct-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mobj: add mobj_get_phys_offs()
Adds mobj_get_phys_offs() which returns the physical offset into the first physical page/section of a MOBJ.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 8ae8c738 | 13-Oct-2017 |
Kevin Peng <kevinp@marvell.com> |
Add Marvell platform with initial support for ARMADA A7K & A8K
Only tested 64-bit mode with default configurations:
1. Build command: make PLATFORM=marvell-armada7080 CFG_ARM64_core=y
2. Passed xtest
Signed-off-by: Kevin Peng <kevinp@marvell.com> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
|