| 87372da4 | 22-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
Enable ASLR by default
With this patch both CFG_TA_ASLR and CFG_CORE_ASLR are set to 'y' by default.
Removes CFG_TA_ASLR?=y for plat-hikey and plat-vexpress (qemu_virt).
If the current platform doesn't use CFG_DT=y and hasn't overridden get_aslr_seed(), a warning message will be printed on the secure UART and execution will resume with the default load address.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 5502aad4 | 25-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: link.mk: Make sure to link without relro
Passes -z norelro to the linker to make sure that the relro option isn't enabled. With relro enabled, all relro sections have to be contiguous with each other. This would prevent us from removing .dynamic from the binary created with scripts/gen_tee_bin.py. Regardless of the relro option, OP-TEE itself uses the equivalent of relro when mapping its memory.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| e996d189 | 22-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: support ASLR and paging
Adds support for CFG_WITH_PAGER=y and CFG_CORE_ASLR=y.
Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> |
| 9438dbdb | 04-Dec-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix tee_pager_release_one_phys() assert
Prior to this patch it was assumed in tee_pager_release_one_phys() that a locked fobj would not span multiple page directories. This is not correct since it depends on the base address and size of the locked fobj. If the base address is close to the end of a page directory it can very well happen. With CFG_CORE_ASLR=y this is bound to happen sooner or later, even if everything seems to work with CFG_CORE_ASLR=n. This patch fixes this by instead counting the number of areas which use the pmem to be released. The number should be exactly one.
Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 83471b29 | 22-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix pager vaspace start in assign_mem_va()
Makes sure that MEM_AREA_PAGER_VASPACE follows directly after the static mappings of the OP-TEE ELF. This fixes the case where OP-TEE is mapped at higher addresses and thus tries to locate everything else at lower addresses. Without a fixed address for MEM_AREA_PAGER_VASPACE the reserved pager vaspace could end up at the wrong address.
Fixes: 5dd1570ac5b0 ("core: add embedded data region") Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| ff207c8d | 22-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: pager: allocate pager_tables dynamically
With ASLR the number of pager_tables needed can differ from the number needed in a non-relocated configuration. Depending on the value of VCORE_START_VA, the range VCORE_START_VA..+TEE_RAM_VA_SIZE may cover an extra table compared to VCORE_START_VA being aligned to the start of a table. To avoid multiple configurations, always calculate the number of tables needed and allocate pager_tables accordingly.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| c6744caa | 22-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add fobj_ro_reloc_paged_alloc()
Adds a new type of fobj, struct fobj_ro_reloc_paged, which is created with fobj_ro_reloc_paged_alloc(). It's like struct fobj_rop but with support for relocation too.
Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 15ba8c1f | 15-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: move VFP state into struct user_mode_ctx
Moves the VFP state from struct user_ta_ctx to struct user_mode_ctx to make user mode handling a bit more generic.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org> Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7d2b71d6 | 08-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: vm_set_prot() and friends work across VM regions
Updates vm_set_prot() and friends to work on memory ranges which don't necessarily align with the underlying VM regions. VM regions are split as needed to perform the operations; once the operations are complete, VM regions in the supplied memory range are merged if possible. The only restriction on a supplied memory range is that the already present mapping is compatible with the change.
Note that this also affects the pager, which splits and merges pager areas as needed.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 79f22013 | 13-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: system_pta: refuse changing kernel mappings
Adds checks in system_unmap(), system_set_prot() and system_remap() to refuse making changes to kernel mappings.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org> Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7c732ee4 | 07-Oct-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: get svc handler from the context of current session
Instead of a single global syscall definition, get the syscall handler function from the context of the currently active session.
An extra optional (mandatory for user mode TAs) function pointer is added to struct tee_ta_ops, handle_svc, which handles the syscall.
tee_svc_handler() is split into a generic thread_svc_handler() which is put in kernel/thread.c. The user TA specific part is put in user_ta_handle_svc() which is kept in tee/arch_svc.c but made available via the new .handle_svc function pointer of struct tee_ta_ops.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 5343f09f | 07-Oct-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add common user_mode_ctx_print_mappings()
Adds a common user_mode_ctx_print_mappings() which prints the current user mode mappings.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org> Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 2ccaf1af | 18-Sep-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: move struct thread_ctx_regs to thread.h
Moves definition of struct thread_ctx_regs from thread_private.h to <kernel/thread.h>.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org> Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 1936dfc7 | 07-Oct-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add struct user_mode_ctx
Adds struct user_mode_ctx which replaces user mode specific fields used for memory mapping.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org> Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| e94702a4 | 18-Sep-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: thread_enter_user_mode(): avoid leaking register content
Prior to this patch not all registers passed to user mode were assigned a new value. This allowed user mode to see the value of some registers used by Core. With this patch all general purpose registers available in user mode are either cleared or assigned a value.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org> Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| c2c16e87 | 09-Dec-2019 |
Andrew F. Davis <afd@ti.com> |
core: link.mk: Un-deprecate tee.bin v1 image generation
The v1 OP-TEE image "tee.bin" is used by a couple of platforms as the only supported image version; until these platforms can migrate, continue to build this image and do not mark it as deprecated. The tee-pager.bin and tee-pageable.bin images are not used by these platforms and are properly superseded by the v2 versions, so leave those images deprecated.
Signed-off-by: Andrew F. Davis <afd@ti.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| ead7c47d | 09-Dec-2019 |
Andrew F. Davis <afd@ti.com> |
plat-ti: Restore non-secure entry address from saved copy in r5
When resuming, the only value we need a fresh copy of is the non-secure context, as it will have changed since boot. This value is stored in r5 on OP-TEE entry; previously we preserved it by moving r5 to r3, then r3 to r4, basically just dodging getting overwritten by the functions we call. This can be simplified now as nothing clobbers r5, so we can use it directly as the source for the non-secure context pointer fed into init_sec_mon().
Signed-off-by: Andrew F. Davis <afd@ti.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Jerome Forissier <jerome@forissier.org>
|
| 55c1b947 | 10-Dec-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix generation of tee.bin
Prior to this patch generation of tee.bin (CFG_WITH_PAGER=n) fails with:
  GEN     out/core/tee.bin
  Cannot find symbol __init_end
  core/arch/arm/kernel/link.mk:183: recipe for target 'out/core/tee.bin' failed
Introduce a special __get_tee_init_end to fix this and also avoid confusion with __init_end used in the code for the pager case.
Fixes: 5dd1570ac5b0 ("core: add embedded data region") Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 85387995 | 09-Dec-2019 |
Clement Faure <clement.faure@nxp.com> |
core: imx: fix CFG_DRAM_BASE for imx8qm/qxp
The CFG_DRAM_BASE on imx8qm and imx8qxp is 0x80000000.
Signed-off-by: Clement Faure <clement.faure@nxp.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| bc6f3bf2 | 20-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: remove unreachable code from tee_tadb_ta_open()
Prior to this patch tee_tadb_ta_open() had some unreachable code. This patch removes that code while retaining the behaviour of tee_tadb_ta_open().
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 2e42d8e7 | 19-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add description of struct tadb_entry
Adds a description of the fields in struct tadb_entry.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b19db423 | 18-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add description of struct shdr_bootstrap_ta
Adds a description of the fields in struct shdr_bootstrap_ta.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 2139aa8c | 25-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: shdr_verify_signature() supply hash length for salt length
In order to support the TEE_ALG_RSASSA_PKCS1_PSS_MGF1_* group of algorithms supply the size of the hash as the size of the salt to crypto_acipher_rsassa_verify().
A salt is something introduced by PKCS1_PSS; PKCS1_V1.5 does not have a salt and the parameter will be ignored by crypto_acipher_rsassa_verify() for the latter.
With the PKCS1_PSS algorithm it is common practice to use a salt with the same size as the hash, but it is not a requirement. The implementation here depends on using a salt with the same size as the hash. This is a compromise to avoid extending the signed header with a salt length field.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| d77929ec | 27-Nov-2019 |
Sumit Garg <sumit.garg@linaro.org> |
core: ftrace: dump core load address to support ASLR
Additionally dump the core load address in the ftrace buffer to support syscall tracing in case TEE core ASLR is enabled.
Signed-off-by: Sumit Garg <sumit.garg@linaro.org> Reviewed-by: Jerome Forissier <jerome@forissier.org> [jf: s/Load address @/TEE load address @/] Signed-off-by: Jerome Forissier <jerome@forissier.org>
|
| 4f3fac24 | 27-Nov-2019 |
Sheetal Tigadoli <sheetal.tigadoli@broadcom.com> |
Update Broadcom DRAM2 base and size
Signed-off-by: Sheetal Tigadoli <sheetal.tigadoli@broadcom.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org> |