| 941dec3a | 10-Jan-2020 |
Fangsuo Wu <fangsuowu@asrmicro.com> |
core: adjust nsec ddr memory size correctly
In carve_out_phys_mem(), when pa has the same address as m[n].addr, m[n].size should also be adjusted.
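A minimal sketch of the fixed adjustment, under assumed names (struct phys_mem and carve_out_front() are only modeled on the commit text, not the actual OP-TEE definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct phys_mem {
    paddr_t addr;
    size_t size;
};

/* Carve [pa, pa + len) out of the front of entry m: when pa equals
 * m->addr, the start moves forward and the size must shrink by the
 * same amount (the bug was moving addr without adjusting size). */
static void carve_out_front(struct phys_mem *m, paddr_t pa, size_t len)
{
    if (pa == m->addr) {
        m->addr += len;
        m->size -= len;
    }
}
```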
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Fangsuo Wu <fangsuowu@asrmicro.com>
|
| 7d5f25b7 | 22-Jan-2020 |
Volodymyr Babchuk <volodymyr_babchuk@epam.com> |
plat: rcar: force disable core ASLR
We need to disable core ASLR for two reasons:
1. There is no source for an ASLR seed, as the Rcar platform does not provide a DTB to OP-TEE.
2. OP-TEE crashes during boot with CFG_CORE_ASLR enabled.
Mainly we are disabling ASLR for the second reason. Further investigation is needed to see why enabling ASLR causes a data abort in MMIO functions.
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| e66c2639 | 22-Jan-2020 |
Volodymyr Babchuk <volodymyr_babchuk@epam.com> |
plat: rcar: generate .srec file using gen_tee_bin
After recent changes, we can no longer use the raw binary generated from the ELF file. Instead we need to use the gen_tee_bin script to generate the header-less binary with the correct layout.
This change also generates tee-raw.bin as a byproduct. This file is useful as well, since it allows flashing OP-TEE using JTAG.
Fixes: 5dd1570ac5b ("core: add embedded data region")
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| e9c0b5d7 | 16-Jan-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: generic_entry_a{32,64}.S: use correct cached_mem_end
Stores the correct register at cached_mem_end at boot. This avoids using stale dcache content.
Fixes: 5dd1570ac5b0 ("core: add embedded data region")
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| ba8c25ac | 13-Jan-2020 |
Andrew F. Davis <afd@ti.com> |
core: generic_entry_a64.S: use CIVAC over IVAC to clean cache data
After moving some initial sections around in memory we clean out the new data and invalidate the cache so it can be seen by other cores when they enable their caches. The instruction used was invalidate-only; on most systems this behaves the same as clean+invalidate, but on some systems with L3 caches it can cause the just-written data to be discarded. Use clean+invalidate instead to prevent this on such systems.
Signed-off-by: Andrew F. Davis <afd@ti.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 4518cdc1 | 14-Jan-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: arm64: introduce CFG_CORE_ARM64_PA_BITS
Introduces CFG_CORE_ARM64_PA_BITS, which replaces the max_pa global variable that was used to configure TCR_EL1.IPS.
Prior to 520860f ("core: generic_entry: add enable_mmu()") TCR_EL1.IPS was calculated and even updated later in the boot flow to automatically cover the needed physical address space. But now it's calculated before the MMU is enabled, and once the MMU is enabled it's kept in read-only memory.
With CFG_CORE_ARM64_PA_BITS, TCR_EL1.IPS can be determined early; later it is enough to check that the physical addresses to be mapped are covered by CFG_CORE_ARM64_PA_BITS.
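As an illustrative sketch (the constant value and helper name are assumptions, not the actual OP-TEE code), the later check reduces to verifying that an address fits in the configured number of bits:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Stand-in for CFG_CORE_ARM64_PA_BITS; 40 bits => 1 TiB of PA space */
#define PA_BITS 40

/* With a fixed PA width chosen at build time, TCR_EL1.IPS can be set
 * early; afterwards it is enough to check that each physical address
 * to be mapped is representable in PA_BITS bits. */
static bool pa_is_mappable(paddr_t pa)
{
    return pa < ((paddr_t)1 << PA_BITS);
}
```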
Fixes: 520860f658be ("core: generic_entry: add enable_mmu()")
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 67283c27 | 14-Jan-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: make SMALL_PAGE_MASK and friends of type paddr_t
Makes SMALL_PAGE_MASK, CORE_MMU_PGDIR_MASK, CORE_MMU_USER_CODE_MASK and CORE_MMU_USER_PARAM_MASK of type paddr_t to allow correct masking of significant bits.
Example:
    extern paddr_t addr;
    paddr_t page_addr = addr & ~SMALL_PAGE_MASK;
If paddr_t is a 64-bit type, SMALL_PAGE_MASK must also be 64 bits wide or the ~ operation will not set all the higher bits.
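A self-contained demonstration of the problem (the narrow mask here is a hypothetical stand-in; in the tree the masks are the macros named above):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* 32-bit mask: ~mask is only 0xfffff000 and zero-extends to 64 bits,
 * so the & operation clears the upper 32 address bits. */
static paddr_t page_base_narrow(paddr_t addr)
{
    uint32_t mask = 0xfff;
    return addr & ~mask;
}

/* 64-bit mask: ~mask has all upper bits set, so they survive. */
static paddr_t page_base_wide(paddr_t addr)
{
    paddr_t mask = 0xfff;
    return addr & ~mask;
}
```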
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
|
| 18339813 | 26-Nov-2019 |
Sumit Garg <sumit.garg@linaro.org> |
core: enable rollback protection for REE-FS TAs
Add a check of the TA version while loading a TA from the REE-FS and compare it against a secure storage based TA version database to prevent any TA version downgrades.
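The core of the downgrade check can be sketched like this (illustrative only; the real code keeps the version database in secure storage):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Refuse to load a TA older than the highest version seen so far,
 * and record any newer version in the database entry. */
static bool ta_version_check(uint32_t *db_version, uint32_t ta_version)
{
    if (ta_version < *db_version)
        return false;       /* downgrade attempt: reject */
    *db_version = ta_version;
    return true;
}
```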
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| abfd092a | 23-Dec-2019 |
Anthony Steinhauser <asteinhauser@google.com> |
core: arm64: fix speculative execution past ERET vulnerability
Even though ERET always causes a jump to another address, aarch64 CPUs speculatively execute the following instructions as if the ERET instruction were not a jump. The speculative execution does not cross privilege levels (to the jump target as one would expect), but it continues at the kernel privilege level as if the ERET instruction did not change the control flow - thus speculatively executing anything that is accidentally linked after the ERET instruction. The results of this speculative execution are later always architecturally discarded, but they can leak data through microarchitectural side channels. This speculative execution is very reliable (it seems to be unconditional) and it manages to complete even relatively performance-heavy operations (e.g. multiple dependent fetches from uncached memory).
It was fixed by Linux [1], FreeBSD [2] and OpenBSD [3]. The misbehavior is demonstrated in [4] and [5].
Link: [1] https://github.com/torvalds/linux/commit/679db70801da9fda91d26caf13bf5b5ccc74e8e8
Link: [2] https://github.com/freebsd/freebsd/commit/29fb48ace4186a41c409fde52bcf4216e9e50b61
Link: [3] https://github.com/openbsd/src/commit/3a08873ece1cb28ace89fd65e8f3c1375cc98de2
Link: [4] https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
Link: [5] https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c
Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| bf729804 | 03-Dec-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add core_mmu_map_contiguous_pages()
Adds core_mmu_map_contiguous_pages() which maps a range of physical addresses.
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 76c49973 | 12-Dec-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: rename to mobj_{inc,dec}_map()
Renames mobj_reg_shm_inc_map() and mobj_reg_shm_dec_map() to mobj_inc_map() and mobj_dec_map() respectively. This makes room for other implementations of registered shared memory.
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 026e3556 | 10-Oct-2019 |
Andrew F. Davis <afd@ti.com> |
plat-ti: Switch to using SMCCC compatible calls
Previously on our TI evil vendor Linux tree we would use a sentinel value in r12 to signal if a call was meant for OP-TEE or the legacy ROM. A path to using SMCCC compatible calls from Linux is being implemented. Switch the OP-TEE side over.
Signed-off-by: Andrew F. Davis <afd@ti.com>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 87372da4 | 22-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
Enable ASLR by default
With this patch both CFG_TA_ASLR and CFG_CORE_ASLR are set to 'y' by default.
Removes CFG_TA_ASLR?=y for plat-hikey and plat-vexpress (qemu_virt).
If the current platform doesn't use CFG_DT=y and hasn't overridden get_aslr_seed(), a warning message will be printed on the secure UART and execution will resume with the default load address.
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 5502aad4 | 25-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: link.mk: Make sure to link without relro
Passes -z norelro to the linker to make sure that the relro option isn't enabled. With relro enabled, all relro sections have to be contiguous with each other. This would prevent us from removing .dynamic from the binary created with scripts/gen_tee_bin.py. Regardless of the relro option, OP-TEE itself uses the equivalent of relro when mapping its memory.
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| e996d189 | 22-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: support ASLR and paging
Adds support for CFG_WITH_PAGER=y and CFG_CORE_ASLR=y.
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 9438dbdb | 04-Dec-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix tee_pager_release_one_phys() assert
Prior to this patch it was assumed in tee_pager_release_one_phys() that a locked fobj would not span multiple page directories. This is not correct, since it depends on the base address and size of the locked fobj: if the base address is close to the end of a page directory it can very well happen. With CFG_CORE_ASLR=y this is bound to happen sooner or later, even if everything seems to work with CFG_CORE_ASLR=n. This patch fixes the issue by instead counting the number of areas that use the pmem to be released. The number should be exactly one.
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 83471b29 | 22-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix pager vaspace start in assign_mem_va()
Makes sure that MEM_AREA_PAGER_VASPACE follows directly after the static mappings of the OP-TEE ELF. This fixes the case where OP-TEE is mapped at higher addresses and thus tries to locate everything else at lower addresses. Without a fixed address for MEM_AREA_PAGER_VASPACE the reserved pager vaspace could end up at the wrong address.
Fixes: 5dd1570ac5b0 ("core: add embedded data region")
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| ff207c8d | 22-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: pager: allocate pager_tables dynamically
With ASLR the number of pager_tables needed can differ from the number needed in a non-relocated configuration. Depending on the value of VCORE_START_VA, the range VCORE_START_VA..+TEE_RAM_VA_SIZE may cover one extra table compared to a VCORE_START_VA aligned to the start of a table. To avoid multiple configurations, always calculate the number of tables needed and allocate pager_tables accordingly.
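The table count can be computed directly from the start address; a sketch (the 2 MiB per-table coverage is an illustrative figure, not taken from the tree):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t vaddr_t;

/* VA range covered by one pager table in this sketch */
#define TBL_VA_SIZE 0x200000ULL /* 2 MiB */

/* Number of tables needed to cover [start, start + size): an
 * unaligned start (as with ASLR) can require one table more than
 * a start aligned to a table boundary. */
static unsigned int num_pager_tables(vaddr_t start, uint64_t size)
{
    return (start + size - 1) / TBL_VA_SIZE - start / TBL_VA_SIZE + 1;
}
```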
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 15ba8c1f | 15-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: move VFP state into struct user_mode_ctx
Moves the VFP state from struct user_ta_ctx to struct user_mode_ctx to make user mode handling a bit more generic.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7d2b71d6 | 08-Nov-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: vm_set_prot() and friends work across VM regions
Updates vm_set_prot() and friends to work on memory ranges that don't necessarily align with the underlying VM regions. VM regions are split as needed to perform the operations; once the operations complete, VM regions in the supplied memory range are merged where possible. The only restriction on a supplied memory range is that the mapping already present is compatible with the change.
Note that this also affects the pager, which splits and merges pager areas as needed.
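A sketch of the split primitive such an operation needs (struct and function names are hypothetical; the real code also updates page tables and merges compatible neighbours afterwards):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uintptr_t vaddr_t;

struct vm_region {
    vaddr_t va;
    size_t size;
};

/* Split region r at va so the operation boundary lands on a region
 * boundary; the tail half is returned in *tail. Returns 1 on split,
 * 0 when va is not strictly inside r (no split needed). */
static int region_split(struct vm_region *r, vaddr_t va,
                        struct vm_region *tail)
{
    if (va <= r->va || va >= r->va + r->size)
        return 0;
    tail->va = va;
    tail->size = r->va + r->size - va;
    r->size = va - r->va;
    return 1;
}
```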
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7c732ee4 | 07-Oct-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: get svc handler from the context of current session
Instead of a single global syscalls definition, get the syscall handler function from the context of current active session.
An extra optional (mandatory for user mode TAs) function pointer, handle_svc, is added to struct tee_ta_ops to handle the syscall.
tee_svc_handler() is split into a generic thread_svc_handler() which is put in kernel/thread.c. The user TA specific part is put in user_ta_handle_svc() which is kept in tee/arch_svc.c but made available via the new .handle_svc function pointer of struct tee_ta_ops.
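The dispatch can be sketched as follows (struct layouts are reduced to the essentials and the handler is a stand-in, not the real user_ta_handle_svc()):

```c
#include <assert.h>
#include <stddef.h>

/* Per-context operations; handle_svc is optional in general but
 * mandatory for user mode TAs. */
struct tee_ta_ops {
    void (*handle_svc)(unsigned long *regs);
};

struct tee_ta_ctx {
    const struct tee_ta_ops *ops;
};

/* Generic part: look up the handler from the currently active
 * session's context instead of calling a single global handler. */
static void thread_svc_dispatch(struct tee_ta_ctx *ctx, unsigned long *regs)
{
    ctx->ops->handle_svc(regs);
}

/* Stand-in handler used for demonstration */
static void demo_handle_svc(unsigned long *regs)
{
    regs[0] = 42;   /* pretend syscall return value */
}

static const struct tee_ta_ops demo_ops = { .handle_svc = demo_handle_svc };
```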
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 5343f09f | 07-Oct-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add common user_mode_ctx_print_mappings()
Adds a common user_mode_ctx_print_mappings() which prints the current user mode mappings.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 2ccaf1af | 18-Sep-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: move struct thread_ctx_regs to thread.h
Moves definition of struct thread_ctx_regs from thread_private.h to <kernel/thread.h>.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 1936dfc7 | 07-Oct-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add struct user_mode_ctx
Adds struct user_mode_ctx which replaces user mode specific fields used for memory mapping.
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org>
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| e94702a4 | 18-Sep-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: thread_enter_user_mode(): avoid leaking register content
Prior to this patch, not all registers passed to user mode were assigned a new value. This allowed user mode to see the values of some registers used by the core. With this patch, all general purpose registers available in user mode are either cleared or assigned a value.
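A sketch of the pattern (the struct here is illustrative, not the real thread_ctx_regs layout): clear the whole register file first, then assign only what carries arguments.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative general purpose register file for user entry */
struct user_regs {
    uint64_t x[31];
};

/* Every register visible to user mode gets an explicit value:
 * anything not carrying an argument is zeroed instead of being left
 * with whatever the core last held there. */
static void init_user_regs(struct user_regs *regs, uint64_t a0, uint64_t a1)
{
    memset(regs, 0, sizeof(*regs));
    regs->x[0] = a0;
    regs->x[1] = a1;
}
```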
Acked-by: Pipat Methavanitpong <pipat.methavanitpong@linaro.org>
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|