| 52ae776e | 14-Feb-2020 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: aslr: fix cached_mem_end update
Fix the update of cached_mem_end, which corrupted CPU register R4, used to store a boot argument on AArch32.
Fixes: 487fd6828322 ("core: aslr: apply load offset to cached_mem_end") Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7c1d10ce | 14-Feb-2020 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: generic_entry: fix aarch32 lpae mmu configuration
Correct the configuration of the MMU registers TTBR0/TTBR1 for AArch32/LPAE, which omitted loading a zero value into the upper 32 bits of the registers.
Fixes: 520860f658be ("core: generic_entry: add enable_mmu()") Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
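The fix itself is in the AArch32 entry assembly; as a rough C-level sketch of the idea (the helper names below are illustrative, not OP-TEE functions), the MCRR write that loads a 64-bit TTBR must be given an explicit zero for the upper word:

  #include <stdint.h>

  /*
   * Illustrative only: on AArch32 with LPAE, TTBR0/TTBR1 are 64-bit and are
   * written with MCRR, which takes two 32-bit source registers. The high
   * word must be an explicit zero when the table base fits in 32 bits,
   * otherwise whatever is in the second register ends up in TTBRx[63:32].
   */
  static inline void write_ttbr0_lpae(uint32_t table_base)
  {
          uint32_t hi = 0;        /* upper 32 bits explicitly zeroed */

          /* MCRR p15, 0, <lo>, <hi>, c2 writes the 64-bit TTBR0 */
          asm volatile("mcrr p15, 0, %0, %1, c2" : : "r" (table_base), "r" (hi));
  }

  static inline void write_ttbr1_lpae(uint32_t table_base)
  {
          uint32_t hi = 0;

          /* MCRR p15, 1, <lo>, <hi>, c2 writes the 64-bit TTBR1 */
          asm volatile("mcrr p15, 1, %0, %1, c2" : : "r" (table_base), "r" (hi));
  }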
|
| 7204438c | 31-Jan-2020 |
Khoa Hoang <admin@khoahoang.com> |
core: aslr: set tee_svc_uref_base to VCORE_START_VA
tee_svc_uref_base was using the hardcoded TEE_TEXT_VA_START define, which is no longer valid after the TEE core has been relocated. Switch to VCORE_START_VA, a linker-derived value that is updated once the relocation code has executed.
Signed-off-by: Khoa Hoang <admin@khoahoang.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Jerome Forissier <jerome@forissier.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
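A minimal sketch of the intent, assuming VCORE_START_VA resolves to a linker symbol such as __text_start (the exact macro definition and initialization point in OP-TEE may differ):

  #include <stdint.h>

  typedef uintptr_t vaddr_t;

  /* Linker-provided start of the (possibly relocated) TEE core text. */
  extern uint8_t __text_start[];

  /*
   * Sketch: derive the syscall uref base from the relocated start VA
   * instead of the build-time TEE_TEXT_VA_START constant.
   */
  static vaddr_t get_svc_uref_base(void)
  {
          return (vaddr_t)__text_start;   /* i.e. VCORE_START_VA */
  }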
|
| 487fd682 | 30-Jan-2020 |
Khoa Hoang <admin@khoahoang.com> |
core: aslr: apply load offset to cached_mem_end
cached_mem_end was computed before relocation but used later for the D$ (data cache) flush. Add code to update cached_mem_end with the ASLR load offset.
Signed-off-by: Khoa Hoang <admin@khoahoang.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Jerome Forissier <jerome@forissier.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
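A rough sketch of the change (everything except cached_mem_end is an assumed name, not the exact OP-TEE code):

  #include <stdint.h>

  typedef uintptr_t vaddr_t;

  /* End of the range that must be flushed from the data cache, computed
   * before the core is relocated. */
  static vaddr_t cached_mem_end;

  /* Once the ASLR load offset is known, shift the cached range end by the
   * same amount so the later D$ flush covers the relocated addresses. */
  static void apply_load_offset_to_cached_mem_end(unsigned long offs)
  {
          cached_mem_end += offs;
  }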
|
| 9df63cd7 | 21-Nov-2019 |
Clement Faure <clement.faure@nxp.com> |
core: imx: add imx6ulzevk platform flavor
Add imx6ulzevk platform flavor.
Signed-off-by: Clement Faure <clement.faure@nxp.com> Acked-by: Jerome Forissier <jerome@forissier.org> |
| c9f1d2ba | 20-Aug-2019 |
Clement Faure <clement.faure@nxp.com> |
core: imx: add default UART for sabreauto boards
The default UART on imx6*sabreauto boards is UART4, not UART1.
Signed-off-by: Clement Faure <clement.faure@nxp.com> Acked-by: Jerome Forissier <jerome@forissier.org>
|
| 2deea89a | 10-Feb-2020 |
Ali Zhang <alizhang@google.com> |
core: arm32: fix out-of-sync SPSR
On some platforms that use OP-TEE built as ARM32, asynchronous data aborts are causing innocent TAs to be killed non-deterministically upon invocations.
OP-TEE should not trap asynchronous data aborts by default (except for bring-up) because they usually indicate memory errors outside the control of the PE's MMU and are thus better handled by the normal-world OS. Trapping async data aborts also force-unloads keep-alive TAs, which defeats the feature.
This (masking async data aborts) turns out to be indeed the expected behavior as `CPSR.A` is set upon SMC entry. The bit is however mistakenly cleared upon transitioning from SVC mode (OP-TEE) to user mode (TA) due to a typo introduced in the following commit:
commit a702f5e71e79 ("core: split thread_enter_user_mode")
where `get_spsr()` should be calling `read_cpsr()` instead of `read_spsr()` in order to save important bits in CPSR to SPSR prior to switching to user mode.
More general background at the risk of being pedantic:
Invoking a TA from the REE-OS triggers a series of exception level transitions: NS.EL1-->[NS.EL2-->]EL3-->[S.EL2-->]S.EL1-->S.EL0.
During each transition the SPSR of each level except EL0 should be kept in sync with that level's PSTATE (ARM64) or CPSR (ARM32).
The PSTATE/CPSR is initialized by target level's software when transitioning from a high level to a lower level. For example OP-TEE initializes the PSTATE/CPSR upon SMC entry.
The PSTATE/CPSR is saved to the current level's SPSR by software prior to transitioning out to a lower level. The PSTATE/CPSR is restored from the SPSR automatically (without software intervention) upon "returning" to a higher level from a lower level.
Fixes: https://github.com/OP-TEE/optee_os/issues/3576
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Ali Zhang <alizhang@google.com>
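A simplified C sketch of the described typo (the real get_spsr() also sets mode and IT bits, which are omitted here):

  #include <stdint.h>

  #define CPSR_A  (UINT32_C(1) << 8)      /* asynchronous abort mask bit */

  /* Minimal AArch32 CPSR accessor, equivalent to OP-TEE's read_cpsr(). */
  static inline uint32_t read_cpsr(void)
  {
          uint32_t cpsr = 0;

          asm volatile("mrs %0, cpsr" : "=r" (cpsr));
          return cpsr;
  }

  /*
   * The SPSR value prepared for the switch to user mode must start from
   * the current CPSR (which has CPSR.A set since SMC entry), not from the
   * stale SPSR of the current mode; the typo used read_spsr() here.
   */
  static uint32_t spsr_for_user_mode(void)
  {
          return read_cpsr();
  }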
|
| bf4a9353 | 23-Jan-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: protect syscall table lookup from speculation
In user_ta_handle_svc(), as part of handling a syscall, there is a lookup in the syscall table which can be subject to a speculation attack. load_no_speculate() is used to protect the sensitive lookup.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
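A sketch of the call-site pattern, assuming a helper of the form load_no_speculate(elem, start, end) that returns *elem only when elem lies inside [start, end) and zero otherwise, without letting the CPU speculate past the bound (the table and wrapper names below are hypothetical):

  #include <stddef.h>

  typedef void (*syscall_fn)(void);

  /* Hypothetical table standing in for the real syscall table. */
  extern const syscall_fn syscall_table[];
  extern const size_t num_syscalls;

  /* Typed wrapper with the assumed load_no_speculate() semantics. */
  syscall_fn load_no_speculate_fn(const syscall_fn *elem,
                                  const syscall_fn *start,
                                  const syscall_fn *end);

  static syscall_fn get_syscall_fn(size_t scn)
  {
          /* The load is clamped to the table bounds even under speculation */
          return load_no_speculate_fn(&syscall_table[scn], syscall_table,
                                      syscall_table + num_syscalls);
  }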
|
| b979dff7 | 06-Feb-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: vm_unmap(): remove length alignment requirement
Removes the requirement that the length of the memory area to unmap must be page aligned. The supplied length is instead rounded up to the nearest page.
This fixes a regression with CFG_FTRACE_SUPPORT=y: E/TC:? 0 assertion '!res' failed at core/arch/arm/kernel/user_ta.c:571 <user_ta_dump_ftrace>
Fixes: cffe74d2446b ("core: fix assigned size of struct mobj_reg_shm") Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
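A minimal sketch of the relaxed contract (not the full vm_unmap(); the ROUNDUP here assumes a power-of-two page size):

  #include <stddef.h>

  #define SMALL_PAGE_SIZE 4096U
  #define ROUNDUP(v, size)        (((v) + (size) - 1) & ~((size) - 1))

  /* The caller may pass any length; it is rounded up to page granularity
   * before the region is unmapped, so page-aligned lengths are no longer
   * required. */
  static size_t unmap_len(size_t len)
  {
          return ROUNDUP(len, (size_t)SMALL_PAGE_SIZE);
  }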
|
| d780a7fb | 01-Feb-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: arm: set SCTLR_SPAN
Initializes SCTLR.SPAN to 1. SCTLR.SPAN was introduced with v8.1-PAN and was prior to that defined as RES1.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
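Together with the SCTLR_SPAN define added in the next entry, a sketch of the idea for AArch32 (the real initialization happens in the boot assembly; the accessors here mirror the arm32.h helpers):

  #include <stdint.h>

  /* ARMv8.1-PAN: SCTLR.SPAN is bit 23 (RES1 before v8.1). */
  #define SCTLR_SPAN      (UINT32_C(1) << 23)

  static inline uint32_t read_sctlr(void)
  {
          uint32_t sctlr = 0;

          asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r" (sctlr));
          return sctlr;
  }

  static inline void write_sctlr(uint32_t sctlr)
  {
          asm volatile("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
  }

  /* Keep SPAN set so taking an exception does not force PSTATE.PAN on. */
  static void init_sctlr_span(void)
  {
          write_sctlr(read_sctlr() | SCTLR_SPAN);
  }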
|
| 5746bdef | 01-Feb-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: arm: add SCTLR_SPAN define
Adds define for setting SCTLR.SPAN which is available with the architecture feature ARMv8.1-PAN in both AArch32 and AArch64.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7ce2319e | 03-Feb-2020 |
Henrik Uhrenfeldt <henrik.uhrenfeldt@huawei.com> |
hikey960: fix support for 4G & 6G boards
Since commit 4518cdc1ff64 ("core: arm64: introduce CFG_CORE_ARM64_PA_BITS") platforms are required to define CFG_CORE_ARM64_PA_BITS if their physical address space extends beyond 4G. This was missing for HiKey960 4G & 6G versions, which indeed have addresses beyond 4G.
Signed-off-by: Henrik Uhrenfeldt <henrik.uhrenfeldt@huawei.com>
|
| 1ba7f0bb | 27-Sep-2019 |
Cedric Neveux <cedric.neveux@nxp.com> |
drivers: CAAM driver User Buffer SGT create
The CAAM driver can operate directly on a user buffer; in that case the buffer may span non-contiguous physical pages.
CAAM uses DMA to load/store data from memory, and the DMA works with physical addresses. If a user buffer crosses multiple small pages, a CAAM Scatter-Gather Table (SGT) must be created to rebuild the physical memory chunks backing the user's virtual buffer.
Add a function to check whether a buffer is a user buffer crossing multiple small pages, and a function to create an SGT for the user buffer.
Signed-off-by: Cedric Neveux <cedric.neveux@nxp.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
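A sketch of the first helper described above (names and structure are illustrative, not the CAAM driver's actual code); virt_to_phys() stands in for OP-TEE's VA-to-PA translation:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  #define SMALL_PAGE_SIZE 4096U
  #define SMALL_PAGE_MASK (SMALL_PAGE_SIZE - 1)

  typedef uintptr_t vaddr_t;
  typedef uintptr_t paddr_t;

  /* Stand-in for OP-TEE's virt_to_phys(): resolve a mapped VA to a PA. */
  paddr_t virt_to_phys(void *va);

  /*
   * A user buffer needs an SGT when it spans more than one small page and
   * those pages are not physically contiguous, since the CAAM DMA only
   * understands physical addresses.
   */
  static bool user_buf_needs_sgt(void *buf, size_t len)
  {
          vaddr_t start = (vaddr_t)buf & ~(vaddr_t)SMALL_PAGE_MASK;
          vaddr_t end = (vaddr_t)buf + len;
          paddr_t prev_pa = virt_to_phys((void *)start);
          vaddr_t va = 0;

          /* Walk the covered small pages and compare successive base PAs */
          for (va = start + SMALL_PAGE_SIZE; va < end; va += SMALL_PAGE_SIZE) {
                  paddr_t pa = virt_to_phys((void *)va);

                  if (pa != prev_pa + SMALL_PAGE_SIZE)
                          return true;    /* non-contiguous: SGT required */
                  prev_pa = pa;
          }

          return false;   /* single page or physically contiguous */
  }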
|
| b6afa13a | 27-Jan-2020 |
Carlo Caione <ccaione@baylibre.com> |
plat-amlogic: Add initial support for Amlogic platforms
This is the initial support for the Amlogic platforms.
Tested 64-bit mode on an A113D (AXG) board using upstream TF-A BL31.
* xtest results (-l 15):
  | 44074 subtests of which 0 failed
  | 96 test cases of which 0 failed
  | 0 test cases were skipped
  | TEE test application done!
* Compiled with:
  | make PLATFORM=amlogic
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Carlo Caione <ccaione@baylibre.com>
|
| 5ef300e2 | 31-Jan-2020 |
Jerome Forissier <jerome@forissier.org> |
core_mmu: fix warnings when CFG_CORE_DYN_SHM=n && CFG_SECURE_DATA_PATH=n
The static function pbuf_is_special_mem() is used only when dynamic shared memory or the secure data path is enabled. Add the proper #ifdefs to fix the following warning:
$ make -s CFG_CORE_DYN_SHM=n CFG_SECURE_DATA_PATH=n
core/arch/arm/mm/core_mmu.c:260:13: warning: ‘pbuf_is_special_mem’ defined but not used [-Wunused-function]
  260 | static bool pbuf_is_special_mem(paddr_t pbuf, size_t len,
      |             ^~~~~~~~~~~~~~~~~~~
Signed-off-by: Jerome Forissier <jerome@forissier.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
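One plausible shape of the guard, as a self-contained sketch (the real function takes more parameters than the warning shows, and its body is unchanged):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  typedef uintptr_t paddr_t;

  /*
   * The helper is compiled in only when one of its users is enabled, so
   * -Wunused-function stays quiet in the other configurations.
   */
  #if defined(CFG_CORE_DYN_SHM) || defined(CFG_SECURE_DATA_PATH)
  static bool pbuf_is_special_mem(paddr_t pbuf, size_t len)
  {
          (void)pbuf;
          (void)len;
          /* ... real checks against the special memory areas go here ... */
          return false;
  }
  #endif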
|
| b8889ee9 | 31-Jan-2020 |
Jerome Forissier <jerome@forissier.org> |
core: entry_fast.c: fix warning when CFG_CORE_DYN_SHM=n
When CFG_CORE_DYN_SHM=n and CFG_TEE_CORE_LOG_LEVEL<3 we have:
$ make -s CFG_CORE_DYN_SHM=n CFG_TEE_CORE_LOG_LEVEL=2
core/arch/arm/tee/entry_fast.c: In function ‘tee_entry_exchange_capabilities’:
core/arch/arm/tee/entry_fast.c:65:7: warning: unused variable ‘dyn_shm_en’ [-Wunused-variable]
   65 |  bool dyn_shm_en = false;
      |       ^~~~~~~~~~
Add __maybe_unused to get rid of the warning.
Signed-off-by: Jerome Forissier <jerome@forissier.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
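A sketch of the one-line fix; __maybe_unused expands to the GCC unused attribute (OP-TEE defines it in its compiler header):

  #include <stdbool.h>

  #ifndef __maybe_unused
  #define __maybe_unused  __attribute__((__unused__))
  #endif

  static void exchange_capabilities_sketch(void)
  {
          /* Only read when CFG_CORE_DYN_SHM is enabled and traced at
           * log level >= 3, so mark it instead of adding more #ifdefs. */
          bool dyn_shm_en __maybe_unused = false;
  }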
|
| 58e47485 | 05-Nov-2019 |
Rouven Czerwinski <r.czerwinski@pengutronix.de> |
plat-imx: Add SA settings for i.MX6UL
The Secure Access register configures the access mode for non-TrustZone aware DMA masters. To ensure that no DMA master can read the secure memory for OP-TEE, we set access for all masters except the processor (Cortex-A7) to non-secure only and lock the settings afterwards.
Signed-off-by: Rouven Czerwinski <r.czerwinski@pengutronix.de> Reviewed-by: Clement Faure <clement.faure@nxp.com>
|
| cab01ed5 | 05-Nov-2019 |
Rouven Czerwinski <r.czerwinski@pengutronix.de> |
plat-imx: add CSU SA register for i.MX6/7
CSU_SA is at the same offset for both i.MX6 and i.MX7, add it to both register files.
Signed-off-by: Rouven Czerwinski <r.czerwinski@pengutronix.de> Reviewed-by: Clement Faure <clement.faure@nxp.com>
|
| 403cc5e3 | 18-Dec-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: arm64.h: add read_mpidr() macro
Adds the macro read_mpidr() to arm64.h to avoid ifdefs in code.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
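A sketch of the added macro (the accessor name read_mpidr_el1() mirrors what arm64.h generates; details may differ):

  #include <stdint.h>

  static inline uint64_t read_mpidr_el1(void)
  {
          uint64_t mpidr = 0;

          asm volatile("mrs %0, mpidr_el1" : "=r" (mpidr));
          return mpidr;
  }

  /* Same name as the AArch32 helper, so common code needs no #ifdefs. */
  #define read_mpidr()    read_mpidr_el1()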
|
| 121351f6 | 19-Dec-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: read thread_vector_table from assembly
Reads and returns thread_vector_table directly from assembly instead of saving the return value from generic_boot_init_primary(). With this change, generic_boot_init_primary() is declared in the same way whether configured with or without TF-A.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| fd44afdc | 28-Jan-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: pseudo_ta: check size of mapped mobj
Add a check in copy_in_param() to see that the mobj is large enough to hold the mapped parameter.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
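The shape of the added check, in a simplified sketch (the struct and field names are stand-ins for the real mobj):

  #include <stdbool.h>
  #include <stddef.h>

  struct mobj_sketch {
          size_t size;    /* total size of the memory object */
  };

  /* A memref parameter at (offs, sz) must lie entirely inside the mobj. */
  static bool param_fits_mobj(const struct mobj_sketch *mobj, size_t offs,
                              size_t sz)
  {
          if (offs > mobj->size || sz > mobj->size - offs)
                  return false;   /* would map beyond the end of the mobj */

          return true;
  }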
|
| a3f882bb | 29-Jan-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mobj_phys_get_va(): check offset is in range
Checks that the supplied offset is still within the range of the mobj.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 4befaadc | 29-Jan-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mobj_reg_shm_get_va(): check offset is in range
Checks that the supplied offset is still within the range of the mobj.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| da01e483 | 22-Jan-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: remove num_pages from struct mobj_reg_shm
Removes the redundant element num_pages from struct mobj_reg_shm.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| cffe74d2 | 21-Jan-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix assigned size of struct mobj_reg_shm
Prior to this patch, a struct mobj_reg_shm was initialized with num_pages * SMALL_PAGE_SIZE without taking page_offset into account. This patch fixes that by subtracting page_offset from that product.
Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
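A sketch of the corrected size computation:

  #include <stddef.h>

  #define SMALL_PAGE_SIZE 4096U

  /* The usable size of the registered shared-memory object is the size of
   * the covered pages minus the buffer's offset into the first page. */
  static size_t reg_shm_size(size_t num_pages, size_t page_offset)
  {
          return num_pages * SMALL_PAGE_SIZE - page_offset;
  }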
|