History log of /optee_os/core/arch/arm/ (Results 2651 – 2675 of 3635)
Revision Date Author Comments
fb9489aa 17-Oct-2017 Jordan Rhee <jordanrh@microsoft.com>

core: fix psci_cpu_on() to use context_id parameter

The PSCI specification requires the context_id parameter to be
passed in r0 when the core jumps to normal world. Some OSes require
this parameter.

Tested on IMX6Quad and IMX7Dual.

Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Peng Fan <peng.fan@nxp.com>
Signed-off-by: Jordan Rhee <jordanrh@microsoft.com>
Tested-by: Jordan Rhee <jordanrh@microsoft.com>
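
The fix above can be sketched as follows (the names here are illustrative, not OP-TEE's actual internals): the context_id passed to PSCI CPU_ON must reach the released core in r0 when it enters the normal world. Since the AAPCS passes the first integer argument in r0, modeling the normal-world entry as a one-argument call captures the requirement.

```c
#include <assert.h>
#include <stdint.h>

struct cpu_on_args {
	uintptr_t entry;     /* normal world entry point */
	uint32_t context_id; /* opaque value the OS expects back in r0 */
};

static struct cpu_on_args boot_args[4]; /* one slot per core */

/* Record what the CPU_ON caller asked for (hypothetical helper) */
static void psci_cpu_on_store(unsigned int core, uintptr_t entry,
			      uint32_t context_id)
{
	boot_args[core].entry = entry;
	boot_args[core].context_id = context_id;
}

/* Value the released core would present in r0 at the normal world entry */
static uint32_t secondary_r0(unsigned int core)
{
	return boot_args[core].context_id;
}
```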

2f82082f 02-Feb-2018 Edison Ai <edison.ai@arm.com>

core: add ddr overall register

register_ddr() is used to register the overall DDR address range.
SDP memories, static SHM, secure DDR and so on need it to fix the
problem that they intersect with the overall DDR.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Edison Ai <edison.ai@arm.com>
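
A minimal sketch of why registering the overall DDR range helps (illustrative only; this is not the actual register_ddr() implementation): once the ranges are recorded, other regions such as SDP memory or static SHM can be checked for intersection against them.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for overall DDR ranges registered at init */
struct ddr_range { uintptr_t base; size_t size; };

static struct ddr_range overall_ddr[2];
static size_t overall_ddr_count;

static void register_ddr_range(uintptr_t base, size_t size)
{
	overall_ddr[overall_ddr_count++] = (struct ddr_range){ base, size };
}

/* True if [base, base + size) intersects any registered DDR range */
static bool intersects_ddr(uintptr_t base, size_t size)
{
	for (size_t n = 0; n < overall_ddr_count; n++) {
		uintptr_t b = overall_ddr[n].base;
		uintptr_t e = b + overall_ddr[n].size;

		if (base < e && base + size > b)
			return true;
	}
	return false;
}
```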

216816c8 02-Feb-2018 Edison Ai <edison.ai@arm.com>

core: rename register_nsec_ddr() to register_dynamic_shm()

register_nsec_ddr() is actually only used to register dynamic,
physically non-contiguous SHM, so renaming it to register_dynamic_shm()
makes this clearer.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Edison Ai <edison.ai@arm.com>

3889635b 28-Feb-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: select workaround vector in C

Replace the two assembly implementations for selecting the exception
vector with a common C version.

Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Hikey, QEMU)
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

cb615cce 28-Feb-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: arm.h: add more MIDR definitions

Adds MIDR_PRIMARY_PART_NUM_MASK and MIDR_IMPLEMENTER_MASK.

Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
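
A sketch of the field extraction these masks enable. The MIDR field layout (implementer in bits [31:24], primary part number in bits [15:4]) is architectural per the ARM ARM; the macro names mirror the commit, but the exact values in arm.h are assumed here, not quoted.

```c
#include <assert.h>
#include <stdint.h>

#define MIDR_IMPLEMENTER_SHIFT		24
#define MIDR_IMPLEMENTER_MASK		(0xffu << MIDR_IMPLEMENTER_SHIFT)
#define MIDR_PRIMARY_PART_NUM_SHIFT	4
#define MIDR_PRIMARY_PART_NUM_MASK	(0xfffu << MIDR_PRIMARY_PART_NUM_SHIFT)

/* Extract the implementer code, e.g. 0x41 for ARM Ltd. */
static uint32_t midr_implementer(uint32_t midr)
{
	return (midr & MIDR_IMPLEMENTER_MASK) >> MIDR_IMPLEMENTER_SHIFT;
}

/* Extract the primary part number, e.g. 0xc08 for Cortex-A8 */
static uint32_t midr_part_num(uint32_t midr)
{
	return (midr & MIDR_PRIMARY_PART_NUM_MASK) >>
	       MIDR_PRIMARY_PART_NUM_SHIFT;
}
```

This is what lets the later Spectre workaround code detect a Cortex-A8 at runtime.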

67682894 28-Feb-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: arm64.h: add read_midr_el1()

Adds read_midr_el1() and the alias read_midr().

Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

4366b8fe 28-Feb-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: arm32.h: add read_midr()

Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

f8031323 28-Feb-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: rename exception vectors

Rename the exception vectors to thread_excp_vect* for both ARM32 and
ARM64 for clarity. The vectors are also exported with global definitions.

Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

b14416d2 27-Feb-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: armv7: core_init_mmu_regs() init contextidr

The value of CONTEXTIDR is initially undefined, initialize it with a
sane value.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Tested-by: Jordan Rhee <jordanrh@microsoft.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

18f4fe3d 27-Feb-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: arm: kern.ld.S: stop using PROVIDE()

Stop using the PROVIDE() keyword in the linker script. The current usage
causes problems like:
out/arm-plat-vexpress/core/kern.ld:168: undefined symbol
`__asan_map_end' referenced in expression
make: *** [out/arm-plat-vexpress/core/tee.elf] Error 1

when compiled with certain flags and compilers.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

e2998dec 26-Feb-2018 Etienne Carriere <etienne.carriere@linaro.org>

core 32bit mmu: remove constraint on reuse of xlat tables

Since commit 5e36abf51875 ("mmu: implement generic mmu initialization")
the MMU 32bit descriptor mode allows mapping memories with different
attributes (except the NS state) using different entries of a common
level-2 MMU table. In the old days the non-LPAE layer failed to share
such level-2 tables and required a pgdir alignment constraint when
assigning the core virtual addresses to be mapped. This change removes
the now-useless constraint.

Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>

9d858c76 19-Jan-2018 Volodymyr Babchuk <vlad.babchuk@gmail.com>

mmu: add dump_xlat_tables() function

As we dropped the table initialization code from core_mmu_v7.c and
core_mmu_lpae.c, there are no means to visualize page tables now.

This patch adds a function that recursively prints the current state of
the page tables. Currently it prints the page tables only during
initialization, but it can be used to debug them at any time.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>

5e36abf5 19-Jan-2018 Volodymyr Babchuk <vlad.babchuk@gmail.com>

mmu: implement generic mmu initialization

This patch adds the function core_mmu_map_region() that maps a given
memory region. This function is generic in the sense that it can map
memory for both short and long descriptor formats, as it uses primitives
provided by core_mmu_v7 and core_mmu_lpae.

Also, this function tries to use the largest allocation blocks
possible. For example, if a memory region is not aligned to PGDIR_SIZE
but spans multiple pgdirs, core_mmu_map_region() will map
most of this region with large blocks, and only the start/end will be
mapped with small pages.

As core_mmu_map_region() provides all means needed for MMU initialization,
we can drop mmu-specific code in core_mmu_v7.c and core_mmu_lpae.c

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
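
The largest-block-possible walk can be modeled with a toy counter (illustrative only; the real code emits table entries rather than counts, and these names are not OP-TEE's): at each step, use a 1 MiB section when the cursor is section-aligned and at least a section remains, otherwise fall back to a 4 KiB page.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SMALL_PAGE_SIZE 0x1000u   /* 4 KiB small page */
#define SECTION_SIZE    0x100000u /* 1 MiB short-descriptor section */

struct map_count { unsigned int sections; unsigned int pages; };

/* Count how a region would be split into sections and small pages */
static struct map_count map_region_blocks(uintptr_t base, size_t size)
{
	struct map_count c = { 0, 0 };
	uintptr_t va = base;
	uintptr_t end = base + size;

	while (va < end) {
		if (!(va & (SECTION_SIZE - 1)) && end - va >= SECTION_SIZE) {
			c.sections++; /* large block in the middle */
			va += SECTION_SIZE;
		} else {
			c.pages++; /* unaligned head or short tail */
			va += SMALL_PAGE_SIZE;
		}
	}
	return c;
}
```

A region starting 16 KiB below a section boundary and spanning two-plus sections gets a few pages at each end and sections in the middle.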

4c4ae210 19-Jan-2018 Volodymyr Babchuk <vlad.babchuk@gmail.com>

mmu: replace _prepare_small_page_mapping with _entry_to_finer_grained

core_mmu_prepare_small_page_mapping() just prepares a table for the
next level if there were no mappings already.
core_mmu_entry_to_finer_grained() will do the same even if something
is already mapped there.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>

512f5091 24-Jan-2018 Volodymyr Babchuk <vlad.babchuk@gmail.com>

smc: extend protocol to support virtualization

In order to support multiple guests, OP-TEE should be able to track
guest lifecycle. The idea is that the hypervisor informs OP-TEE when it
wants to create a new virtual machine. OP-TEE allocates resources
for it, or returns an error if there are not enough resources available.
When a virtual machine is destroyed, OP-TEE frees any resources
that were allocated previously.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
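
The lifecycle tracking described above reduces to a small create/destroy state machine, sketched here with hypothetical names and a fixed guest table (the real protocol involves SMC calls and per-guest partitions, which this ignores):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_GUESTS 2

static bool guest_active[MAX_GUESTS]; /* per-guest resource slot */

/* Hypervisor announces a new VM: 0 on success, -1 if out of resources */
static int vm_created(unsigned int vm_id)
{
	if (vm_id >= MAX_GUESTS || guest_active[vm_id])
		return -1;
	guest_active[vm_id] = true; /* allocate per-guest state here */
	return 0;
}

/* Hypervisor tears a VM down: frees whatever was allocated for it */
static int vm_destroyed(unsigned int vm_id)
{
	if (vm_id >= MAX_GUESTS || !guest_active[vm_id])
		return -1;
	guest_active[vm_id] = false; /* free per-guest state here */
	return 0;
}
```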

ab535411 02-Feb-2018 Etienne Carriere <etienne.carriere@linaro.org>

core: add pager constraint on mobj get_pa method

On a user TA crash or panic the core may dump the TA state,
including the physical addresses of the memory areas mapped
in the TA space, which are referenced by the mobj layer.
Therefore the get_pa method for such mobjs shall have a
KEEP_PAGER constraint.

This change adds such constraint for static shm and registered
shm memory objects.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>

bda4804c 01-Feb-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: sm_a32: add missing isb after scr change

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

ae9208f1 30-Jan-2018 Jens Wiklander <jens.wiklander@linaro.org>

arm32: enable ACTLR_CA8_ENABLE_INVALIDATE_BTB

Enables ACTLR_CA8_ENABLE_INVALIDATE_BTB (ACTLR[6]) in generic boot if
compiled with CFG_CORE_WORKAROUND_SPECTRE_BP or
CFG_CORE_WORKAROUND_SPECTRE_BP_SEC and the CPU is discovered to be a
Cortex-A8.

Fixes CVE-2017-5715
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

259d7eb1 25-Jan-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: mmu: protect global asid field with a lock

Protects the global ASID bitfield (g_asid) with a spinlock.

Fixes: 99f969dd6c99 ("core: fine grained tee_ta_mutex locking")
Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Hikey)
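
The pattern being fixed can be sketched as a bitfield allocator guarded by a lock. Here a C11 atomic_flag spin loop stands in for OP-TEE's cpu_spin_lock(), and the names are illustrative; without the lock, two cores could test the same free bit concurrently and hand out the same ASID.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define MAX_ASIDS 32

static uint32_t g_asid_used; /* bit n set => ASID n + 1 is in use */
static atomic_flag g_asid_lock = ATOMIC_FLAG_INIT;

/* Allocate a free ASID under the lock; 0 means allocation failed */
static unsigned int asid_alloc(void)
{
	unsigned int r = 0;

	while (atomic_flag_test_and_set(&g_asid_lock))
		; /* spin: another core holds the lock */

	for (unsigned int n = 0; n < MAX_ASIDS; n++) {
		if (!(g_asid_used & (1u << n))) {
			g_asid_used |= 1u << n;
			r = n + 1;
			break;
		}
	}

	atomic_flag_clear(&g_asid_lock);
	return r;
}
```

The test-and-set plus the bitfield update must form one critical section; that is exactly what the commit's spinlock provides.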

6fde6f02 25-Jan-2018 Jerome Forissier <jerome.forissier@linaro.org>

Revert "core: fine grained tee_ta_mutex locking"

Commit 99f969dd6c99 ("core: fine grained tee_ta_mutex locking") fixes a
deadlock that can occur if a TA is loaded while not enough page tables
are available in pgt_cache to map the context. But it also splits up a
big critical section, and there are evidently a few hidden dependencies
on tee_ta_mutex causing stability issues with the pager. Running
'while xtest 1013; do true; done' in AArch64 with at least three
threads running in parallel will ultimately fail.

Therefore, revert the fine grained locking commit until the race
conditions are sorted out.

Reported-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>

c10e9d48 24-Jan-2018 Volodymyr Babchuk <vlad.babchuk@gmail.com>

secstor: fix memory leak in install_ta()

If the signature check fails, we need to close the tadb session first.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
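
The class of leak being fixed, sketched with hypothetical stand-ins (the real install_ta() operates on tadb handles, not a boolean): an early return after opening the session skips the close, so all exits are routed through a cleanup label instead.

```c
#include <assert.h>
#include <stdbool.h>

static bool session_open; /* stand-in for the tadb session handle */

static void tadb_open(void)  { session_open = true; }
static void tadb_close(void) { session_open = false; }

/* Returns 0 on success, -1 on a failed signature check */
static int install_ta(bool sig_ok)
{
	int res = 0;

	tadb_open();
	if (!sig_ok) {
		res = -1;
		goto out; /* previously returned here, leaking the session */
	}
	/* ... write the TA to secure storage ... */
out:
	tadb_close(); /* runs on every path, success or failure */
	return res;
}
```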

e9596d07 22-Jan-2018 Etienne Carriere <etienne.carriere@linaro.org>

core: prevent crash in tee_mmu_final() on TA loading error

If the creation of the TA execution context fails before the mapping
directives are initialized, tee_mmu_final() will be called with the TA
context field mmu being NULL.

This change allows tee_mmu_final() to be called with uninitialized
mapping resources without crashing the core.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
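
The shape of the guard, in a simplified model (field names are assumptions for illustration; the real tee_mmu_final() frees more than one allocation): the teardown path must tolerate a context whose mmu field was never populated.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct tee_mmu_info { int dummy; };

/* Simplified TA context: mmu stays NULL until mapping init succeeds */
struct user_ta_ctx {
	struct tee_mmu_info *mmu;
};

static void tee_mmu_final(struct user_ta_ctx *utc)
{
	/* Guard: TA loading may fail before utc->mmu was ever allocated */
	if (!utc->mmu)
		return;
	free(utc->mmu);
	utc->mmu = NULL;
}
```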

24a8c0ad 19-Dec-2017 Peter Griffin <peter.griffin@linaro.org>

hikey: Enable cache APIs for hikey platform.

When decrypting into SDP buffers, TAs like PlayReady
and Widevine need to be able to flush the cache.

Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>

99f969dd 18-Jan-2018 Jens Wiklander <jens.wiklander@linaro.org>

core: fine grained tee_ta_mutex locking

Changes TA loading and session initialization to use fine grained locking
based on the tee_ta_mutex.

This avoids a potential deadlock with the PGT cache where we're waiting
for new page tables with tee_ta_mutex locked, which prevents
tee_ta_close_session() from indirectly returning any page tables.

This change also removes the last really big critical section. With this
TAs can be loaded in parallel.

Reported-by: Zhizhou Zhang <zhizhouzhang@asrmicro.com>
Tested-by: Zhizhou Zhang <zhizhouzhang@asrmicro.com>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey960)
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

4eaf9b04 18-Jan-2018 Jerome Forissier <jerome.forissier@linaro.org>

Fix compiler warning with register_sdp_mem()

Fixes the following warning/error when CFG_SECURE_DATA_PATH is disabled:

$ make PLATFORM=hikey CFG_SECURE_DATA_PATH=n
...
core/arch/arm/mm/core_mmu.c:90:61: error: ISO C does not allow extra ';' outside of a function [-Werror=pedantic]
register_sdp_mem(CFG_TEE_SDP_MEM_BASE, CFG_TEE_SDP_MEM_SIZE);
^
cc1: all warnings being treated as errors

Fixes: 2d9ed57b6bd8 ("Define register_sdp_mem() only when CFG_SECURE_DATA_PATH is defined")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
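
One common way to silence this class of warning, sketched with assumed names (OP-TEE's actual disabled-case definition may differ): when the feature is off, expand the macro to a harmless unused declaration so the file-scope trailing ';' terminates something legal, using __COUNTER__ to keep repeated uses unique.

```c
#include <assert.h>

#define SDP_CONCAT2(a, b) a##b
#define SDP_CONCAT(a, b) SDP_CONCAT2(a, b)

/* Disabled-case stub: the expansion is a declaration, so the caller's
 * trailing ';' no longer sits "outside of a function" on its own. */
#define register_sdp_mem(base, size) \
	static const int SDP_CONCAT(sdp_dummy_, __COUNTER__) \
		__attribute__((unused)) = 0

/* File-scope use compiles cleanly even with -pedantic */
register_sdp_mem(0x80000000, 0x00400000);
```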
