History log of /optee_os/core/arch/arm/ (Results 2976 – 3000 of 3635)
Revision Date Author Comments
62aeb34b  19-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: reduce unpaged size

Reduces unpaged size by excluding __thread_std_smc_entry() from the
unpaged graph.

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

95e4998a  19-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: arm: use weak symbols to reduce dependency graphs

Makes functions that need to be excluded from the unpaged and init parts
of the TEE binary weak. When building the dependency graph for the init
and unpaged parts, empty versions of those functions (from
core/arch/arm/kernel/link_dummies.c) are used instead.

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
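
A minimal sketch of the weak-symbol technique this commit describes, with illustrative names (not OP-TEE's actual symbols). The real build supplies empty dummies in core/arch/arm/kernel/link_dummies.c; here we use the simpler single-file form, where a weak declaration with no strong definition resolves to NULL, so calling code can skip it without pulling in its dependency graph:

```c
#include <stdio.h>

/*
 * Hypothetical hook, analogous to the functions made weak by this
 * commit. No strong definition exists in this build, so the linker
 * resolves the symbol to NULL instead of linking in its dependencies.
 */
__attribute__((weak)) void optional_init_hook(void);

int run_init(void)
{
	/* Call the hook only if some object file provided a definition. */
	if (optional_init_hook) {
		optional_init_hook();
		return 1;
	}
	return 0;
}
```

In the real tree the dummies are empty strong definitions linked only into the init/unpaged graph builds, but the effect is the same: the heavyweight implementation stays out of those parts of the binary.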

a04aa50f  19-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: bugfix undefined behavior in expand_prel31()

Fixes undefined behavior in expand_prel31() detected with
CFG_CORE_SANITIZE_UNDEFINED=y

ERROR: [0x0] TEE-CORE: Undefined behavior shift_out_of_bounds at core/arch/arm/kernel/unwind_arm32.c:102 col 42
ERROR: [0x0] TEE-CORE: Panic at core/kernel/ubsan.c:189 <__ubsan_handle_shift_out_of_bounds>

Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU)
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
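
The undefined behavior here is the classic prel31 idiom of left-shifting a signed value into its sign bit. A sketch of a UB-free alternative (illustrative, not necessarily the exact fix applied in unwind_arm32.c): the `(x ^ m) - m` idiom sign-extends from bit 30 using only well-defined unsigned arithmetic and an in-range subtraction:

```c
#include <stdint.h>

/*
 * Sign-extend a 31-bit PC-relative offset (prel31) to int32_t without
 * shifting into the sign bit of a signed type, which UBSan reports as
 * shift_out_of_bounds.
 */
static int32_t expand_prel31_safe(uint32_t prel31)
{
	const int32_t m = (int32_t)1 << 30;	/* sign bit of a prel31 */

	/* (x ^ m) - m sign-extends from bit 30; all steps are in range. */
	return (int32_t)((prel31 & 0x7fffffffu) ^ (uint32_t)m) - m;
}
```

For example, a prel31 value with all 31 bits set decodes to -1, and small positive offsets pass through unchanged.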

4fd20a12  19-Jun-2017  Etienne Carriere <etienne.carriere@linaro.org>

core: fix listing of init resources in linker file

Fix the missing space character to separate entries at generation of
init_entries.txt file. This file content is used as an argument list
string for the linker tool.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>

da4fad99  14-Jun-2017  Volodymyr Babchuk <vlad.babchuk@gmail.com>

mobj: mobj_reg_shm: fix bug in offset calculation

The wrong variable was used to calculate the offset.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>

a3ea24cf  16-Jun-2017  Etienne Carriere <etienne.carriere@linaro.org>

core: clarify end of static mapping table

Move the remaining code that relies on a null size value to detect the
end of the static mapping table over to a test on the type value. This
is made consistent between LPAE and non-LPAE implementations.

Rename MEM_AREA_NOTYPE into MEM_AREA_END as it is dedicated to this
specific purpose.

On invalid input, core_mmu_get_type_by_pa() can return MEM_AREA_MAXTYPE.

Add a comment highlighting that null-sized entries are not filled in the
static mapping directives table.

Forgive the trick on level_index_mask to fit in the 80 chars/line.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
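
The sentinel convention this commit settles on can be sketched as follows (a simplified illustration; the enum values and struct fields are stand-ins, not OP-TEE's full definitions):

```c
#include <stddef.h>

/* The table ends with an entry of type MEM_AREA_END, so a zero-sized
 * entry in the middle no longer terminates iteration by accident. */
enum mem_area_type { MEM_AREA_TEE_RAM, MEM_AREA_NSEC_SHM, MEM_AREA_END };

struct tee_mmap_region {
	enum mem_area_type type;
	size_t size;
};

size_t count_map_entries(const struct tee_mmap_region *mm)
{
	size_t n = 0;

	/* Terminate on the type value, not on a null size. */
	while (mm[n].type != MEM_AREA_END)
		n++;
	return n;
}
```

With this convention, an entry of size 0 before the MEM_AREA_END marker is still counted as a real entry.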

4e1faa2f  16-Jun-2017  Viktor Signayevskiy <v.signayevsk@samsung.com>

plat-sunxi: provide .bss section initialization before usage

BSS initialization is executed AFTER the initialization of the
MMU table (global variable array "static_memory_map[]"), so
the table is overwritten.
Change this so that BSS initialization executes BEFORE
static_memory_map[] is initialized by core_init_mmu_map().

Signed-off-by: Victor Signaevskyi <piligrim2007@meta.ua>
Fixes: https://github.com/OP-TEE/optee_os/issues/1607
Fixes: 236601217f7e ("core: remove __early_bss")
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
[jf: minor edits to the commit message, add Fixes:]
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>

8410cd94  24-May-2017  Andrew F. Davis <afd@ti.com>

plat-ti: Reserve first page of SRAM for secure boot software

The first 4KB of SRAM is used by the initial secure software and
OP-TEE should not be loaded to this address. Adjust the TEE_LOAD_ADDR
to reflect this.

Signed-off-by: Andrew F. Davis <afd@ti.com>

432f64c1  15-Jun-2017  Viktor Signayevskiy <v.signayevsk@samsung.com>

core: fix core_init_mmu_tables() loop

Fixes the terminating condition of the for loop in
core_init_mmu_tables() to rely on mm[n].type instead of mm[n].size.

Fixes: https://github.com/OP-TEE/optee_os/issues/1602
Signed-off-by: Victor Signaevskyi <piligrim2007@meta.ua>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
[jf: wrap commit description]
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>

142d5af2  06-Jun-2017  Volodymyr Babchuk <vlad.babchuk@gmail.com>

core: use mobjs for all shared buffers

To ease usage of REE-originated shared memory, all code that uses shared
buffer is moved to mobjs. That means that TA loader, fs_rpc, sockets, etc
all use mobjs to represent shared buffers instead of simple paddr_t.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Hikey)

9cf24e6b  02-Jun-2017  Volodymyr Babchuk <vlad.babchuk@gmail.com>

mobj: added new mobj type: mobj_shm

mobj_shm represents a buffer in the predefined SHM region.
It can be used to pass allocated SHM regions instead of a [paddr, size] pair.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Hikey)

0d9e6358  13-Jun-2017  Jerome Forissier <jerome.forissier@linaro.org>

plat-d02: Use LPAE, increase pager TZSRAM size to 512K and TEE_RAM to 2M

Fixes a boot error when CFG_WITH_PAGER=y:

INFO: TEE-CORE:
INFO: TEE-CORE: Pager is enabled. Hashes: 512 bytes
INFO: TEE-CORE: OP-TEE version: 2.4.0-136-g4ec2358 #25 Tue Jun 13 13:32:21 UTC 2017 arm
INFO: TEE-CORE: Shared memory address range: 50500000, 50f00000
ERROR: TEE-CORE: Panic at core/lib/libtomcrypt/src/tee_ltc_provider.c:500 <get_mpa_scratch_memory_pool>

Panic occurs because tee_pager_alloc() fails to allocate memory from
tee_mm_vcore. Fix this by increasing CFG_TEE_RAM_VA_SIZE from 1 to
2 MiB. This requires enabling LPAE, otherwise the TEE core panics with:

ERROR: TEE-CORE: Panic 'Unsupported page size in translation table' at core/arch/arm/mm/tee_pager.c:219 <set_alias_area>

Finally, CFG_CORE_TZSRAM_EMUL_SIZE has to be increased to at least
416 KiB to avoid:

LD out/arm-plat-d02/core/tee.elf
/usr/bin/arm-linux-gnueabihf-ld: OP-TEE can't fit init part into available physical memory

We choose 512 KiB because smaller values cause horrible performance.

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>

76497ff7  12-Jun-2017  Jerome Forissier <jerome.forissier@linaro.org>

plat-hikey: enable 64-bit paging

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>

5339dc54  01-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

plat-vexpress: enable 64-bit paging

Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (Hikey GP)
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Juno AArch64)
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (FVP AArch64)
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU AArch64)
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

9ba34389  01-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: arm: increase emulated SRAM

Increases emulated TrustZone protected SRAM to 448 kB to increase
the pager performance especially for 64-bit mode.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

4b60327f  01-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: update 64-bit copy_init from 32-bit version

Updates the copy_init part in generic_entry_a64.S from
generic_entry_a32.S

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

ebba8383  01-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: 64-bit update release_unused_kernel_stack()

When the pager is enabled, release_unused_kernel_stack() is called as
the state of a thread is saved, in order to release unused stack pages.

Update release_unused_kernel_stack() for 64-bit mode.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

64ec106b  01-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: bugfix tee_pager_release_phys()

Fixes the case where less than a page is to be released by ignoring the
request.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
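
The page-rounding logic behind "less than a page is to be released" can be sketched like this (illustrative helper, not OP-TEE's actual tee_pager_release_phys()): after aligning the start up and the end down to page boundaries, a range that no longer spans a full page is simply ignored:

```c
#include <stddef.h>
#include <stdint.h>

#define SMALL_PAGE_SIZE 4096u

/* Number of whole pages fully contained in [begin, begin + size). */
static size_t pages_to_release(uintptr_t begin, size_t size)
{
	uintptr_t mask = (uintptr_t)SMALL_PAGE_SIZE - 1;
	uintptr_t first = (begin + mask) & ~mask;	/* align up */
	uintptr_t last = (begin + size) & ~mask;	/* align down */

	/* Less than a full page remains: ignore the request. */
	if (last <= first)
		return 0;
	return (last - first) / SMALL_PAGE_SIZE;
}
```

A half-page request releases nothing, while a request covering two full pages releases both.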

11b025ea  01-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: link script: .bss alignment

.bss may need a larger alignment than 8. Instead of trying to guess, let
the linker choose it. To avoid having an unaccounted hole before .bss,
set __data_end first thing inside the .bss section. This makes sure that
the data section is properly padded out when assembling a paged tee.bin.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
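
The arrangement described above can be sketched as a linker-script fragment (illustrative only, not OP-TEE's actual kern.ld.S):

```
.bss : {
	/*
	 * __data_end is assigned first thing inside .bss: the linker
	 * picks whatever alignment .bss needs, and any padding it
	 * inserts before the section is accounted for as part of the
	 * data image when assembling a paged tee.bin.
	 */
	__data_end = .;
	__bss_start = .;
	*(.bss .bss.*)
	__bss_end = .;
}
```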

ecb06119  12-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: pager: invalidate tlb when clearing entry

When clearing an entry in a translation table, the corresponding TLB
entry must always be invalidated. With this patch two missing places
are addressed. This fixes a problem in xtest regression suite case 1016.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

95df5803  01-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: add dsb instructions for tlb invalidation

Adds DSB instructions needed for correct visibility of TLB
invalidations.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

d2ccd62a  01-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: make 64-bit tlb invalidation inner shareable

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

ad937c04  01-Jun-2017  Jens Wiklander <jens.wiklander@linaro.org>

core: assert against recursive mutex locking

Adds an assert to check that the thread holding a mutex does not try to
lock it again.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
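
The shape of the check can be sketched as follows (illustrative names, not OP-TEE's actual struct mutex internals): before taking a non-recursive mutex, assert that the caller is not already the owner, since locking it again would deadlock:

```c
#include <assert.h>
#include <stdbool.h>

struct mutex {
	bool locked;
	int owner;	/* thread id of the holder, -1 when unlocked */
};

void mutex_lock(struct mutex *m, int thread_id)
{
	/* Recursive locking of a non-recursive mutex is a bug:
	 * catch it in debug builds before it deadlocks. */
	assert(!(m->locked && m->owner == thread_id));
	m->locked = true;
	m->owner = thread_id;
}

void mutex_unlock(struct mutex *m)
{
	m->locked = false;
	m->owner = -1;
}
```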

aaaf00a2  08-Jun-2017  Jerome Forissier <jerome.forissier@linaro.org>

core: arm: make alignment check configurable

We occasionally get reports from people stumbling upon data abort
exceptions caused by alignment faults in TAs. The recommended fix is to
change the code so that the unaligned access won't occur. But it is
sometimes difficult to achieve.

Therefore we provide a compile-time option to disable alignment checks.
For AArch64 it applies to both SEL1 and SEL0.

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey)
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>

bdc5282e  07-Jun-2017  Etienne Carriere <etienne.carriere@linaro.org>

core: fix weakness in shm registration

When core needs to validate content before it is used, core must first
move the data into secure memory, then validate it (or not), then access
the validated data from secure memory only, not from the original shared
memory location.

This change fixes mobj_reg_shm_alloc() so that it checks the validity
of the registered reference after the references are copied into the
secure memory.

This change fixes mobj_mapped_shm_alloc() to use the shm buffer reference
instead of the initial description still located in shared memory.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
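
The copy-then-validate pattern this commit enforces can be sketched like this (illustrative names and a hypothetical descriptor struct, not OP-TEE's actual mobj code): data originating in non-secure shared memory is copied into secure memory first, and only the private copy is checked and used, since the normal world can change the shared original between a check and its use (a TOCTOU race):

```c
#include <stdint.h>

struct shm_desc {
	uint64_t page_count;
};

#define MAX_PAGES 512

static int register_shm(const volatile struct shm_desc *shared,
			struct shm_desc *secure_copy)
{
	/* 1. Copy the descriptor into secure memory first. */
	secure_copy->page_count = shared->page_count;

	/* 2. Validate the private copy, never the shared original. */
	if (!secure_copy->page_count || secure_copy->page_count > MAX_PAGES)
		return -1;

	/* 3. From here on, only secure_copy is used. */
	return 0;
}
```

Validating `shared` directly and then reading it again would reintroduce exactly the weakness the commit fixes.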
