History log of /optee_os/core/ (Results 5326 – 5350 of 6456)
Revision Date Author Comments
a3ea24cf 16-Jun-2017 Etienne Carriere <etienne.carriere@linaro.org>

core: clarify end of static mapping table

Replace the remaining code that relies on a null size value to detect
the end of the static mapping table with a test on the type value. This
is made consistent between the LPAE and non-LPAE implementations.

Rename MEM_AREA_NOTYPE into MEM_AREA_END as it is dedicated to this
specific purpose.

core_mmu_get_type_by_pa() can return MEM_AREA_MAXTYPE in faulty/invalid
cases.

Add a comment highlighting that null-sized entries are not filled in the
static mapping directives table.

Forgive the trick on level_index_mask to fit in the 80 chars/line.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
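
The end-of-table convention this commit describes can be sketched as
follows; the enum and struct names here are illustrative stand-ins, not
OP-TEE's exact definitions:

```c
/*
 * Sketch: terminate a static mapping table with a sentinel type
 * (MEM_AREA_END) instead of a zero size, so zero-sized entries are
 * no longer special. Names are hypothetical for illustration.
 */
#include <assert.h>
#include <stddef.h>

enum mem_area {
	MEM_AREA_TEE_RAM,
	MEM_AREA_NSEC_SHM,
	MEM_AREA_IO_SEC,
	MEM_AREA_END,		/* sentinel: marks the end of the table */
	MEM_AREA_MAXTYPE
};

struct tee_mmap_region {
	enum mem_area type;
	unsigned long pa;
	size_t size;
};

/* Count entries by testing the type, not the size */
static size_t count_map_entries(const struct tee_mmap_region *mm)
{
	size_t n = 0;

	while (mm[n].type != MEM_AREA_END)
		n++;
	return n;
}
```

With this convention a zero-sized region in the middle of the table no
longer ends the scan.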

4e1faa2f 16-Jun-2017 Viktor Signayevskiy <v.signayevsk@samsung.com>

plat-sunxi: provide .bss section initialization before usage

BSS initialization is executed AFTER the initialization of the
MMU table (global variable array "static_memory_map[]"), so
the table is overwritten.
Change this so that BSS initialization executes BEFORE
static_memory_map[] is initialized by core_init_mmu_map().

Signed-off-by: Victor Signaevskyi <piligrim2007@meta.ua>
Fixes: https://github.com/OP-TEE/optee_os/issues/1607
Fixes: 236601217f7e ("core: remove __early_bss")
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
[jf: minor edits to the commit message, add Fixes:]
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
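
The ordering bug above comes down to zeroing .bss after globals have
already been written. A generic, hedged sketch of the early zeroing
step (in a real boot flow the bounds come from linker symbols such as
__bss_start/__bss_end; here they are plain pointers so the routine can
be exercised anywhere):

```c
/*
 * Sketch of early .bss zeroing. This must run BEFORE anything (like
 * core_init_mmu_map()) writes globals such as static_memory_map[],
 * otherwise those writes are wiped out.
 */
#include <assert.h>
#include <stddef.h>
#include <string.h>

static void clear_bss(char *bss_start, char *bss_end)
{
	memset(bss_start, 0, (size_t)(bss_end - bss_start));
}
```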

8410cd94 24-May-2017 Andrew F. Davis <afd@ti.com>

plat-ti: Reserve first page of SRAM for secure boot software

The first 4KB of SRAM is used by the initial secure software and
OP-TEE should not be loaded to this address. Adjust the TEE_LOAD_ADDR
to reflect this.

Signed-off-by: Andrew F. Davis <afd@ti.com>

432f64c1 15-Jun-2017 Viktor Signayevskiy <v.signayevsk@samsung.com>

core: fix core_init_mmu_tables() loop

Fixes the terminating condition of the for loop in
core_init_mmu_tables() to rely on mm[n].type instead of mm[n].size.

Fixes: https://github.com/OP-TEE/optee_os/issues/1602
Signed-off-by: Victor Signaevskyi <piligrim2007@meta.ua>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
[jf: wrap commit description]
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>

142d5af2 06-Jun-2017 Volodymyr Babchuk <vlad.babchuk@gmail.com>

core: use mobjs for all shared buffers

To ease usage of REE-originated shared memory, all code that uses shared
buffers is moved to mobjs. That means that the TA loader, fs_rpc,
sockets, etc. all use mobjs to represent shared buffers instead of a
simple paddr_t.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Hikey)
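
The mobj idea can be sketched as a buffer object with an ops table in
place of a raw paddr_t. The real interface (core/include/mm/mobj.h) is
much richer; the names below are simplified for illustration:

```c
/*
 * Minimal illustrative mobj: callers go through ops rather than
 * handling a raw physical address. Hypothetical, simplified names.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct mobj;

struct mobj_ops {
	void *(*get_va)(struct mobj *mobj, size_t offset);
	size_t (*get_size)(struct mobj *mobj);
};

struct mobj {
	const struct mobj_ops *ops;
	void *buf;
	size_t size;
};

static void *plain_get_va(struct mobj *mobj, size_t offset)
{
	if (offset >= mobj->size)
		return NULL;	/* offset must stay inside the buffer */
	return (uint8_t *)mobj->buf + offset;
}

static size_t plain_get_size(struct mobj *mobj)
{
	return mobj->size;
}

static const struct mobj_ops plain_ops = {
	.get_va = plain_get_va,
	.get_size = plain_get_size,
};
```

One payoff of the indirection: bounds checks and address translation
live in one place instead of at every call site.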

9cf24e6b 02-Jun-2017 Volodymyr Babchuk <vlad.babchuk@gmail.com>

mobj: added new mobj type: mobj_shm

mobj_shm represents a buffer in the predefined SHM region. It can be
used to pass allocated shm regions instead of a [paddr, size] pair.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Hikey)

50f24313 07-Mar-2017 Volodymyr Babchuk <vlad.babchuk@gmail.com>

msg_param: add msg_param.c with helper functions

This patch adds various helper functions to manipulate parameters
passed to/from the normal world.

It also introduces a new optee_param type which is used to pass long
lists of parameters.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Hikey)

0d9e6358 13-Jun-2017 Jerome Forissier <jerome.forissier@linaro.org>

plat-d02: Use LPAE, increase pager TZSRAM size to 512K and TEE_RAM to 2M

Fixes a boot error when CFG_WITH_PAGER=y:

INFO: TEE-CORE:
INFO: TEE-CORE: Pager is enabled. Hashes: 512 bytes
INFO: TEE-CORE: OP-TEE version: 2.4.0-136-g4ec2358 #25 Tue Jun 13 13:32:21 UTC 2017 arm
INFO: TEE-CORE: Shared memory address range: 50500000, 50f00000
ERROR: TEE-CORE: Panic at core/lib/libtomcrypt/src/tee_ltc_provider.c:500 <get_mpa_scratch_memory_pool>

Panic occurs because tee_pager_alloc() fails to allocate memory from
tee_mm_vcore. Fix this by increasing CFG_TEE_RAM_VA_SIZE from 1 to
2 MiB. This implies enabling LPAE, otherwise the TEE core panics with:

ERROR: TEE-CORE: Panic 'Unsupported page size in translation table' at core/arch/arm/mm/tee_pager.c:219 <set_alias_area>

Finally, CFG_CORE_TZSRAM_EMUL_SIZE has to be increased to at least
416 KiB to avoid:

LD out/arm-plat-d02/core/tee.elf
/usr/bin/arm-linux-gnueabihf-ld: OP-TEE can't fit init part into available physical memory

We choose 512 KiB because smaller values cause horrible performance.

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
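
A hedged sketch of the kind of platform config changes the commit
describes (option names are taken from the commit; exact file and
placement inside plat-d02 are not shown here):

```make
# Illustrative config fragment only, not the verbatim plat-d02 change
CFG_WITH_LPAE ?= y
# 2 MiB of TEE core virtual address space
CFG_TEE_RAM_VA_SIZE ?= 0x200000
# 512 KiB emulated TZSRAM (at least 416 KiB is needed to link the
# pager init part; 512 KiB avoids the performance cliff)
CFG_CORE_TZSRAM_EMUL_SIZE ?= 524288
```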

76497ff7 12-Jun-2017 Jerome Forissier <jerome.forissier@linaro.org>

plat-hikey: enable 64-bit paging

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>

5339dc54 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

plat-vexpress: enable 64-bit paging

Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (Hikey GP)
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (Juno AArch64)
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (FVP AArch64)
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU AArch64)
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

9ba34389 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: arm: increase emulated SRAM

Increases the emulated TrustZone protected SRAM to 448 kB to improve
pager performance, especially in 64-bit mode.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

4b60327f 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: update 64-bit copy_init from 32-bit version

Updates the copy_init part in generic_entry_a64.S from
generic_entry_a32.S.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

ebba8383 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: 64-bit update release_unused_kernel_stack()

When the pager is enabled, release_unused_kernel_stack() is called as
the state of a thread is saved, in order to release unused stack pages.

Update release_unused_kernel_stack() for 64-bit mode.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

64ec106b 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: bugfix tee_pager_release_phys()

Fixes the case where less than a page is to be released by ignoring the
request.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

11b025ea 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: link script: .bss alignment

.bss may need a larger alignment than 8. Instead of trying to guess,
let the linker choose it. To avoid having an unaccounted hole before
.bss, set __data_end first thing inside the .bss section. This makes
sure that the data section is properly padded out when assembling a
paged tee.bin.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
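
The trick reads roughly like this in linker-script form (a sketch with
surrounding context omitted; symbol names follow the commit, the rest
is illustrative):

```ld
/* Let the linker pick the .bss output alignment itself, and pin
 * __data_end to the very first byte inside .bss so no unaccounted
 * alignment hole sits between the data section and the symbol. */
.bss : {
	__data_end = .;		/* first thing inside .bss */
	__bss_start = .;
	*(.bss .bss.*)
	*(COMMON)
	__bss_end = .;
}
```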

ecb06119 12-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: invalidate tlb when clearing entry

When clearing an entry in a translation table, the corresponding TLB
entry must always be invalidated. With this patch two missing places
are addressed. This fixes a problem in xtest regression suite case 1016.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

95df5803 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: add dsb instructions for tlb invalidation

Adds DSB instructions needed for correct visibility of TLB
invalidations.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
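
Together with the previous commit, the resulting pattern is the usual
barrier bracketing around TLB invalidation. An AArch64 sketch (the
instruction sequence is generic; the choice of TLBI variant and the
register holding the encoded VA are illustrative):

```asm
	dsb	ishst		/* make the table update visible
				   before the invalidation */
	tlbi	vaae1is, x0	/* invalidate the VA, inner shareable,
				   x0 holds the encoded address */
	dsb	ish		/* wait for the TLBI to complete */
	isb			/* synchronize the instruction stream */
```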

d2ccd62a 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: make 64-bit tlb invalidation inner shareable

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

43d269aa 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

ltc: fix 64-bit warning

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

58c83eb5 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: REE_FS: avoid deadlock in ree_fs_create()

ree_fs_close() can't be called in ree_fs_create() cleanup as
ree_fs_close() tries to acquire the mutex already acquired in
ree_fs_create(). Copy relevant content from ree_fs_close() instead.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

ad937c04 01-Jun-2017 Jens Wiklander <jens.wiklander@linaro.org>

core: assert against recursive mutex locking

Adds an assert to check that the thread holding a mutex doesn't try to
lock it again.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
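
The check can be sketched as follows; this is a simplified stand-in
(OP-TEE's mutex in core/kernel/mutex.c tracks more state), with -1
meaning unlocked:

```c
/*
 * Sketch of asserting against recursive locking: the current holder
 * must never call lock again. Hypothetical, single-threaded model.
 */
#include <assert.h>

struct mutex {
	int owner_id;	/* -1 when unlocked */
};

static void mutex_lock(struct mutex *m, int self)
{
	/* Catch recursive locking by the current holder */
	assert(m->owner_id != self);
	/* ... in real code: block until free ... */
	m->owner_id = self;
}

static void mutex_unlock(struct mutex *m, int self)
{
	assert(m->owner_id == self);	/* only the holder may unlock */
	m->owner_id = -1;
}
```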

aaaf00a2 08-Jun-2017 Jerome Forissier <jerome.forissier@linaro.org>

core: arm: make alignment check configurable

We occasionally get reports from people stumbling upon data abort
exceptions caused by alignment faults in TAs. The recommended fix is to
change the code

core: arm: make alignment check configurable

We occasionally get reports from people stumbling upon data abort
exceptions caused by alignment faults in TAs. The recommended fix is to
change the code so that the unaligned access won't occur, but that is
sometimes difficult to achieve.

Therefore we provide a compile-time option to disable alignment checks.
For AArch64 it applies to both SEL1 and SEL0.

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey)
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>

4c56bf5f 07-Jun-2017 Peng Fan <peng.fan@nxp.com>

drivers: tzc380: add tzc380 driver

Add tzc380 driver support.

The usage:
Use tzc_init(vaddr_t base) to get the tzc380 configuration.
Use tzc_configure_region to configure the memory region,
such as

drivers: tzc380: add tzc380 driver

Add tzc380 driver support.

Usage:
Use tzc_init(vaddr_t base) to get the tzc380 configuration.
Use tzc_configure_region() to configure a memory region, for example:
"tzc_configure_region(5, 0x4e000000,
TZC_ATTR_REGION_SIZE(TZC_REGION_SIZE_32M) | TZC_ATTR_REGION_EN_MASK |
TZC_ATTR_SP_S_RW);"

Signed-off-by: Peng Fan <peng.fan@nxp.com>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>

bdc5282e 07-Jun-2017 Etienne Carriere <etienne.carriere@linaro.org>

core: fix weakness in shm registration

When core needs to validate content before it is used, core must first
move the data in secure memory, then validate it (or not), then access
validated data fr

core: fix weakness in shm registration

When core needs to validate content before it is used, core must first
move the data into secure memory, then validate it (or not), and then
access the validated data from secure memory only, not from the
original shared memory location.

This change fixes mobj_reg_shm_alloc() so that it checks the validity
of the registered reference after the references are copied into the
secure memory.

This change fixes mobj_mapped_shm_alloc() to use the shm buffer reference
instead of the initial description still located in shared memory.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
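
The copy-then-validate rule guards against the normal world changing a
descriptor after it was checked (a classic TOCTOU). A hedged sketch
with a hypothetical descriptor and checks (the commit applies this idea
inside mobj_reg_shm_alloc() and mobj_mapped_shm_alloc()):

```c
/*
 * Sketch: snapshot a shared-memory descriptor into secure memory
 * FIRST, validate the private copy, and use only that copy afterwards.
 * Descriptor layout and limits are illustrative.
 */
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct shm_desc {
	unsigned long page_count;
};

#define MAX_PAGES 512

/* Returns 0 on success, -1 if the snapshot fails validation */
static int import_shm_desc(const struct shm_desc *nonsecure,
			   struct shm_desc *secure)
{
	memcpy(secure, nonsecure, sizeof(*secure));	/* snapshot */
	if (!secure->page_count || secure->page_count > MAX_PAGES)
		return -1;
	/* From here on, only *secure is consulted */
	return 0;
}
```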

57aabac5 02-Jun-2017 Bogdan Liulko <bogdan.liulko@globallogic.com>

Remove buffering for AES CTR

CTR mode of AES algorithm turns block cipher into stream cipher.
It means that input data can has any size independent from block
size. It must be processed and result c

Remove buffering for AES CTR

The CTR mode of the AES algorithm turns the block cipher into a stream
cipher. This means the input data can have any size, independent of
the block size, and the resulting ciphertext must be produced after
each TEE_CipherUpdate() call. That is why it is incorrect for AES CTR
to buffer the input on a TEE_CipherUpdate() call when the size is not
a multiple of the block size.

Signed-off-by: Bogdan Liulko <bogdan.liulko@globallogic.com>
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey)
Tested-by: Bogdan Liulko <bogdan.liulko@globallogic.com> (R-Car)
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
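
Why no buffering is needed: CTR XORs the input against a keystream, so
any input length yields output immediately; only the unconsumed part of
the current keystream block must be carried across calls. A toy
illustration (the "block cipher" here is a dummy stand-in, NOT AES):

```c
/*
 * Toy CTR sketch: keystream generated block by block, partial
 * keystream carried across update calls instead of buffering input.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLK 16

struct ctr_ctx {
	uint64_t counter;	/* simplified counter */
	uint8_t ks[BLK];	/* current keystream block */
	size_t used;		/* bytes of ks already consumed */
};

/* Dummy keystream derivation (stand-in for AES-ECB of the counter) */
static void next_ks(struct ctr_ctx *c)
{
	for (size_t i = 0; i < BLK; i++)
		c->ks[i] = (uint8_t)(c->counter * 31 + i * 7 + 1);
	c->counter++;
	c->used = 0;
}

static void ctr_init(struct ctr_ctx *c)
{
	memset(c, 0, sizeof(*c));
	c->used = BLK;	/* force keystream generation on first use */
}

/* Encrypts (or decrypts) len bytes; works for any len */
static void ctr_update(struct ctr_ctx *c, const uint8_t *in,
		       uint8_t *out, size_t len)
{
	for (size_t i = 0; i < len; i++) {
		if (c->used == BLK)
			next_ks(c);
		out[i] = in[i] ^ c->ks[c->used++];
	}
}
```

Splitting the input across calls at any boundary produces the same
ciphertext as a single call, which is exactly why per-call buffering
is unnecessary.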
