History log of /optee_os/ (Results 5526 – 5550 of 8382)
Revision Date Author Comments
a401bcfb 12-Mar-2019 Bastien Simondi <bsimondi@netflix.com>

core: check allocated size of temporary secure memory

When servicing syscall_invoke_ta_command(), the invoked TA could modify
the .size field. Make sure the allocated buffer is not overwritten on
return.
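
A minimal sketch of the check this implies, with assumed names (the real
parameter structures in OP-TEE differ):

    /* Saved before invoking the TA; the TA may rewrite mem->size */
    size_t alloc_size = mem->size;

    /* ... the TA runs and may modify mem->size ... */

    /* Never trust the returned size beyond what was allocated */
    if (mem->size > alloc_size)
        mem->size = alloc_size;    /* clamp to the allocation */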

Signed-off-by: Bastien Simondi <bsimondi@netflix.com>
[jf: fix multi-line comment, replace '= { 0 };' with '= { };']
[jf: add commit description]
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

ad565116 25-Feb-2019 Jerome Forissier <jerome.forissier@linaro.org>

core: crypto: libtomcrypt: enable LTC_CLEAN_STACK

Enables LTC_CLEAN_STACK so that LibTomCrypt will wipe key material and
other sensitive data once no longer used.

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Suggested-by: Bastien Simondi <bsimondi@netflix.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

3ca4a1ca 25-Feb-2019 Jerome Forissier <jerome.forissier@linaro.org>

core: FS: wipe sensitive data after use

The secure storage code makes use of various cryptographic data (keys
and IVs). Make sure the buffers are wiped after use to minimize the
risks that sensitive data may be leaked to an attacker who would have
gained some access to the secure memory.
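
A sketch of the pattern, with illustrative buffer names and a wiping
helper in the spirit of memzero_explicit() (the actual code may use a
plain memset()-style scrub):

    uint8_t fek[FEK_SIZE];    /* file encryption key, name assumed */
    uint8_t iv[IV_SIZE];

    res = fs_crypt_run(fek, iv, data, len);    /* hypothetical helper */

    /* Wipe key material as soon as it is no longer needed */
    memzero_explicit(fek, sizeof(fek));
    memzero_explicit(iv, sizeof(iv));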

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reported-by: Bastien Simondi <bsimondi@netflix.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

13a26601 12-Mar-2019 Jerome Forissier <jerome.forissier@linaro.org>

core: thread: use READ_ONCE() when accessing data in shared memory

In some places we read a value from shared memory, then based on the
value we take some actions. When multiple tests are done, we should make
sure that the value is not read multiple times because there is no
guarantee that Normal World has not changed the value in the meantime,
which could break the logic. Consider for instance:

    if (shared && shared->value)
        do_something();

If "shared" resides in shared memory, it might change between
"if (shared)" and "if (shared->value)". If it happens to be set to NULL
for example, the code will crash.
To ensure consistency, a temporary variable has to be used to hold the
value, and the READ_ONCE() macro is required to prevent the compiler
from emitting multiple loads of the memory location.
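
With that in mind, the example above becomes (a sketch; struct and
function names are illustrative):

    struct shared_data *s = READ_ONCE(shared);    /* one load of the pointer */

    if (s && READ_ONCE(s->value))
        do_something();    /* s cannot turn NULL between the two tests */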

Reported-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

cc6bc5f9 12-Mar-2019 Jerome Forissier <jerome.forissier@linaro.org>

core: verify size of allocated shared memory

Makes sure that normal world cannot change the size of allocated shared
memory, which would otherwise result in a smaller buffer being allocated
than the core expects.
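
A sketch of the verification, with an assumed allocator name (the
allocation path shown is simplified):

    mobj = rpc_shm_alloc(requested_size);    /* hypothetical allocator */
    if (!mobj)
        return TEE_ERROR_OUT_OF_MEMORY;

    /* Normal world controls the returned buffer and must not be
     * trusted to honor the requested size */
    if (mobj->size < requested_size) {
        mobj_free(mobj);
        return TEE_ERROR_BAD_PARAMETERS;
    }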

Suggested-by: Bastien Simondi <bsimondi@netflix.com> [1.1]
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

93488549 30-Jan-2019 Jerome Forissier <jerome.forissier@linaro.org>

core: scrub user-tainted memory returned by alloc_temp_sec_mem()

This is a security fix for TA-to-TA calls.

In syscall_open_ta_session() and syscall_invoke_ta_command(), the caller
TA can reference some private memory, in which case the kernel makes a
temporary copy. Unfortunately, memory allocated through
alloc_temp_sec_mem() is not cleared when returned. One could leverage
this to copy arbitrary data into this secure memory pool or to snoop
former data from a previous call done by another TA (e.g., using
TEE_PARAM_TYPE_MEMREF_OUTPUT allows mapping the data without overwriting
it, hence accessing what is already there).

This patch introduces mobj_free_wipe() to clear and free an mobj.
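
A sketch of what such a helper can look like (treat as illustrative;
mobj_get_va() usage follows the API of that era):

    void mobj_free_wipe(struct mobj *mobj)
    {
        if (mobj) {
            void *va = mobj_get_va(mobj, 0);

            /* Scrub the buffer before returning it to the pool */
            if (va)
                memset(va, 0, mobj->size);
            mobj_free(mobj);
        }
    }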

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reported-by: Bastien Simondi <bsimondi@netflix.com> [1.5]
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

7c8b181a 25-Feb-2019 Jerome Forissier <jerome.forissier@linaro.org>

libutils: add memzero_explicit()

Adds a new function: memzero_explicit(s, count) which is equivalent to
memset(s, 0, count) except that it cannot be optimized away by the
compiler.

memset() being a built-in function, the compiler is free to perform
optimizations such as simply discarding a call when it considers that the
call cannot have any observable effect from the program's point of view.
A typical example is clearing local data before returning from a
function. memset() is likely to have no effect in this case while
memzero_explicit() will work as expected.

Calling memset() directly from memzero_explicit() would work as long as
link time optimization (LTO) is not applied. With LTO however, the
compiler could inline the call to memzero_explicit() and find out that
dead store optimization applies. In order to avoid that, we use a method
mentioned in [1] which consists in using a volatile function pointer.
This method is considered "effective in practice" with all the commonly
used compilers.

Link: [1] https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-yang.pdf
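
A sketch of the construction (simplified; the real helper may differ in
details):

    static void *(* volatile memset_func)(void *, int, size_t) = memset;

    void memzero_explicit(void *s, size_t count)
    {
        /* The volatile pointer forces a real indirect call: the
         * compiler cannot prove it still targets memset(), so it
         * cannot discard the call as a dead store */
        memset_func(s, 0, count);
    }
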
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

70b61310 29-Jan-2019 Jerome Forissier <jerome.forissier@linaro.org>

core: scrub user-tainted kernel heap memory before freeing it

Some syscalls can be used to poison kernel heap memory. Data copied from
userland is not wiped when the syscall returns. For instance, when doing
syscall_log() one can copy arbitrary data of variable length into kernel
memory. When free() is called, the block is returned to the memory pool,
tainted with that userland data. This might be used in combination with
some other vulnerability to produce an exploit.

This patch uses free_wipe() to clear the buffers that have been used to
store user-provided data before returning them to the heap.

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reported-by: Bastien Simondi <bsimondi@netflix.com> [1.4]
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>

4e570655 13-Feb-2019 Jerome Forissier <jerome.forissier@linaro.org>

libutils: add free_wipe()

Adds function free_wipe(void *ptr) to clear a buffer before returning
it to the heap. The pattern used to overwrite the data is 0x55.
Users have to #include <stdlib_ext.h> to import the declaration.
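
Usage sketch (buffer contents and error handling are illustrative):

    #include <stdlib_ext.h>

    char *buf = malloc(len);

    if (!buf)
        return TEE_ERROR_OUT_OF_MEMORY;
    /* ... fill buf with sensitive data ... */
    free_wipe(buf);    /* overwrites with 0x55, then frees */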

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>

e1509d6e 29-Jan-2019 Jerome Forissier <jerome.forissier@linaro.org>

core: check for overflow in msg_param_mobj_from_noncontig()

msg_param_mobj_from_noncontig() does not check that buf_ptr + size does
not overflow. As a result, num_pages could be computed too small while
size remains large. Only num_pages pages will be mapped/registered in the
returned mobj. If the caller does not compare mobj->size with required
size, it can end up manipulating memory out of the intended region.

Fix the issue by using overflow checking macros.
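
A sketch of the check, using an overflow-checking macro in the style of
OP-TEE's <util.h> (exact macro and types assumed):

    paddr_t end = 0;

    /* Reject a wrapped range before deriving num_pages from it */
    if (ADD_OVERFLOW(buf_ptr, size, &end))
        return NULL;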

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reported-by: Bastien Simondi <bsimondi@netflix.com> [1.2]
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

34050c20 26-Apr-2019 Etienne Carriere <etienne.carriere@linaro.org>

stm32mp1: default embedded RNG driver

Default enable CFG_STM32_RNG in the platform configuration.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

c73d63e3 03-May-2019 Etienne Carriere <etienne.carriere@linaro.org>

stm32mp1: fix missing RNG1 non-secure mapping

RNG1 may be assigned to the non-secure world while secure world does
use the resource. In such a case, secure world is responsible for
accessing the peripheral in a system state where non-secure world
cannot execute or interfere with the RNG1 state. Secure world will use
RNG1 even if it is non-secure, during OP-TEE initialization and some
power state transitions, when non-secure world is not executing.

This change corrects the missing mapping of RNG1 IO memory with
non-secure access attributes.
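
A sketch of the kind of mapping declaration this adds (registration
macro as used in OP-TEE platform code; constants assumed):

    /* Map RNG1 IO memory with non-secure attributes so secure world
     * can drive the peripheral when it is assigned to normal world */
    register_phys_mem(MEM_AREA_IO_NSEC, RNG1_BASE, SMALL_PAGE_SIZE);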

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

ebdc36f1 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: share sections of loaded elf

Uses the file interface to share read-only parts of loaded binary
content of an ELF. This means that multiple instances of one TA will
share the read-only data/code of each ELF.

Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey960, GP)
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

fd7a82a3 17-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_mmu_map_param(): clean mapped params

If tee_mmu_map_param() fails, clean mapped params by calling
tee_mmu_clean_param() in case some mappings succeeded.
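
A sketch of the error path described (the mapping step shown is a
hypothetical placeholder):

    res = map_param_memrefs(utc, param);    /* placeholder step */
    if (res != TEE_SUCCESS) {
        /* Some memrefs may already be mapped: undo them all */
        tee_mmu_clean_param(utc);
        return res;
    }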

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

1e256592 16-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: fix assertion '(*pgt)->vabase == pg_info->va_base'

Fixes an assertion in set_pg_region() which is triggered by holes in a
vm map spanning at least one complete page table.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

53716c0c 15-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_pager_set_uta_area_attr(): check same context

Prior to this patch it was assumed that only one area could be using a
fobj unless it was shared between multiple contexts. This isn't true: if
an area happens to span two page tables it will be split into two areas
connected to the same fobj. This patch fixes this by checking that all
areas using a fobj have the same context.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

c2ce4186 12-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

Introduce CFG_CORE_DUMP_OOM

Introduces CFG_CORE_DUMP_OOM which, if set to y, will print an error and
dump the stack on memory allocation failures in malloc() and friends.
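
A sketch of the behavior (the stack-dump helper here is a placeholder,
not the actual implementation):

    #ifdef CFG_CORE_DUMP_OOM
        if (!ptr) {
            EMSG("allocation of %zu bytes failed", size);
            dump_stack();    /* placeholder for the real dump */
        }
    #endif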

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

4c474368 15-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_mmu.c: only free unused page tables

When freeing page tables or partially used pages, make sure that the
other parts of the page tables are unused.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

77e393ef 15-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pgt_flush_ctx_range(): check arg pgt_cache

In pgt_flush_ctx_range() check that the argument pgt_cache isn't NULL
before traversing the list.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

eeb866c4 04-Mar-2019 Jerome Forissier <jerome.forissier@linaro.org>

Add TA entry point function: __ta_entry()

Symbol __utee_entry may be undefined in a TA in case libutee is
built as a shared library (CFG_ULIBS_SHARED=y). Add a wrapper function
to avoid this issue.
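
A sketch of such a wrapper (the parameter list is assumed from libutee
conventions and may not match exactly):

    void __noreturn __ta_entry(unsigned long func, unsigned long session_id,
                               struct utee_params *up, unsigned long cmd_id)
    {
        /* Keep the entry symbol defined in the TA binary even when
         * libutee is built as a shared library */
        __utee_entry(func, session_id, up, cmd_id);
    }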

Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>

d5c2ace6 14-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_ta_init_user_ta_session(): flush pgt on error

If tee_ta_init_user_ta_session() fails to initialize the user TA, call
pgt_flush_ctx() on cleanup to make sure that all used page entries are
released since some page fault may have been served already.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

1cb3c063 12-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: free_elf_states(): clear elf->elf_state

Clear elf->elf_state in free_elf_states() to avoid leaving a dangling
pointer.

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

f03a1dcb 10-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: invalidate entire icache

Invalidates entire icache when icache invalidation could be needed. This
invalidates more entries than strictly needed. The advantage is stable
paging. The next step is to locate places where TLB and icache invalidations
can be relaxed.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

79b8357b 09-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: elf_flags_to_mattr(): add privileged bits

Adds the privileged bits TEE_MATTR_PW and TEE_MATTR_PR when setting the
corresponding user bits TEE_MATTR_UW and TEE_MATTR_UR respectively. This
results in tee_pager_add_uta_area() initializing the allocated struct
tee_pager_area with the same protection bits as if they had been set
with vm_set_prot(). As a consequence, vm_set_prot() will only make
changes if the effective protection bits are changed.
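
A sketch of the mapping using the standard ELF segment flags (simplified
from the actual function):

    if (flags & PF_R)
        mattr |= TEE_MATTR_UR | TEE_MATTR_PR;
    if (flags & PF_W)
        mattr |= TEE_MATTR_UW | TEE_MATTR_PW;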

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

2e84663d 09-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_pager_set_uta_area_attr(): save flags

Prior to this patch, tee_pager_set_uta_area_attr() was saving the mattr
bits instead of just the protection bits derived from the flags
parameter. This leads to tee_pager_set_uta_area_attr() updating
permissions even when not needed. With this patch, only the effective
protection bits are saved in the different struct tee_pager_area, which
are updated when changing permissions.

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
