| 3ca4a1ca | 25-Feb-2019 |
Jerome Forissier <jerome.forissier@linaro.org> |
core: FS: wipe sensitive data after use
The secure storage code makes use of various cryptographic data (keys and IVs). Make sure the buffers are wiped after use to minimize the risk that sensitive data is leaked to an attacker who has gained some access to the secure memory.
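A minimal sketch of the pattern (illustrative, not the actual patch), assuming a memzero_explicit()-style wipe helper similar to the one in OP-TEE's libutils:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Wipe helper the compiler cannot optimize away (simplified) */
    static void wipe(void *p, size_t n)
    {
        memset(p, 0, n);
        __asm__ volatile ("" : : "r" (p) : "memory"); /* barrier */
    }

    static void encrypt_block(void)
    {
        uint8_t key[32] = { 0 };
        uint8_t iv[16] = { 0 };

        /* ... derive key/iv and run the cipher ... */

        /* Wipe after use so the secrets do not linger in memory */
        wipe(key, sizeof(key));
        wipe(iv, sizeof(iv));
    }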
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reported-by: Bastien Simondi <bsimondi@netflix.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
|
| 13a26601 | 12-Mar-2019 |
Jerome Forissier <jerome.forissier@linaro.org> |
core: thread: use READ_ONCE() when accessing data in shared memory
In some places we read a value from shared memory, then based on the value we take some actions. When multiple tests are done, we should make sure that the value is not read multiple times, because there is no guarantee that the Normal World has not changed the value in the meantime, which could break the logic. Consider for instance:
if (shared && shared->value) do_something();
If "shared" resides in shared memory, it might change between "if (shared)" and "if (shared->value)". If it happens to be set to NULL for example, the code will crash. To ensure consistency, a temporary variable has to be used to hold the value, and the READ_ONCE() macro is required to prevent the compiler from emitting multiple loads of the memory location.
Reported-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
|
| cc6bc5f9 | 12-Mar-2019 |
Jerome Forissier <jerome.forissier@linaro.org> |
core: verify size of allocated shared memory
Makes sure that the normal world cannot change the size of allocated shared memory, which would otherwise result in a smaller buffer being allocated than requested.
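A sketch of the kind of check this implies, assuming shared memory is obtained from the normal world through an RPC helper (thread_rpc_alloc_payload() and thread_rpc_free_payload() are assumptions modeled on OP-TEE's RPC allocation path):

    /* Allocate 'size' bytes of shared memory via normal world and
     * refuse a buffer smaller than requested, since normal world
     * controls the allocation */
    static struct mobj *alloc_shm_checked(size_t size)
    {
        struct mobj *mobj = thread_rpc_alloc_payload(size);

        if (!mobj)
            return NULL;
        if (mobj->size < size) { /* NW returned a smaller buffer */
            thread_rpc_free_payload(mobj);
            return NULL;
        }
        return mobj;
    }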
Suggested-by: Bastien Simondi <bsimondi@netflix.com> [1.1] Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
|
| 93488549 | 30-Jan-2019 |
Jerome Forissier <jerome.forissier@linaro.org> |
core: scrub user-tainted memory returned by alloc_temp_sec_mem()
This is a security fix for TA-to-TA calls.
In syscall_open_ta_session() and syscall_invoke_ta_command(), the caller TA can reference some of its private memory, in which case the kernel makes a temporary copy. Unfortunately, memory allocated through alloc_temp_sec_mem() is not cleared when returned. One could leverage this to copy arbitrary data into this secure memory pool, or to snoop stale data from a previous call made by another TA (e.g., using TEE_PARAM_TYPE_MEMREF_OUTPUT makes it possible to map the data without overwriting it, hence reading whatever is already there).
This patch introduces mobj_free_wipe() to clear and free an mobj.
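A sketch of what mobj_free_wipe() plausibly does (the real implementation may differ; mobj_get_va() is an assumed accessor returning the buffer's virtual address):

    /* Clear the backing memory, then free the mobj, so the pool never
     * hands tainted contents to the next user */
    void mobj_free_wipe(struct mobj *mobj)
    {
        void *va = NULL;

        if (!mobj)
            return;
        va = mobj_get_va(mobj, 0); /* assumed accessor */
        if (va)
            memzero_explicit(va, mobj->size);
        mobj_free(mobj);
    }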
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reported-by: Bastien Simondi <bsimondi@netflix.com> [1.5] Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
|
| 70b61310 | 29-Jan-2019 |
Jerome Forissier <jerome.forissier@linaro.org> |
core: scrub user-tainted kernel heap memory before freeing it
Some syscalls can be used to poison kernel heap memory: data copied from userland is not wiped when the syscall returns. For instance, with syscall_log() one can copy arbitrary data of variable length into kernel memory. When free() is called, the block is returned to the memory pool tainted with that userland data. This might be used in combination with some other vulnerability to produce an exploit.
This patch uses free_wipe() to clear the buffers that have been used to store user-provided data before returning them to the heap.
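An illustrative shape of the fix in a syscall handler (copy_from_user() and the surrounding names are simplified assumptions; free_wipe() is the helper named in the commit):

    static TEE_Result log_user_string(const void *ubuf, size_t len)
    {
        TEE_Result res = TEE_SUCCESS;
        char *kbuf = malloc(len); /* kernel heap copy */

        if (!kbuf)
            return TEE_ERROR_OUT_OF_MEMORY;

        res = copy_from_user(kbuf, ubuf, len); /* assumed helper */

        /* ... log the buffer if the copy succeeded ... */

        /* Zero the user-tainted block before it returns to the heap */
        free_wipe(kbuf);
        return res;
    }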
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reported-by: Bastien Simondi <bsimondi@netflix.com> [1.4] Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| e1509d6e | 29-Jan-2019 |
Jerome Forissier <jerome.forissier@linaro.org> |
core: check for overflow in msg_param_mobj_from_noncontig()
msg_param_mobj_from_noncontig() does not check that buf_ptr + size does not overflow. As a result, num_pages could be computed as a small value while size remains large. Only num_pages pages will be mapped/registered in the returned mobj. If the caller does not compare mobj->size with the required size, it can end up manipulating memory outside the intended region.
Fix the issue by using overflow checking macros.
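A sketch of the check, in the spirit of OP-TEE's ADD_OVERFLOW() macro from util.h (shown here wrapping the compiler builtin):

    #include <stdbool.h>
    #include <stdint.h>

    #define ADD_OVERFLOW(a, b, res) __builtin_add_overflow((a), (b), (res))

    /* Reject a (buf_ptr, size) pair whose end address wraps, before
     * deriving num_pages from it */
    static bool range_ok(uint64_t buf_ptr, uint64_t size)
    {
        uint64_t end = 0;

        if (ADD_OVERFLOW(buf_ptr, size, &end))
            return false; /* buf_ptr + size overflowed */
        return true;
    }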
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reported-by: Bastien Simondi <bsimondi@netflix.com> [1.2] Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
|
| 34050c20 | 26-Apr-2019 |
Etienne Carriere <etienne.carriere@linaro.org> |
stm32mp1: default embedded RNG driver
Enables CFG_STM32_RNG by default in the platform configuration.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
|
| c73d63e3 | 03-May-2019 |
Etienne Carriere <etienne.carriere@linaro.org> |
stm32mp1: fix missing RNG1 non-secure mapping
RNG1 may be assigned to the non-secure world while the secure world still uses the resource. In that case, the secure world is responsible for accessing the peripheral only in system states where the non-secure world cannot execute or interfere with the RNG1 state. The secure world uses RNG1, even when it is assigned to the non-secure world, during OP-TEE initialization and some power state transitions, while the non-secure world is not running.
This change corrects the missing mapping of RNG1 IO memory with non-secure access attributes.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
|
| ebdc36f1 | 07-Feb-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: share sections of loaded elf
Uses the file interface to share the read-only parts of the loaded binary content of an ELF. This means that multiple instances of one TA will share the read-only data/code of each ELF.
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey960, GP) Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| fd7a82a3 | 17-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: tee_mmu_map_param(): clean mapped params
If tee_mmu_map_param() fails, clean mapped params by calling tee_mmu_clean_param() in case some mappings succeeded.
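A sketch of the error path (simplified signature; map_one_param() is a placeholder for the per-parameter mapping step):

    TEE_Result tee_mmu_map_param(struct user_ta_ctx *utc)
    {
        TEE_Result res = TEE_SUCCESS;
        size_t n = 0;

        for (n = 0; n < TEE_NUM_PARAMS; n++) {
            res = map_one_param(utc, n); /* placeholder */
            if (res != TEE_SUCCESS) {
                /* Undo the mappings that already succeeded */
                tee_mmu_clean_param(utc);
                return res;
            }
        }
        return TEE_SUCCESS;
    }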
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 1e256592 | 16-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix assertion '(*pgt)->vabase == pg_info->va_base'
Fixes an assertion in set_pg_region() that is triggered by holes in a VM map spanning at least one complete page table.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 53716c0c | 15-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: tee_pager_set_uta_area_attr(): check same context
Prior to this patch it was assumed that only one area could be using a fobj unless it was shared between multiple contexts. This isn't true: if an area happens to span two page tables, it is split into two areas connected to the same fobj. This patch fixes this by checking that all areas using a fobj belong to the same context.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 4c474368 | 15-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: tee_mmu.c: only free unused page tables
When freeing page tables or partially used pages, make sure that the other parts of the page tables are unused.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 77e393ef | 15-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: pgt_flush_ctx_range(): check arg pgt_cache
In pgt_flush_ctx_range() check that the argument pgt_cache isn't NULL before traversing the list.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| d5c2ace6 | 14-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: tee_ta_init_user_ta_session(): flush pgt on error
If tee_ta_init_user_ta_session() fails to initialize the user TA, call pgt_flush_ctx() on cleanup to make sure that all used page entries are released since some page fault may have been served already.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 1cb3c063 | 12-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: free_elf_states(): clear elf->elf_state
Clear elf->elf_state in free_elf_states() to avoid leaving a dangling pointer.
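The pattern in two lines (field name from the commit; the enclosing cleanup code is assumed):

    /* Free the state and null the pointer so later cleanup passes
     * cannot double-free or dereference it */
    free(elf->elf_state);
    elf->elf_state = NULL;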
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| f03a1dcb | 10-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: invalidate entire icache
Invalidates the entire icache whenever icache invalidation could be needed. This invalidates more entries than strictly necessary; the advantage is stable paging. The next step is to identify places where TLB and icache invalidations can be relaxed.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 79b8357b | 09-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: elf_flags_to_mattr(): add privileged bits
Adds the privileged bits TEE_MATTR_PW and TEE_MATTR_PR when setting the corresponding user bits TEE_MATTR_UW and TEE_MATTR_UR, respectively. This results in tee_pager_add_uta_area() initializing the allocated struct tee_pager_area with the same protection bits as if they had been set with vm_set_prot(). As a consequence, vm_set_prot() only makes changes if the effective protection bits actually change.
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 2e84663d | 09-Apr-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: tee_pager_set_uta_area_attr(): save flags
Prior to this patch, tee_pager_set_uta_area_attr() saved the mattr bits instead of just the protection bits derived from the flags parameter. This led to tee_pager_set_uta_area_attr() updating permissions even when not needed. With this patch, only the effective protection bits are saved in the different struct tee_pager_area instances, and they are updated only when permissions actually change.
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| fead5511 | 07-Feb-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add get_tag() to struct user_ta_store_ops
Adds get_tag() method to struct user_ta_store_ops.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 0b345c6c | 07-Feb-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add tee_tadb_get_tag()
Adds the function tee_tadb_get_tag() which returns a tag that uniquely identifies a TA.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 9f31ef5a | 07-Feb-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add a file abstraction
Adds a struct file which is used to share fobjs with other memory mappings or contexts.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 2616b103 | 07-Feb-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add prot arg to tee_pager_add_uta_area()
Adds a prot argument to tee_pager_add_uta_area() to set the initial protection instead of the previous TEE_MATTR_PRW | TEE_MATTR_URWX.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| c0d24921 | 07-Feb-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: remove unused tee_pager_transfer_uta_region()
Removes the now unused function tee_pager_transfer_uta_region()
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| c17351f4 | 07-Feb-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: remove unused pgt_transfer()
Removes the now unused function pgt_transfer()
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> |