History log of /optee_os/core/arch/arm/ (Results 2226 – 2250 of 3635)
Revision Date Author Comments
34050c20 26-Apr-2019 Etienne Carriere <etienne.carriere@linaro.org>

stm32mp1: default embedded RNG driver

Default enable CFG_STM32_RNG in the platform configuration.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

c73d63e3 03-May-2019 Etienne Carriere <etienne.carriere@linaro.org>

stm32mp1: fix missing RNG1 non-secure mapping

RNG1 may be assigned to the non-secure world while the secure world
still uses the resource. In that case, the secure world is responsible
for accessing the peripheral in a system state where the non-secure
world cannot execute or interfere with the RNG1 state. The secure world
uses RNG1 even when it is assigned to the non-secure world, during
OP-TEE initialization and some power state transitions, when the
non-secure world is not executing.

This change corrects the missing mapping of RNG1 IO memory with
non-secure access attributes.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>

ebdc36f1 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: share sections of loaded elf

Uses the file interface to share read-only parts of loaded binary
content of an ELF. This means that multiple instances of one TA will
share the read-only data/code of each ELF.

Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey960, GP)
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

fd7a82a3 17-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_mmu_map_param(): clean mapped params

If tee_mmu_map_param() fails, clean mapped params by calling
tee_mmu_clean_param() in case some mappings succeeded.

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
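The fix follows a common cleanup-on-partial-failure pattern. A minimal
standalone sketch of that pattern (simplified stand-ins, not the actual
OP-TEE code; map_params(), map_one() and clean_params() here are
hypothetical):

```c
#include <assert.h>
#include <stddef.h>

#define NUM_PARAMS 4

/* Simplified stand-in for the real per-parameter mapping state. */
static int mapped[NUM_PARAMS];

/* Hypothetical helper: pretend params at index >= fail_at cannot map. */
static int map_one(size_t idx, size_t fail_at)
{
	if (idx >= fail_at)
		return -1;
	mapped[idx] = 1;
	return 0;
}

static void clean_params(void)
{
	for (size_t n = 0; n < NUM_PARAMS; n++)
		mapped[n] = 0;
}

/*
 * Mirrors the fix: if mapping any parameter fails, undo the mappings
 * that already succeeded before propagating the error.
 */
int map_params(size_t fail_at)
{
	for (size_t n = 0; n < NUM_PARAMS; n++) {
		if (map_one(n, fail_at)) {
			clean_params();	/* the added cleanup path */
			return -1;
		}
	}
	return 0;
}
```

If any step fails, everything mapped so far is rolled back before the
error is returned, so no stale mappings survive a failed call.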

1e256592 16-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: fix assertion '(*pgt)->vabase == pg_info->va_base'

Fixes assertion in set_pg_region() which is triggered by holes in a vm
map spanning over at least one complete page table.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

53716c0c 15-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_pager_set_uta_area_attr(): check same context

Prior to this patch it was assumed that only one area could be using a
fobj unless it was shared between multiple contexts. This isn't true:
if an area happens to span two page tables it is split into two areas
connected to the same fobj. This patch fixes this by checking that all
areas using a fobj have the same context.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
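The added check can be sketched in isolation (simplified types and a
hypothetical same_context() helper, not the actual OP-TEE code):

```c
#include <assert.h>
#include <stddef.h>

struct ctx { int id; };

struct area {
	struct ctx *ctx;
	struct fobj *fobj;
	struct area *next;
};

struct fobj { struct area *areas; };

/*
 * Sketch of the check: all areas attached to one fobj must belong to
 * the same context before per-context attributes may be changed.
 */
int same_context(struct fobj *fobj)
{
	struct ctx *ctx = NULL;

	for (struct area *a = fobj->areas; a; a = a->next) {
		if (!ctx)
			ctx = a->ctx;
		else if (a->ctx != ctx)
			return 0;
	}
	return 1;
}

/* Self-check: two areas on one fobj, first same context, then not. */
int demo_same_context(void)
{
	static struct ctx c1, c2;
	static struct area a2 = { &c1, NULL, NULL };
	static struct area a1 = { &c1, NULL, &a2 };
	static struct fobj f = { &a1 };
	int ok = same_context(&f);

	a2.ctx = &c2;	/* split areas now belong to different contexts */
	return ok && !same_context(&f);
}
```

An area split across two page tables yields exactly this situation: two
areas on one fobj that legitimately share a context.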

4c474368 15-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_mmu.c: only free unused page tables

When freeing page tables, or partially used pages, make sure that the
other parts of the page tables are unused.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

77e393ef 15-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pgt_flush_ctx_range(): check arg pgt_cache

In pgt_flush_ctx_range() check that the argument pgt_cache isn't NULL
before traversing the list.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
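The guard is the usual check-before-traverse pattern. A standalone
sketch with simplified types (flush_range() is a hypothetical stand-in
for pgt_flush_ctx_range(), counting instead of flushing):

```c
#include <assert.h>
#include <stddef.h>

struct pgt {
	unsigned long vabase;
	struct pgt *next;
};

/* Two-entry list for the self-check below. */
static struct pgt g_pgts[2] = { { 0x1000, &g_pgts[1] }, { 0x3000, NULL } };

/*
 * Counts entries whose base falls in [begin, end). Returning early
 * when the list head is NULL mirrors the added guard.
 */
size_t flush_range(struct pgt *head, unsigned long begin, unsigned long end)
{
	size_t flushed = 0;

	if (!head)		/* the added NULL check */
		return 0;

	for (struct pgt *p = head; p; p = p->next)
		if (p->vabase >= begin && p->vabase < end)
			flushed++;

	return flushed;
}
```

Without the guard, a caller holding an empty cache would hand a NULL
head straight into the traversal.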

d5c2ace6 14-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_ta_init_user_ta_session(): flush pgt on error

If tee_ta_init_user_ta_session() fails to initialize the user TA, call
pgt_flush_ctx() on cleanup to make sure that all used page entries are
released since some page fault may have been served already.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

1cb3c063 12-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: free_elf_states(): clear elf->elf_state

Clear elf->elf_state in free_elf_states() to avoid leaving a dangling
pointer.

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
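The fix is the classic free-then-NULL idiom. A minimal sketch under
assumed, simplified types (free_elf_state() and demo_clears_pointer()
are illustrative, not the real API):

```c
#include <assert.h>
#include <stdlib.h>

struct ta_elf {
	void *elf_state;
};

/* Hypothetical release function paired with however state is allocated. */
static void free_elf_state(void *state)
{
	free(state);
}

/*
 * Mirrors the fix: after releasing the state, also clear the pointer
 * so later code cannot dereference or double-free it.
 */
void free_elf_states(struct ta_elf *elf)
{
	if (elf->elf_state) {
		free_elf_state(elf->elf_state);
		elf->elf_state = NULL;	/* avoid a dangling pointer */
	}
}

/* Self-check: returns 1 if the pointer was cleared after freeing. */
int demo_clears_pointer(void)
{
	struct ta_elf elf = { malloc(16) };

	free_elf_states(&elf);
	return elf.elf_state == NULL;
}
```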

f03a1dcb 10-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: invalidate entire icache

Invalidates the entire icache whenever icache invalidation could be
needed. This invalidates more entries than strictly necessary, but the
advantage is stable paging. The next step is to locate places where TLB
and icache invalidations can be relaxed.

Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

79b8357b 09-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: elf_flags_to_mattr(): add privileged bits

Adds the privileged bits TEE_MATTR_PW and TEE_MATTR_PR when setting the
corresponding user bits TEE_MATTR_UW and TEE_MATTR_UR respectively.
This results in tee_pager_add_uta_area() initializing the allocated
struct tee_pager_area with the same protection bits as if they had been
set with vm_set_prot(). As a consequence, vm_set_prot() only makes
changes if the effective protection bits change.

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
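The described change can be sketched as follows. The MATTR_* values
below are illustrative placeholders, not the real TEE_MATTR_*
definitions; only the PF_* segment flag values come from the ELF
specification:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative attribute bits (placeholders for TEE_MATTR_*). */
#define MATTR_UR  0x01
#define MATTR_UW  0x02
#define MATTR_UX  0x04
#define MATTR_PR  0x08
#define MATTR_PW  0x10

/* ELF segment permission flags, as defined by the ELF specification. */
#define PF_X 0x1
#define PF_W 0x2
#define PF_R 0x4

/*
 * Sketch of the change: each user read/write permission now also sets
 * the corresponding privileged permission.
 */
uint32_t elf_flags_to_mattr(uint32_t flags)
{
	uint32_t mattr = 0;

	if (flags & PF_R)
		mattr |= MATTR_UR | MATTR_PR;	/* was MATTR_UR only */
	if (flags & PF_W)
		mattr |= MATTR_UW | MATTR_PW;	/* was MATTR_UW only */
	if (flags & PF_X)
		mattr |= MATTR_UX;

	return mattr;
}
```

With both user and privileged bits set up front, a later vm_set_prot()
computing the same bits sees no effective change and can skip work.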

2e84663d 09-Apr-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: tee_pager_set_uta_area_attr(): save flags

Prior to this patch, tee_pager_set_uta_area_attr() saved the mattr bits
instead of just the protection bits derived from the flags parameter.
This led to tee_pager_set_uta_area_attr() updating permissions even
when not needed. With this patch only the effective protection bits are
saved in the different struct tee_pager_area instances, which are
updated when permissions change.

Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

fead5511 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: add get_tag() to struct user_ta_store_ops

Adds get_tag() method to struct user_ta_store_ops.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

2616b103 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: add prot arg to tee_pager_add_uta_area()

Adds a prot argument to tee_pager_add_uta_area() to set the initial
protection instead of the previous TEE_MATTR_PRW | TEE_MATTR_URWX.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

c0d24921 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: remove unused tee_pager_transfer_uta_region()

Removes the now unused function tee_pager_transfer_uta_region()

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

c17351f4 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: remove unused pgt_transfer()

Removes the now unused function pgt_transfer()

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

6cc3087f 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: user_ta: support mapping paged parameters

Supports mapping shared paged parameters for user TA.

vm_map() is modified to map objects with the TEE_MATTR_EPHEMERAL
attribute, which indicates that the mapping is a parameter rather than
TA code/data or a permanent mapping.

tee_mmu_clean_param() is added to clean out all parameters added with
tee_mmu_map_param().

In tee_mmu_map_param(), instead of clearing out any old parameters,
there is now a check that no old parameters are still present.

Finally mobj_update_mapping() is removed since the pager now supports
shared mappings.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

74e903b2 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: add get_fobj() to mobj_seccpy_shm

Adds get_fobj() method to the mobj_seccpy_shm object.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

b83c0d5f 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: record fobj in pmem instead of area

Records the fobj backing the physical page represented by a pmem,
instead of the struct tee_pager_area used prior to this patch. Each
fobj can be used by several unrelated areas, in the end allowing real
shared memory between multiple user contexts.

Reference counting for the page tables becomes more active since
entries that are hidden/unhidden also decrease/increase the count. This
is because there's no difference between unhiding a pmem and mapping it
again in another page table.

The memory sharing is not fully taken advantage of in this patch.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

985e1822 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: fix tbl_usage_count()

Fixes tbl_usage_count() to tell whether a page is mapped using the more
reliable attribute field instead of the presence of a physical address.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

a8777fc0 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: add missing tzsram physical pages

init_runtime() adds the physical pages used for paging. Prior to this
patch only pages covered by the memory used by the binary were added.
If TZSRAM is large enough, a range of physical pages remains after the
"pageable_end" address. With this patch this last range is also added.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

f7a26db3 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pgt_{in,de}c_used_entries() check wrapping

In pgt_inc_used_entries() and pgt_dec_used_entries() assert that
pgt->num_used_entries doesn't wrap.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
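The guarded counters can be sketched like this (a simplified struct
pgt and an illustrative demo_counts() helper; this only shows the wrap
checks, not the real implementation):

```c
#include <assert.h>
#include <limits.h>

struct pgt {
	unsigned num_used_entries;
};

/*
 * Sketch of the asserts: check before each update so an increment
 * from UINT_MAX or a decrement from 0 trips immediately instead of
 * silently wrapping.
 */
void pgt_inc_used_entries(struct pgt *pgt)
{
	assert(pgt->num_used_entries < UINT_MAX);
	pgt->num_used_entries++;
}

void pgt_dec_used_entries(struct pgt *pgt)
{
	assert(pgt->num_used_entries > 0);
	pgt->num_used_entries--;
}

/* Self-check: two increments and one decrement leave a count of 1. */
int demo_counts(void)
{
	struct pgt pgt = { 0 };

	pgt_inc_used_entries(&pgt);
	pgt_inc_used_entries(&pgt);
	pgt_dec_used_entries(&pgt);
	return pgt.num_used_entries;
}
```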

a2b67780 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: don't store hidden page pa in table

When a page is hidden the associated pmem->flags is updated with
PMEM_FLAG_HIDDEN. From a pmem it's also possible to derive the physical
address of the page. This makes storing the physical address of a hidden
(and possibly dirty) page redundant, so 0 is always stored instead of
the physical address of hidden pages to simplify the code.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

e595a5f0 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: remove TEE_MATTR_HIDDEN_*

Removes the now unused TEE_MATTR_HIDDEN_BLOCK and
TEE_MATTR_HIDDEN_DIRTY_BLOCK.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
