History log of /optee_os/core/ (Results 4326 – 4350 of 6456)
Revision Date Author Comments
6cc3087f 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: user_ta: support mapping paged parameters

Supports mapping shared paged parameters for user TAs.

vm_map() is modified to map objects with the TEE_MATTR_EPHEMERAL
attribute, which indicates that an object is a parameter rather than TA
code/data or a permanent mapping.

tee_mmu_clean_param() is added to clean out all parameters added with
tee_mmu_map_param().

In tee_mmu_map_param(), instead of clearing out any old parameters,
there is now a check that no old parameters are left behind.

Finally, mobj_update_mapping() is removed since the pager now supports
shared mappings.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
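
As an illustration only, a minimal sketch of the cleanup pass such a
scheme implies; the vm_info/vm_region layout and the
umap_remove_region() helper are assumptions, not necessarily the actual
OP-TEE code:

    /* Sketch: drop every region mapped as an ephemeral parameter */
    void tee_mmu_clean_param(struct user_ta_ctx *utc)
    {
            struct vm_region *r = NULL;
            struct vm_region *r_next = NULL;

            TAILQ_FOREACH_SAFE(r, &utc->vm_info->regions, link, r_next)
                    if (r->attr & TEE_MATTR_EPHEMERAL)
                            umap_remove_region(utc->vm_info, r);
    }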

74e903b2 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: add get_fobj() to mobj_seccpy_shm

Adds a get_fobj() method to the mobj_seccpy_shm object.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
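
A mobj type exposes optional hooks through its ops table; a plausible
sketch of the new hook, where to_mobj_seccpy_shm() and the stored fobj
pointer are assumptions:

    static struct fobj *mobj_seccpy_shm_get_fobj(struct mobj *mobj)
    {
            /* Hand out a new reference to the backing file object */
            return fobj_get(to_mobj_seccpy_shm(mobj)->fobj);
    }

    static const struct mobj_ops mobj_seccpy_shm_ops = {
            /* other hooks omitted */
            .get_fobj = mobj_seccpy_shm_get_fobj,
    };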

b83c0d5f 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: record fobj in pmem instead of area

Records the fobj backing the physical page represented by a pmem,
instead of the struct tee_pager_area that was used prior to this patch.
Each fobj can in the end be used by several unrelated areas, allowing
real shared memory between multiple user contexts.

Reference counting for the page tables sees increased activity since
entries that are hidden/unhidden now also decrease/increase the count.
This is because there is no difference between unhiding a pmem and
simply mapping it again in another page table.

The memory sharing is not fully taken advantage of in this patch.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

985e1822 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: fix tbl_usage_count()

Fixes tbl_usage_count() to tell whether a page is mapped by checking
the attribute, which is more reliable, instead of the presence of a
physical address.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
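
A hedged sketch of the fixed logic, counting entries whose attributes
are non-zero via core_mmu_get_entry() rather than testing the physical
address:

    static unsigned int tbl_usage_count(struct core_mmu_table_info *ti)
    {
            size_t n = 0;
            unsigned int usage = 0;

            for (n = 0; n < ti->num_entries; n++) {
                    uint32_t attr = 0;

                    /* A zero attribute reliably means "not mapped" */
                    core_mmu_get_entry(ti, n, NULL, &attr);
                    if (attr)
                            usage++;
            }
            return usage;
    }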

a8777fc0 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: add missing tzsram physical pages

init_runtime() adds the physical pages used for paging. Prior to this
patch, only pages covered by the memory used by the binary were added.
If TZSRAM is large enough, a range of physical pages remains after the
"pageable_end" address. With this patch this last range is also added.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
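
A sketch of the extra step, assuming tee_pager_add_pages() and
illustrative pageable_end/tzsram_end bounds:

    /* Add the leftover TZSRAM range after the pageable binary as
     * physical pages available for paging (bounds are illustrative) */
    if (tzsram_end > pageable_end)
            tee_pager_add_pages(pageable_end,
                                (tzsram_end - pageable_end) /
                                        SMALL_PAGE_SIZE,
                                true /* initially unmapped */);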

f7a26db3 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pgt_{in,de}c_used_entries() check wrapping

In pgt_inc_used_entries() and pgt_dec_used_entries() assert that
pgt->num_used_entries doesn't wrap.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
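
The check itself is simple; a sketch assuming num_used_entries is an
unsigned counter:

    static void pgt_inc_used_entries(struct pgt *pgt)
    {
            pgt->num_used_entries++;
            assert(pgt->num_used_entries); /* 0 means the counter wrapped */
    }

    static void pgt_dec_used_entries(struct pgt *pgt)
    {
            assert(pgt->num_used_entries); /* decrementing 0 would wrap */
            pgt->num_used_entries--;
    }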

a2b67780 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: don't store hidden page pa in table

When a page is hidden, the associated pmem->flags is updated with
PMEM_FLAG_HIDDEN. From a pmem it is also possible to derive the
physical address of the page. This makes storing the physical address
of a hidden (and possibly dirty) page redundant, so 0 is stored instead
of the physical address of hidden pages, to simplify.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

e595a5f0 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: remove TEE_MATTR_HIDDEN_*

Removes the now unused TEE_MATTR_HIDDEN_BLOCK and
TEE_MATTR_HIDDEN_DIRTY_BLOCK.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

aa06d687 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: fix tee_pager_unhide_page()

Prior to this patch, tee_pager_unhide_page() searched for a physical
page used at a certain page index in an area. What wasn't checked was
that the area matched in addition to the page index. This sometimes led
to unhiding the wrong page, resulting in rapid aborts in succession
until the correct page was handled. With this patch the area is also
checked, fixing the problem.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

53a68c38 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: add pmem flags

Adds a flags field to struct tee_pager_pmem which is used to keep track
of the hidden and dirty state of a physical page instead of relying on
TEE_MATTR_* bits.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
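
A sketch of the shape this takes; the exact field layout and BIT()
values are assumptions:

    #define PMEM_FLAG_DIRTY   BIT(0) /* page must be saved before reuse */
    #define PMEM_FLAG_HIDDEN  BIT(1) /* page unmapped, content retained */

    struct tee_pager_pmem {
            unsigned int flags;          /* PMEM_FLAG_* state bits */
            unsigned int pgidx;          /* page index within the area */
            struct tee_pager_area *area; /* area this page currently backs */
            void *va_alias;              /* alias used to load/save the page */
    };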

04752f69 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: allocate TA memory with fobj_ta_mem_alloc()

Uses fobj_ta_mem_alloc() to allocate TA memory when creating a new
context.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
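
A hedged usage sketch, combining helpers named in the surrounding
commits (the exact call site is an assumption):

    size_t num_pages = ROUNDUP(size, SMALL_PAGE_SIZE) / SMALL_PAGE_SIZE;
    struct fobj *fobj = fobj_ta_mem_alloc(num_pages);
    struct mobj *mobj = mobj_with_fobj_alloc(fobj);

    /* The mobj now holds its own reference, drop the local one */
    fobj_put(fobj);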

fbcaa411 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: add fobj_sec_mem_alloc()

Adds fobj_sec_mem_alloc() which allocates physical memory from
tee_mm_sec_ddr, to be used as TA memory.

Support for this new kind of fobj is added to the with_fobj MOBJ type.

A fobj_ta_mem_alloc() macro is added that uses fobj_rw_paged_alloc()
when paging of user TAs is enabled and fobj_sec_mem_alloc() otherwise.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
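
The selection macro is likely a thin alias; a sketch assuming the
CFG_PAGED_USER_TA configuration switch:

    #ifdef CFG_PAGED_USER_TA
    #define fobj_ta_mem_alloc(num_pages) fobj_rw_paged_alloc(num_pages)
    #else
    #define fobj_ta_mem_alloc(num_pages) fobj_sec_mem_alloc(num_pages)
    #endif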

5d06920a 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: remove useless debug print

Removes a useless debug print from tee_pager_unhide_page().

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

ca0bd72f 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: remove mobj_paged_alloc()

Removes the now useless mobj_paged_alloc().

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

ae02ae98 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: change tee_pager_add_uta_area() arguments

Simplifies tee_pager_add_uta_area() by taking a pointer to a struct
fobj instead of a size. The return value is changed to a TEE_Result to
harmonize better with other functions.

vm_map() is changed to expect a mobj with an assigned fobj when paging
is enabled. This requires that the pager allocations done with
mobj_paged_alloc() are replaced with fobj_rw_paged_alloc().

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
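
A before/after sketch of the signature change; the "before" shape is
reconstructed from the description above:

    /* Before: size in bytes, boolean result */
    bool tee_pager_add_uta_area(struct user_ta_ctx *utc, vaddr_t base,
                                size_t size);

    /* After: backing fobj, TEE_Result for consistency */
    TEE_Result tee_pager_add_uta_area(struct user_ta_ctx *utc, vaddr_t base,
                                      struct fobj *fobj);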

71e2b567 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: change tee_pager_add_core_area() arguments

Simplifies the tee_pager_add_core_area() arguments by taking an enum
tee_pager_area_type and a pointer to a struct fobj instead of the old
size, flags, store and hashes.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

2bb1139b 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: pager: share fobj between areas

pager_add_uta_area() and tee_pager_add_core_area() allocate one fobj,
which is shared between all areas allocated during the function call.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

7513149e 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: remove flags argument from tee_pager_alloc()

Removes the flags argument from tee_pager_alloc() since it's only used
with TEE_MATTR_LOCKED. The exception is the bignum pool, but since it
still releases all locked pages each time the pool becomes unused,
memory is still used efficiently.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

0010a43f 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

mobj: add mobj_with_fobj type

Adds a mobj_with_fobj MOBJ type to refer to a fobj when passed to vm_map() etc.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
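
A sketch of what such a wrapper type typically looks like (field layout
assumed):

    struct mobj_with_fobj {
            struct fobj *fobj; /* backing file object, reference held */
            struct mobj mobj;  /* embedded generic mobj */
    };

    struct mobj *mobj_with_fobj_alloc(struct fobj *fobj);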

2cf99c46 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: user_ta: refactor struct load_seg

Removes the unneeded offs and oend fields from struct load_seg and adds
a mobj field. With the mobj stored in struct load_seg, the mobj_code
field in struct user_ta_elf is no longer needed and is removed.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

a73b5878 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

Replace ta_head.entry with elf entry

Prior to this patch the entry function of the TA was stored in ta_head
which is located in a read-only section of the TA. This results in the
linker emitting a relocation modifying a read-only section. This is a
problem if the read-only section is mapped read-only while relocations
are performed. To avoid this problematic relocation the ta_head.entry
is removed and the ELF entry point is used instead.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

ee546289 07-Feb-2019 Jens Wiklander <jens.wiklander@linaro.org>

core: add a file object interface

Adds a file object interface, which is an abstraction of the storage
part of a struct tee_pager_area. This adds no new features; it just
moves some code from tee_pager.c into fobj.c.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
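
A sketch of the interface shape a pager storage abstraction needs; the
exact ops are assumptions:

    struct fobj_ops {
            void (*free)(struct fobj *fobj);
            TEE_Result (*load_page)(struct fobj *fobj, unsigned int page_idx,
                                    void *va);
            TEE_Result (*save_page)(struct fobj *fobj, unsigned int page_idx,
                                    const void *va);
    };

    struct fobj {
            const struct fobj_ops *ops;
            unsigned int num_pages;
            struct refcount refc; /* shared by all areas using this fobj */
    };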

fd10f62b 28-Jan-2019 Ovidiu Mihalachi <ovidiu_mihalachi@mentor.com>

core: keep alive TA context can be created after TA has panicked

When a keep alive TA instance panics, it continues to exist and blocks
all further use of the TA until the next reboot of the system.
Moreover, when a new session is being created for the panicked TA
(while another session to that TA is still open), the system hangs.

This change releases the panicked TA context and clears all references
to the released context when the TA panics, regardless of the TA
properties. This allows keep alive TA instances to be created again
after they have panicked, without needing to reboot OP-TEE core.

Sessions on panicked TAs have to be closed by the client by calling the
proper API when the client session is scheduled back.

Signed-off-by: Ovidiu Mihalachi <ovidiu_mihalachi@mentor.com>
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>

6e59bb1e 07-May-2019 Etienne Carriere <etienne.carriere@linaro.org>

core: handle user TA context released from session

Change is_user_ta_ctx() to support a NULL context reference. For such
references the function now returns false. This allows callers to
cleanly abort their sequence when the context reference has already
been released from the session instance. Note that a caller shall not
assume a context refers to a PTA when is_user_ta_ctx() returns false;
it shall call is_pseudo_ta_ctx().

A side effect is that a few tests on the reference and the function
return value can be simplified.

This change also ensures the TA dump_state() function does not crash
when called with a null context reference.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
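
The NULL-tolerant check is a one-liner; a sketch assuming the context
carries an ops pointer identifying user TA contexts:

    static inline bool is_user_ta_ctx(struct tee_ta_ctx *ctx)
    {
            /* A released (NULL) context is never a user TA context */
            return ctx && ctx->ops == &user_ta_ops;
    }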

d62792a0 26-Apr-2019 Etienne Carriere <etienne.carriere@linaro.org>

stm32mp1: clean shared resource to use vaddr_t

Replace type uintptr_t with type vaddr_t when applicable for consistency
with other resources.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
