| d7d52b01 | 21-Feb-2017 |
Andrew F. Davis <afd@ti.com> |
plat-ti: Cleanup platform configuration
Reorganize platform configuration to assist in addition of new platforms. No functional changes.
Signed-off-by: Andrew F. Davis <afd@ti.com> Reviewed-by: Joakim Bech <joakim.bech@linaro.org>
|
| 15485f40 | 19-Apr-2017 |
Jerome Forissier <jerome.forissier@linaro.org> |
core: mm: print memory type name instead of numerical value
Improve the legibility of the memory manager debug traces by converting the memory types to strings before printing them in dump_mmap_table(), add_phys_mem() and add_va_space().
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
|
| e15384ca | 04-Apr-2017 |
Andrew F. Davis <afd@ti.com> |
core_mmu_v7: Allow cache memory attributes to match non-SMP Linux
On non-SMP ARM Linux the default cache policy is inner/outer write-back, no write-allocate, not shareable. When compiled with SMP support the policy is updated to inner/outer write-back with write-allocate, shareable.
OP-TEE assumes that SMP is enabled; allow overriding this for the non-SMP cases.
Signed-off-by: Andrew F. Davis <afd@ti.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 59cae313 | 04-Apr-2017 |
Andrew F. Davis <afd@ti.com> |
core_mmu_v7: Rename index to normal cached memory
The index into the cache attribute registers for device memory is called ATTR_DEVICE_INDEX, but the one for normal cached memory is referred to as ATTR_IWBWA_OWBWA_INDEX, which implies a specific caching type. This is not always the type of cache we will use. Rename it to the more generic ATTR_NORMAL_CACHED_INDEX.
Signed-off-by: Andrew F. Davis <afd@ti.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 93c9df51 | 29-Mar-2017 |
Andrew F. Davis <afd@ti.com> |
plat-ti: Move TZDRAM area to better align with other DRAM uses
The area currently reserved for OP-TEE overlaps an area used by another existing device use case; move OP-TEE to a non-interfering address.
Signed-off-by: Andrew F. Davis <afd@ti.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 62ede146 | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: assert no null physical address is used in core static mapping
Current implementation of core mapping assumes value 0 denotes an invalid physical address. Hence this change asserts (in debug mode) that no null physical address is used in the core static mapping.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 737a9daf | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: skip registered physical memory with a null size
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| 0cf0c5a8 | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: debug trace the number of xlat tables used
These debug traces can be quite handy to monitor the number of translation tables effectively used at runtime.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 4b2b8e52 | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: minor cleanup in core_mmu_lpae.c
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| aa2ab38b | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: more flexible static memory mapping
This change removes the dependency of the static mapping description on memory type ID ordering, and the requirement for such memory types to match unique memories. This will be required by a later change that defines several memory types for the TEE RAM (read-only/executable, read-only, read/write).
The setup of the static memory mapping array is somewhat changed:
- 1st step: fill the array from registered "phys_mem" (unchanged)
- 2nd step: assign a "region size" to array cells (unchanged)
- 3rd step: sort the array by region size. This groups small-page mappings together to prevent wasting xlat tables.
- 4th step: for non-LPAE mapping only: separate secure from non-secure mappings among the small-page mapped areas.
- 5th step: assign virtual addresses to the flat-mapped areas. This step records the flat-mapped areas' base addresses, which are used in the next step to assign the non-flat mapped virtual memories.
- 6th step: assign virtual addresses to the non-flat mapped areas. This step considers the cases where non-flat mapped areas are mapped before or after the flat-mapped areas, as done before this change.
- A last, non-mandatory step sorts the static mapping description array by virtual address range.
With this change, the static mapping no longer expects memory type ID ordering to reflect the ordering of the areas' virtual address ranges. Instead, the memory type IDs only reflect the expected mapping attributes, which drive how the virtual mapping is built.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 2d216865 | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: default define stack alignment and core vmem size
Default define CFG_TEE_RAM_VA_SIZE if not defined by the platform.
As a side effect of bringing a default value for CFG_TEE_RAM_VA_SIZE into core_mmu.h, and thus including 'platform_config.h', the macro STACK_ALIGNMENT defined in user_ta.c must not conflict with the macro defined by the platform. Hence this change also default defines STACK_ALIGNMENT if not defined by the platform.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b90c1e32 | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: debug trace for non-LPAE mapping
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| 04ec0d2d | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: fix non-LPAE mapping against unmapped areas
Core defines some virtual memory that should not be mapped by default. Yet core_mmu_v7.c loads a non-null descriptor into the MMU tables for such memory: the attributes are null (which makes the page effectively not mapped) but a meaningless non-null physical page address is loaded.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| ee203c1c | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: move place where TEE_RAM mapping region size is forced
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| 9efffc34 | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: move mmap trace into a specific routine
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| 6fd2f72a | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: fix memory mapping overlapping sequence
Before this patch, overlapping memory ranges were detected only if the lower physical range was registered before the higher overlapping physical range.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| bd5f930a | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: fix plat-stm iomem mapping
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> |
| 303753fa | 10-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: remove useless rounding for registered io memory
This change removes useless rounding on registered io memory for the platforms maintained by Linaro.
Also remove registering of GIC iomem on plat-mediatek as the platform does not use the GIC resources.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 59fffc71 | 12-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: deprecate DEVICEx_TYPE/_PA_BASE/_SIZE
Macros DEVICEx_TYPE, DEVICEx_PA_BASE and DEVICEx_SIZE used to help platforms register their address range mapping requirements. These are now deprecated since platforms should use the more flexible register_phys_mem() macro.
This change removes all occurrences of DEVICEx_TYPE/_PA_BASE/_SIZE and uses register_phys_mem() instead.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 73595e4c | 12-Apr-2017 |
Etienne Carriere <etienne.carriere@linaro.org> |
core: fix register_XXX_mem() against physical address
Use __COUNTER__ instead of the registered physical address to generate the label of the structure defined by the macros __register_phys_mem() and __register_sdp_mem().
Before this change, because the "addr" argument was pasted into the label, one could not use these macros with an address that is the result of a local computation.
I.e. this implementation was not possible:

    __register_phys_mem(<any-id>, ROUNDUP(<addr>, <value>), <size>);

and one needed a temporary macro for the address computation:

    #define MY_BASE_ADDRESS ROUNDUP(<addr>, <value>)
    __register_phys_mem(<any-id>, MY_BASE_ADDRESS, <size>);
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 69306abd | 15-Feb-2017 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
tee_mm: add locking while working with pool->entry
This fixes a race condition in the linked list handling.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| 40ea51de | 15-Feb-2017 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
spinlock: add cpu_spin_lock_xsave()/xrestore() functions
These are like the spin_lock_irqsave() functions in the Linux kernel. They should replace the xxx_lock() functions in various modules.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| 771ba399 | 07-Apr-2017 |
Etienne Carriere <etienne.carriere@st.com> |
core: increase size of the exception stack in AArch32 mode
'stack_abt' is too small when the pager is enabled and needs to be increased. 'stack_abt' is used for aborts (prefetch/data/undef) and for native/foreign interrupt exceptions.
Tests on ARMv7 targets show that between 1000 and 1500 bytes of these stacks are consumed at runtime. Tests considered configurations with trace/debug/LPAE/VFP enabled and disabled. Tests were based on running OP-TEE with the xtest non-regression suite.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com> Tested-by: Etienne Carriere <etienne.carriere@st.com> (qemu_virt/b2260) Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 1149e39f | 04-Apr-2017 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
core_mmu: round VAs to region size
Entries in the memory map are sorted by address, not by region size. So, if entries have region sizes 2M, 4K, 2M, ... then all maps after the second one will be misaligned. Thus, we need to explicitly round virtual addresses.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_virt)
|
| 47ff9bf0 | 04-Apr-2017 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fix compile error with CFG_WITH_USER_TA=n
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Fixes: 040bc0f04138 ("core: add test case for hash-tree") Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|