# 2cd578ba | 23-May-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: fix asan for CFG_WITH_PAGER=n
Some fixes are needed to make CFG_CORE_SANITIZE_KADDRESS=y work both with and without CFG_DYN_CONFIG=y.
Sanitizing stack addresses isn't supported with CFG_DYN_CONFIG=y since it would require extensive changes in the ASAN framework.
The VCORE_FREE area is moved right before the .asan_shadow area.
init_asan() calls boot_mem_init_asan() to tag access to already allocated boot memory.
entry_a32.S is updated to skip allowing access to stacks in the .asan_shadow area for CFG_DYN_CONFIG=y since stacks are stored elsewhere in that configuration.
entry_a64.S is updated to initialize the .asan_shadow area in the same way as in entry_a32.S.
The .asan_shadow area is mapped explicitly in collect_mem_ranges() instead of relying on the now non-existent coverage of MEM_AREA_TEE_RAM_RW.
The combination of CFG_DYN_CONFIG=y and CFG_WITH_PAGER=y is not yet known to work.
Fixes: 1c1f8b65b5c6 ("core: mm: unify secure core and TA memory")
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
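For readers unfamiliar with core address sanitizing: the .asan_shadow area maps every 8-byte granule of core memory to one shadow byte, which is why its placement relative to VCORE_FREE matters and why already allocated boot memory must be tagged as accessible. The following is a minimal, generic sketch of that scheme, not OP-TEE's actual code; the symbol names (asan_shadow_base, asan_covered_start) and the simple 0 encoding are assumptions for illustration.

#include <stddef.h>
#include <stdint.h>

#define ASAN_SHADOW_SCALE 3 /* one shadow byte covers 8 bytes of memory */

static uint8_t *asan_shadow_base;    /* start of the .asan_shadow area */
static uintptr_t asan_covered_start; /* lowest address covered by the shadow */

/* Translate a core address to the shadow byte that describes it. */
static uint8_t *addr_to_shadow(const void *addr)
{
    uintptr_t offs = (uintptr_t)addr - asan_covered_start;

    return asan_shadow_base + (offs >> ASAN_SHADOW_SCALE);
}

/* Mark a range accessible, e.g. memory already handed out by boot_mem. */
static void asan_tag_access(const void *begin, const void *end)
{
    uint8_t *s = addr_to_shadow(begin);
    uint8_t *e = addr_to_shadow(end);

    while (s < e)
        *s++ = 0; /* 0 means the whole 8-byte granule is accessible */
}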

# bb538722 | 02-Jun-2025 | Alvin Chang <alvinga@andestech.com>
core: replace CFG_DYN_STACK_CONFIG with CFG_DYN_CONFIG
This commit replaces CFG_DYN_STACK_CONFIG with CFG_DYN_CONFIG since RISC-V now also supports CFG_DYN_STACK_CONFIG.
Signed-off-by: Alvin Chang <alvinga@andestech.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>

# 91d4649d | 20-Mar-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: add thread_count to thread_init_threads()
Add a thread_count parameter to thread_init_threads(). This must currently always be equal to CFG_NUM_THREADS, but may become a dynamic configuration parameter with CFG_DYN_CONFIG=y in later patches.
The array threads[] is changed into a pointer to allow dynamic allocation in later patches. The assembly code is updated accordingly to handle a pointer instead of an array.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Alvin Chang <alvinga@andestech.com>
Tested-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Tested-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
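Purely as an illustration of the array-to-pointer change described above (not the actual patch), with CFG_NUM_THREADS and struct thread_ctx reduced to placeholders and a plain calloc() standing in for the real allocator:

#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define CFG_NUM_THREADS 8         /* example value */

struct thread_ctx { int state; }; /* stand-in for the real context */

/* Before: struct thread_ctx threads[CFG_NUM_THREADS]; now a pointer. */
struct thread_ctx *threads;

void thread_init_threads(size_t thread_count)
{
    /* For now the count must equal the static configuration. */
    assert(thread_count == CFG_NUM_THREADS);

    threads = calloc(thread_count, sizeof(*threads));
    assert(threads);
}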

# 59724f22 | 20-Mar-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: dynamic allocation of thread_core_local and its stacks
With CFG_DYN_CONFIG enabled, use dynamic allocation of thread_core_local and the two stacks, tmp_stack and abt_stack, recorded in it.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
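A rough sketch of what dynamic allocation of the per-core structure and its two stacks can look like; the structure layout, stack sizes, and allocator calls below are simplified assumptions, not the actual OP-TEE code:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef uintptr_t vaddr_t;

#define STACK_TMP_SIZE 2048 /* example sizes only */
#define STACK_ABT_SIZE 3072

/* Simplified stand-in for the real struct thread_core_local. */
struct thread_core_local {
    vaddr_t tmp_stack_va_end;
    vaddr_t abt_stack_va_end;
};

struct thread_core_local *thread_core_local;

void alloc_core_local(size_t core_count)
{
    thread_core_local = calloc(core_count, sizeof(*thread_core_local));
    assert(thread_core_local);

    for (size_t n = 0; n < core_count; n++) {
        void *tmp = aligned_alloc(16, STACK_TMP_SIZE);
        void *abt = aligned_alloc(16, STACK_ABT_SIZE);

        assert(tmp && abt);
        /* Stacks grow downwards, so record the end (highest) address. */
        thread_core_local[n].tmp_stack_va_end = (vaddr_t)tmp + STACK_TMP_SIZE;
        thread_core_local[n].abt_stack_va_end = (vaddr_t)abt + STACK_ABT_SIZE;
    }
}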

# abb35419 | 14-Apr-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: update recorded SP first after MMU is enabled
With CFG_CORE_ASLR=y, stored addresses must be updated after the MMU has been enabled to match the map offset. In particular, the recorded stack pointers in thread_core_local[] must be updated to match the new offset before any calls into C code are made, or check_stack_limits() with CFG_CORE_DEBUG_CHECK_STACKS=y might catch an inconsistent stack pointer.
Currently, boot_mem_relocate() is called before the recorded stack pointers have been updated, which causes a crash with CFG_CORE_ASLR=y and CFG_CORE_DEBUG_CHECK_STACKS=y. So fix this by delaying the call to boot_mem_relocate() until after the stack pointers in thread_core_local[] have been updated.
Reported-by: Jerome Forissier <jerome.forissier@linaro.org>
Closes: https://github.com/OP-TEE/optee_os/issues/7363
Fixes: ea991d7459f6 ("core: arm: remove THREAD_CORE_LOCAL_STACKCHECK_RECURSION")
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (vexpress-qemu_armv8a)
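The core of the problem can be pictured with a small sketch (field names are assumptions, not the exact layout): every virtual address recorded before the MMU was enabled is off by the ASLR map offset, so the stored stack pointers have to be adjusted before any C code, and in particular check_stack_limits(), looks at them.

#include <stddef.h>
#include <stdint.h>

typedef uintptr_t vaddr_t;

struct thread_core_local {
    vaddr_t tmp_stack_va_end; /* assumed field names */
    vaddr_t abt_stack_va_end;
};

/* Add the ASLR map offset to every recorded stack pointer. */
static void relocate_recorded_sps(struct thread_core_local *tcl,
                                  size_t core_count, ptrdiff_t map_offset)
{
    for (size_t n = 0; n < core_count; n++) {
        tcl[n].tmp_stack_va_end += map_offset;
        tcl[n].abt_stack_va_end += map_offset;
    }
    /* Only after this is it safe to call boot_mem_relocate() and other
     * C code that may check the stack. */
}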

# ea991d74 | 21-Mar-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: remove THREAD_CORE_LOCAL_STACKCHECK_RECURSION
THREAD_CORE_LOCAL_STACKCHECK_RECURSION was introduced in commit b5ec8152f3e5 ("core: arm: refactor boot"). However, clearing the stackcheck_recursion flag from assembly during boot isn't needed since the stack pointer is set up in sync with the recorded information in thread_core_local. So remove the unnecessary clearing along with THREAD_CORE_LOCAL_STACKCHECK_RECURSION.
Reported-by: Alvin Chang <alvinga@andestech.com>
Closes: https://github.com/OP-TEE/optee_os/commit/b5ec8152f3e5ad8cc111952f0483f5cf903aac7c#r154088026
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

# 758c3687 | 13-Mar-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: fix CFG_BOOT_INIT_THREAD_CORE_LOCAL0
CFG_BOOT_INIT_THREAD_CORE_LOCAL0 is a misleading name since the flag concerns the core id of the boot CPU. So rename the configuration flag to CFG_BOOT_INIT_CURRENT_THREAD_CORE_LOCAL and update the code as needed. Only thread_init_thread_core_local() has a change of behaviour: the boot CPU can now have any core id.
Fixes: b5ec8152f3e5 ("core: arm: refactor boot")
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

# b0da0d59 | 06-Mar-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: boot: add boot_init_primary_runtime()
Split the early parts of boot_init_primary_final() into boot_init_primary_runtime(). boot_init_primary_runtime() initializes the runtime; part of that is generating the PAUTH keys. The PAUTH keys are loaded in assembly before boot_init_primary_final() is called.
This fixes an error where SPs are initialized by entering and exiting S-EL0 from boot_init_primary_final() while the PAUTH registers haven't been initialized with the right values:
E/TC:0 0 Core undef-abort at address 0xe106be4
E/TC:0 0 esr 0x72000000 ttbr0 0x200000e27d000 ttbr1 0x00000000 cidr 0x0
E/TC:0 0 cpu #0 cpsr 0x60000144
E/TC:0 0 x0 0000000000000000 x1 0000000000000000
E/TC:0 0 x2 0000000000000000 x3 0000000000000000
E/TC:0 0 x4 000000000e27a060 x5 000000000e27a05c
E/TC:0 0 x6 000000000000009f x7 0000000000000083
E/TC:0 0 x8 0000000000000000 x9 0000000000004367
E/TC:0 0 x10 000000000000009f x11 0000000000000000
E/TC:0 0 x12 0000000000000000 x13 0000000040006f80
E/TC:0 0 x14 0000000000000000 x15 0000000000000000
E/TC:0 0 x16 000000000e107460 x17 0000000000000000
E/TC:0 0 x18 0000000000000000 x19 000000000e002000
E/TC:0 0 x20 000000000e300000 x21 0000000040000000
E/TC:0 0 x22 0000000000000000 x23 000000000e272830
E/TC:0 0 x24 000000000e22c250 x25 0000000000000000
E/TC:0 0 x26 0000000000000000 x27 0000000000000000
E/TC:0 0 x28 0000000000000000 x29 000000000e27a020
E/TC:0 0 x30 0a2ed3b10e1314e8 elr 000000000e106be4
E/TC:0 0 sp_el0 000000000e27a010
E/TC:0 0 TEE load address @ 0xe100000
E/TC:0 0 Core undef-abort at address 0xe106be4 .debug_info+27620
E/TC:0 0 Call stack:
E/TC:0 0 0x0e106be4 thread_enter_user_mode at core/arch/arm/kernel/thread.c:1049
E/TC:0 0 0x0e110628 sp_open_session at core/arch/arm/kernel/secure_partition.c:635
E/TC:0 0 0x0e112508 sp_init_uuid at core/arch/arm/kernel/secure_partition.c:1583
E/TC:0 0 0x0e1135f8 sp_init_all at core/arch/arm/kernel/secure_partition.c:2018
E/TC:0 0 0x0e137950 do_init_calls at core/kernel/initcall.c:20
E/TC:0 0 0x0e137b0c call_finalcalls at core/kernel/initcall.c:73
Fixes: b5ec8152f3e5 ("core: arm: refactor boot")
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>

# b5ec8152 | 22-Jan-2025 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: refactor boot
Introduce CFG_BOOT_INIT_THREAD_CORE_LOCAL0 to indicate that thread_core_local[0] is initialized before the boot_init_* functions are called.
thread_init_core_local_stacks() and thread_init_thread_core_local() are replaced by a new version of thread_init_thread_core_local() for CFG_BOOT_INIT_THREAD_CORE_LOCAL0=y.
Move initialization of thread_core_local[] from very early to boot_init_primary_late() where various DTBs containing run-time configuration are available. This will be needed in later patches when the number of configured cores can be read from DT or some other run-time configuration.
Move the "OP-TEE version" print and following code from boot_init_primary_late() to boot_init_primary_final()
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

# d461c892 | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: enable CFG_BOOT_MEM unconditionally
Enable CFG_BOOT_MEM unconditionally and call the boot_mem_*() functions as needed from entry_*.S and boot.c.
The pager will reuse all boot_mem memory internally when configured. The non-pager configuration will unmap the memory and make it available for TAs if needed.
__FLATMAP_PAGER_TRAILING_SPACE is removed from the link script; collect_mem_ranges() in core/mm/core_mmu.c maps the memory following VCORE_INIT_RO automatically.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
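For context, the boot_mem_*() functions behave like a simple carve-out allocator over a memory window set up by the entry code; what remains unused can later be unmapped (non-pager) or reclaimed by the pager. A minimal bump-allocator sketch of that idea follows; the names mirror the boot_mem_*() pattern but the signatures and behavior are simplified assumptions, with no relocation handling.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static uintptr_t boot_mem_next; /* first free byte in the boot-mem window */
static uintptr_t boot_mem_end;  /* end of the window */

void boot_mem_init(uintptr_t start, uintptr_t end)
{
    boot_mem_next = start;
    boot_mem_end = end;
}

void *boot_mem_alloc(size_t len, size_t align)
{
    uintptr_t va = 0;

    assert(align && !(align & (align - 1))); /* power-of-two alignment */
    va = (boot_mem_next + align - 1) & ~(uintptr_t)(align - 1);
    assert(va + len <= boot_mem_end);
    boot_mem_next = va + len;
    return (void *)va;
}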

# 5727b6af | 20-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: add boot_cached_mem_end
Add boot_cached_mem_end in C code, replacing the previous read-only mapped cached_mem_end. This allows updates to boot_cached_mem_end after MMU has been enabled.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>

# 72f437a7 | 03-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: add CFG_CORE_ASLR_SEED
Add CFG_CORE_ASLR_SEED to override the seed used when CFG_CORE_ASLR=y. CFG_CORE_ASLR_SEED is intended to help debug ASLR-related issues by using the same address layout each time.
CFG_CORE_ASLR_SEED requires CFG_INSECURE=y.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
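A minimal sketch of how such an override typically plugs in; only the CFG_CORE_ASLR_SEED idea comes from the commit, and the fallback helper name is hypothetical:

/* Hypothetical normal entropy source used when no override is set. */
unsigned long get_aslr_seed_from_platform(void);

unsigned long get_aslr_seed(void)
{
#ifdef CFG_CORE_ASLR_SEED
    /* Fixed seed: same layout on every boot, for debugging only. */
    return CFG_CORE_ASLR_SEED;
#else
    return get_aslr_seed_from_platform();
#endif
}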

# 449b5f25 | 13-Aug-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: boot: use thread specific PAUTH keys
Use thread-specific PAUTH keys during boot while using a thread-specific stack pointer.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

# faf09045 | 15-Jun-2024 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: introduce boot_init_primary_final()
Introduce boot_init_primary_final() and move the call to call_finalcalls() into that function.
This is needed in later patches to enable PAUTH before boot_init_primary_final() is called.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>

# b2f99d20 | 01-Feb-2024 | Olivier Deprez <olivier.deprez@arm.com>
core: boot: fix memtag init sequence
Based on the following observations on FVP: with boot_init_memtag called before the MMU is enabled, DC GZA hits an alignment fault. This is because all accesses are of device type when the MMU is off; the Arm ARM states for DC GZA: "If the memory region being modified is any type of Device memory, this instruction can give an alignment fault." With boot_init_memtag moved after the MMU is enabled, DC GZA hits a permission fault instead. This is because the range returned by core_mmu_get_secure_memory() consists of pages mapped RO (text sections) and then RW (data sections) consecutively, and DC GZA is a write instruction, so executing it towards an RO page leads to a fault.
To fix this, split boot_init_memtag into two halves:
- Set up memtag operations before the MMU is enabled such that MAIR_EL1 is properly configured for normal tagged memory.
- Clear core TEE RW sections after the MMU is enabled.
Closes: https://github.com/OP-TEE/optee_os/issues/6649
Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
[jw rewrote boot_clear_memtag()]
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

# f332e77c | 02-Oct-2023 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: refactor boot argument handling
Adds a C function, boot_save_args(), that analyzes and saves the needed parameters as early as possible, depending on the current configuration. The parameters are stored in global variables, which are then accessed by the subsequently called functions boot_init_primary_early(), boot_init_primary_late(), and get_aslr_seed().
entry_a32.S now preserves {r0-r3,lr} and passes them to boot_save_args().
entry_a64.S now preserves {x0-x3} and passes them to boot_save_args() with zero in a5.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com> Acked-by: Raymond Mao <raymond.mao@linaro.org>
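A sketch of the shape this takes (illustrative only; what each register actually carries depends on the configuration, so the arguments are simply stashed verbatim here and the name of the storage is assumed):

static unsigned long boot_args[6]; /* read later by boot_init_primary_early()/late() */

void boot_save_args(unsigned long a0, unsigned long a1, unsigned long a2,
                    unsigned long a3, unsigned long a4, unsigned long a5)
{
    boot_args[0] = a0;
    boot_args[1] = a1;
    boot_args[2] = a2;
    boot_args[3] = a3;
    boot_args[4] = a4;
    boot_args[5] = a5;
}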

# 1005ab2c | 03-Oct-2023 | Jens Wiklander <jens.wiklander@linaro.org>
core: ffa: parse boot info from SPMC at EL3
The SPMC at EL3 passes boot info in x0; parse it to find the manifest DT.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com> Acked-by: Raymond Mao <raymond.mao@linaro.org>

# 3050ae8a | 08-Sep-2023 | Jens Wiklander <jens.wiklander@linaro.org>
core: unconditionally support manifest DT with FF-A
When configured for FF-A (CFG_CORE_FFA=y), unconditionally support receiving a manifest device tree. This also makes CFG_DT=y mandatory with FF-A.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Leisen <leisen1@huawei.com> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>

# 6fa59c9a | 12-May-2023 | Seonghyun Park <seonghp@amazon.com>
arm64: Introduce permissive PAN implementation
Privileged Access Never (PAN) is part of the ARMv8.1 extensions. It restricts accesses to unprivileged memory from privileged mode in order to prevent unintended accesses to potentially malicious memory.
This introduces configuration of PAN and the helper functions enter_user_access() and exit_user_access(), which toggle PSTATE.PAN, the bit that controls the behavior of PAN.
The current OP-TEE implementation is not ready to apply a strict PAN policy due to missing user-access function uses, etc.
Hence, this patch takes a very permissive approach (yet better than nothing), where PAN is deactivated for the entire lifetime of thread_svc_handler() (i.e., a system call).
Signed-off-by: Seonghyun Park <seonghp@amazon.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
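The PSTATE.PAN toggling the helpers perform can be sketched like this, assuming an ARMv8.1+ toolchain that accepts the MSR immediate form; the actual OP-TEE helpers may differ:

/* Allow privileged code to access user (EL0) mappings, e.g. around
 * system-call parameter handling. */
static inline void enter_user_access(void)
{
    asm volatile("msr pan, #0" ::: "memory");
}

/* Re-arm PAN so privileged accesses to user mappings fault again. */
static inline void exit_user_access(void)
{
    asm volatile("msr pan, #1" ::: "memory");
}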

# b89b3da2 | 21-Apr-2023 | Vincent Chuang <Vincent.Chuang@mediatek.com>
core: thread: Add support for canary value randomization
Currently a hardcoded magic number is used as the thread stack canary; an attacker with full control over the overflow can place the hardcoded canary value at the right location to bypass the overflow detection.
To add an extra layer of security, redefine the canary value as a variable so that the canary can be initialized at runtime.
The canaries are initialized with static values from thread_init_canaries() during the early boot stage. plat_get_random_stack_canaries() is refactored to support arbitrary-length random numbers, and a new function called thread_update_canaries() is created to fetch the random values and update the thread canaries. For CFG_NS_VIRTUALIZATION=y, the update function is disabled.
Signed-off-by: Vincent Chuang <Vincent.Chuang@mediatek.com> Signed-off-by: Randy Hsu <Randy-CY.Hsu@mediatek.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
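An illustrative sketch of the two-stage canary setup described above; the canary width, the CFG_NUM_THREADS value, the static pattern, and the exact plat_get_random_stack_canaries() signature are assumptions, not the real interfaces:

#include <stddef.h>
#include <stdint.h>

#define CFG_NUM_THREADS 8           /* example value */
#define CANARY_STATIC   0xab1ec0deU /* placeholder static pattern */

static uint32_t thread_canaries[CFG_NUM_THREADS];

/* Assumed signature: fill buf with len random bytes. */
void plat_get_random_stack_canaries(void *buf, size_t len);

void thread_init_canaries(void)
{
    /* Early boot: no RNG available yet, use the static value. */
    for (size_t n = 0; n < CFG_NUM_THREADS; n++)
        thread_canaries[n] = CANARY_STATIC;
}

void thread_update_canaries(void)
{
    /* Later, once entropy is available (and not with CFG_NS_VIRTUALIZATION=y),
     * overwrite the canaries with random values. */
    plat_get_random_stack_canaries(thread_canaries, sizeof(thread_canaries));
}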

# 30bfe0d4 | 06-Mar-2023 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm64: use adr_l for __nex_bss_start and __nex_bss_end
Fixes the following linker errors when CFG_NS_VIRTUALIZATION is enabled:
.../entry_a64.o: in function `clear_bss':
.../entry_a64.S:237:(.text._start+0x8c): relocation truncated to fit: R_AARCH64_ADR_PREL_LO21 against symbol `__nex_bss_start' defined in .bss.mempool_default section in all_objs.o
.../entry_a64.S:238:(.text._start+0x90): relocation truncated to fit: R_AARCH64_ADR_PREL_LO21 against symbol `__nex_bss_end' defined in .bss.mempool_default section in all_objs.o
Use the adr_l macro instead of adr to get the addresses for start and end of .nex_bss.
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

# 0d928692 | 11-Apr-2023 | Jens Wiklander <jens.wiklander@linaro.org>
core: support physically relocatable OP-TEE binary
With CFG_CORE_PHYS_RELOCATABLE=y, enable support in OP-TEE to relocate itself, allowing it to run from a physical address that differs from the link address.
This feature is currently only supported with CFG_CORE_SEL2_SPMC=y since the TEE core has to know the range of available memory. With SPMC at EL2 this is accomplished via get_sec_mem_from_manifest(). An SPMC at S-EL2 may need to load OP-TEE at a different address depending on configuration.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

# e1602654 | 11-Apr-2023 | Jens Wiklander <jens.wiklander@linaro.org>
core: ffa: parse boot info
With CFG_CORE_SEL2_SPMC=y OP-TEE is executed as an SP at S-EL1. The manifest describing the OP-TEE SP is passed as a boot argument.
The manifest contains, among other things, the two properties "load-address" and "mem-size". These describe the secure memory allocated for OP-TEE, covering both core and TA memory. The retrieved memory range is saved with a call to core_mmu_set_secure_memory() to be used when initializing the MMU and other memory configuration.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
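A libfdt-based sketch of reading those two properties; the node location, the 64-bit cell assumption, and the error handling are simplifications, not the actual OP-TEE parser:

#include <stdint.h>
#include <libfdt.h>

static int get_sec_mem_from_manifest(const void *fdt, uint64_t *base,
                                     uint64_t *size)
{
    int node = fdt_path_offset(fdt, "/"); /* assumed: properties in the root node */
    const fdt64_t *prop = NULL;
    int len = 0;

    if (node < 0)
        return node;

    prop = fdt_getprop(fdt, node, "load-address", &len);
    if (!prop || len < (int)sizeof(*prop))
        return -FDT_ERR_NOTFOUND;
    *base = fdt64_to_cpu(*prop);

    prop = fdt_getprop(fdt, node, "mem-size", &len);
    if (!prop || len < (int)sizeof(*prop))
        return -FDT_ERR_NOTFOUND;
    *size = fdt64_to_cpu(*prop);

    return 0;
}

The recovered range is then what gets handed to core_mmu_set_secure_memory(), as noted above.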

# 460c9735 | 11-Apr-2023 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: boot: use correct base address for relocation
The start of the memory used by the OP-TEE core is TEE_LOAD_ADDR, but relocate() and undo_init_relocation() have until now used TEE_RAM_START instead, which only works when it has the same value as TEE_LOAD_ADDR. So fix this by using TEE_LOAD_ADDR instead.
Fixes: 5966660c02b3 ("core: move relocation to embedded data region")
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>

# c79fb6d4 | 11-Apr-2023 | Jens Wiklander <jens.wiklander@linaro.org>
core: rename load_offset in struct core_mmu_config
Renames the field load_offset in struct core_mmu_config to the more accurate name map_offset.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>