# 2cd578ba | 23-May-2025 | Jens Wiklander <jens.wiklander@linaro.org>

core: fix asan for CFG_WITH_PAGER=n
Some fixes are needed to make CFG_CORE_SANITIZE_KADDRESS=y work both with and without CFG_DYN_CONFIG=y.
Sanitizing stack addresses isn't supported with CFG_DYN_CONFIG=y since it requires extensive changes in the ASAN framework.
The VCORE_FREE area is moved right before the .asan_shadow area.
init_asan() calls boot_mem_init_asan() to tag access to already allocated boot memory.
entry_a32.S is updated to skip allowing access to stacks in the .asan_shadow area for CFG_DYN_CONFIG=y since stacks are stored elsewhere in that configuration.
entry_a64.S is updated to initialize the .asan_shadow area in the same way as in entry_a32.S.
The .asan_shadow area is mapped explicitly in collect_mem_ranges() instead of relying on the now non-existent coverage of MEM_AREA_TEE_RAM_RW.
The combination of CFG_DYN_CONFIG=y and CFG_WITH_PAGER=y is not yet known to work.
Fixes: 1c1f8b65b5c6 ("core: mm: unify secure core and TA memory")
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
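
For background on what the .asan_shadow area does: a KASAN-style checker maps every 8-byte granule of sanitized memory to one shadow byte. A minimal sketch of that mapping, with placeholder symbol names rather than OP-TEE's actual ones:

```c
#include <stdint.h>

/* Placeholder symbols, not OP-TEE's actual ones. */
extern uint8_t __asan_shadow_start[];   /* start of .asan_shadow */
extern uint8_t __asan_area_start[];     /* start of the sanitized VA range */

#define ASAN_GRANULE 8  /* standard ASAN shadow granularity */

/* One shadow byte describes the state of one 8-byte granule; a helper
 * like boot_mem_init_asan() would clear the shadow bytes covering
 * already-allocated boot memory to mark it accessible. */
static uint8_t *va_to_shadow(uintptr_t va)
{
        return __asan_shadow_start +
               (va - (uintptr_t)__asan_area_start) / ASAN_GRANULE;
}
```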

# d461c892 | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>

core: arm: enable CFG_BOOT_MEM unconditionally
Enable CFG_BOOT_MEM unconditionally and call the boot_mem_*() functions as needed from entry_*.S and boot.c.
The pager will reuse all boot_mem memory internally when configured. The non-pager configuration will unmap the memory and make it available for TAs if needed.
__FLATMAP_PAGER_TRAILING_SPACE is removed from the link script; collect_mem_ranges() in core/mm/core_mmu.c maps the memory following VCORE_INIT_RO automatically.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@foss.st.com>
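
The boot_mem_*() API itself isn't quoted in this log; as a mental model, an allocator of this kind is commonly a simple bump allocator over a boot-time carve-out that can later be released or handed back. A sketch with hypothetical names, not the actual implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical bump allocator over a boot-memory carve-out. */
static uintptr_t boot_mem_pos;  /* next free address */
static uintptr_t boot_mem_end;  /* end of the carve-out */

static void *boot_mem_alloc_sketch(size_t len, size_t align)
{
        uintptr_t va = (boot_mem_pos + align - 1) & ~(uintptr_t)(align - 1);

        if (len > boot_mem_end - va)
                return NULL;    /* the real code would likely panic */
        boot_mem_pos = va + len;
        return (void *)va;
}
```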

# 99c6021f | 14-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>

core: arm,pager: make __vcore_init_ro_start follow __vcore_init_rx_end
This concerns configurations with CFG_WITH_PAGER=y. Until this patch, even if __vcore_init_ro_size (VCORE_INIT_RO_SZ) is 0 for CFG_CORE_RODATA_NOEXEC=n, __vcore_init_ro_start was using some value smaller than __vcore_init_rx_end. To simplify code trying to find the end of VCORE_INIT_RX and VCORE_INIT_RO parts of the binary, make sure that __vcore_init_ro_start follows right after __vcore_init_rx_end.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>

# a5ac48d6 | 13-Sep-2024 | Jens Wiklander <jens.wiklander@linaro.org>

core: add VCORE_FREE_{PA,SZ,END_PA}
Add VCORE_FREE_{PA,SZ,END_PA} defines to identify the unused and free memory range at the end of TEE_RAM_START..(TEE_RAM_START + TEE_RAM_VA_SIZE).
VCORE_FREE_SZ is 0 in a pager configuration since all the memory is used by the pager.
The VCORE_FREE range is excluded from the TEE_RAM_RW area for CFG_NS_VIRTUALIZATION=y and instead put in a separate NEX_RAM_RW area. This makes each partition use a bit less memory and leaves the VCORE_FREE range available for the Nexus.
The VCORE_FREE range is added to the TEE_RAM_RW area for the normal configuration with CFG_NS_VIRTUALIZATION=n and CFG_WITH_PAGER=n. It's in practice unchanged behaviour in this configuration.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>
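
A sketch of how such defines can be derived from the surrounding layout, reconstructed from the description above rather than copied from the headers:

```c
/* Sketch only: VCORE_FREE covers whatever remains between the end of the
 * last used vcore area and the end of the TEE RAM window. The expressions
 * are illustrative; the real ones differ per configuration as described. */
#define VCORE_FREE_PA           (VCORE_UNPG_RW_PA + VCORE_UNPG_RW_SZ)
#define VCORE_FREE_END_PA       (TEE_RAM_START + TEE_RAM_VA_SIZE)
#define VCORE_FREE_SZ           (VCORE_FREE_END_PA - VCORE_FREE_PA)
```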

# bfcdda39 | 20-Aug-2024 | Jens Wiklander <jens.wiklander@linaro.org>

core: arm: kern.ld.S: assert enough RAM for paging
Update the assert for enough RAM for paging to take hash data and relocation information into account.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Etienne Carriere <etienne.carriere@foss.st.com>

# 9cb4152f | 26-Jul-2024 | Jens Wiklander <jens.wiklander@linaro.org>

core: arm: kern.ld.S: align .ARM.ex* sections
Make sure that the .ARM.exidx and .ARM.extab sections are 8-byte aligned to work with CFG_CORE_SANITIZE_KADDRESS=y.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>

# 0d928692 | 11-Apr-2023 | Jens Wiklander <jens.wiklander@linaro.org>

core: support physically relocatable OP-TEE binary
With CFG_CORE_PHYS_RELOCATABLE=y, enable support in OP-TEE to relocate itself so that it can run from a physical address that differs from the link address.
This feature is currently only supported with CFG_CORE_SEL2_SPMC=y since the TEE core has to know the range of available memory. With the SPMC at S-EL2 this is accomplished via get_sec_mem_from_manifest(). An SPMC at S-EL2 may need to load OP-TEE at a different address depending on configuration.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
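
Physical relocation amounts to computing the difference between the runtime load address and the link address, then patching every absolute address the linker recorded. A simplified sketch under assumed names, not OP-TEE's actual code:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch: rebase absolute addresses after the binary has been placed at
 * a physical address other than the link address. */
static void relocate_sketch(uintptr_t load_pa, uintptr_t link_pa,
                            const uintptr_t *reloc_addrs, size_t count)
{
        uintptr_t diff = load_pa - link_pa;

        for (size_t n = 0; n < count; n++) {
                /* each entry holds the linked address of a pointer to patch */
                uintptr_t *where = (uintptr_t *)(reloc_addrs[n] + diff);

                *where += diff;
        }
}
```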

# d690c838 | 11-Apr-2023 | Jens Wiklander <jens.wiklander@linaro.org>

core: arm: kern.ld.S: assert load address page aligned
Simplify things and assert that the start of text or load address is page aligned instead. This replaces the more relaxed check for 32-byte or 128-byte alignment depending on ARM32 or ARM64. This also helps later when introducing physical relocation of the OP-TEE binary.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>

# ee34e7ea | 11-Apr-2023 | Jens Wiklander <jens.wiklander@linaro.org>

core: remove TEE_RAM_VA_START and TEE_TEXT_VA_START
TEE_RAM_VA_START and TEE_TEXT_VA_START are defined to exactly the same thing as TEE_RAM_START and TEE_LOAD_ADDR respectively. They don't deal with virtual addresses as the names suggest; they too represent physical addresses. So remove TEE_RAM_VA_START and TEE_TEXT_VA_START to get rid of some redundancy.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>

# b76b2296 | 03-Feb-2023 | Jerome Forissier <jerome.forissier@linaro.org>

virt: rename CFG_VIRTUALIZATION to CFG_NS_VIRTUALIZATION
With the advent of virtualization support at S-EL2 in the Armv8.4-A architecture, CFG_VIRTUALIZATION has become ambiguous. Let's rename it to CFG_NS_VIRTUALIZATION to indicate more clearly that it is about supporting virtualization on the non-secure side.
This commit is the result of the following command:
$ for f in $(git grep -l -w CFG_VIRTUALIZATION); do \
    sed -i -e 's/CFG_VIRTUALIZATION/CFG_NS_VIRTUALIZATION/g' $f; \
  done
...plus the compatibility line in mk/config.mk:
CFG_NS_VIRTUALIZATION ?= $(CFG_VIRTUALIZATION)
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Acked-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

# 1aae2c8e | 19-Jan-2022 | Jerome Forissier <jerome@forissier.org>

core: pager: export __{text,rodata}_{init,pageable}_{start,end}
Add symbols __text_pageable_start, __text_pageable_end, __rodata_pageable_start and __rodata_pageable_end. They will later be used by the attestation PTA.
Signed-off-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Sumit Garg <sumit.garg@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>

# 487f8cd2 | 01-Feb-2022 | Jerome Forissier <jerome@forissier.org>

core: compiler.h: introduce __relrodata_unpaged(x)
Introduce the macro __relrodata_unpaged(x) to mark data that need to be unpaged and are essentially read-only but may contain relocations when ASLR is enabled, hence "relocatable read-only". When ASLR is turned off, the macro is identical to __rodata_unpaged(x). When ASLR is on, however, the data is emitted in section .data.rel.ro.__unpaged.x, which is later gathered by the linker file into the output section .data.rel.ro; that section is mapped read-only at runtime (after relocations are processed) and is also unpaged (when the pager is enabled).
Signed-off-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Sumit Garg <sumit.garg@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
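
A sketch of what the macro pair can look like, reconstructed from this description (the actual definition in compiler.h may differ in detail):

```c
/* Reconstruction, not a copy of compiler.h. With ASLR the data must stay
 * writable until relocations are applied, so it lands in a
 * .data.rel.ro.* input section that the linker script gathers. */
#ifdef CFG_CORE_ASLR
#define __relrodata_unpaged(x) \
        __attribute__((section(".data.rel.ro.__unpaged." x)))
#else
#define __relrodata_unpaged(x) __rodata_unpaged(x)
#endif
```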

# c0af48e6 | 03-Jan-2022 | Jerome Forissier <jerome@forissier.org>

core: kern.ld.S: move .scattered_array* into .data.rel.ro
Moves the symbols tagged with .scattered_array* from the .rodata output section into a new output section: .data.rel.ro, which is also writeable (hence the suppression of __SECTION_FLAGS_RODATA in scattered_array.h) but placed in tee.elf to be mapped read-only after relocations are applied. The new section is created only when core ASLR is enabled; otherwise no relocation can occur and we can keep the previous code.
Signed-off-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Sumit Garg <sumit.garg@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>

# 889fb568 | 14-Dec-2021 | Jerome Forissier <jerome@forissier.org>

core: add delimited area in .text to store data
A few variables such as boot_mmu_config are stored within the .text section of tee.elf because they need to be reachable from the identity mapping, which covers a subset of .text. Having them there, however, is a problem when one wants to measure (hash) the .text section because the runtime content may differ from the content in tee.elf. To work around this issue, allocate an area in the .text section to gather the data that are modified at boot time. Symbols tagged with .identity_map.data will be stored there. Two delimiters are introduced: __text_data_start and __text_data_end.
Signed-off-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Sumit Garg <sumit.garg@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
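
Tagging a variable for the new area is a one-line section attribute; an illustrative example with a hypothetical variable name:

```c
/* Illustrative only: keep a boot-modified variable inside the delimited
 * .identity_map.data area (between __text_data_start and __text_data_end)
 * so the rest of .text stays stable and measurable. */
static unsigned long example_boot_cfg
        __attribute__((section(".identity_map.data")));
```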

# 61bdedea | 13-Jan-2022 | Jerome Forissier <jerome@forissier.org>

core: define DT drivers using scattered arrays
Replace the specific mechanism used to define and enumerate DT drivers with scattered arrays. Doing so simplifies the TEE linker file a bit.
Signed-off-by: Jerome Forissier <jerome@forissier.org>
Suggested-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
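
With scattered arrays, each driver contributes one element to a section the linker script collects into a contiguous array, so no per-subsystem linker symbols are needed. A usage sketch; the struct dt_driver fields and the trace macro are assumptions:

```c
/* Usage sketch; field names are assumptions. */
SCATTERED_ARRAY_DEFINE_PG_ITEM(dt_drivers, struct dt_driver) = {
        .name = "example-driver",
};

/* Enumeration then works generically over the gathered array. */
static void list_dt_drivers(void)
{
        const struct dt_driver *drv = NULL;

        SCATTERED_ARRAY_FOREACH(drv, dt_drivers, struct dt_driver)
                IMSG("DT driver: %s", drv->name);
}
```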

# 7a6682fc | 13-Dec-2021 | Ruchika Gupta <ruchika.gupta@linaro.org>

Move section .note.gnu.property after .text in lds files
It is observed that the Clang compiler sometimes places .note.gnu.property at offset 0. For TAs, the loader expects the user_ta_header at that location, while for ldelf, _ldelf_start() is expected there. To avoid such conflicts, place this section after the text section.
Signed-off-by: Ruchika Gupta <ruchika.gupta@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>

# 0d206ea0 | 07-Jun-2021 | Izik Dubnov <izik@amazon.com>

core: lpae: use "base table" naming instead of "l1 table"
This is a preparation for supporting a base table which is not level 1 (i.e. to support level 0). It tries not to change anything functional; it is rather just a renaming. The "base table" terminology is taken from TF-A.
- Renamed CORE_MMU_L1_TBL_OFFSET -> CORE_MMU_BASE_TABLE_OFFSET
- Added CORE_MMU_BASE_TABLE_LEVEL instead of hard-coded "1"
- Added CORE_MMU_BASE_TABLE_SHIFT instead of hard-coded "30"
A few new defines were copied from TF-A xlat_tables_def.h, like the existing XLAT-related defines.
Signed-off-by: Izik Dubnov <izik@amazon.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>

# 27c64925 | 12-May-2021 | Jens Wiklander <jens.wiklander@linaro.org>

core: use separate sections for each __rodata_unpaged variable
Adds a mandatory argument to the macro __rodata_unpaged() to take the name of the variable to put in the unpaged rodata section. This will result in separate sections for each such variable and make it easier to debug the pruning of the dependency tree for unpaged sections.
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
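
Usage then looks roughly as below (variable name hypothetical); every tagged variable ends up in its own input section, which makes the dependency pruning visible per symbol:

```c
/* Hypothetical example: the table lands in its own input section,
 * e.g. .rodata.__unpaged.example_tbl, rather than one shared section. */
static const unsigned char example_tbl[16] __rodata_unpaged("example_tbl");
```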

# 9ebe34b0 | 26-Jan-2021 | Volodymyr Babchuk <volodymyr_babchuk@epam.com>

link: make section size definitions relocation-proof
The value of the define VCORE_UNPG_RW_SZ is determined by the linker script and provided to C code as a symbol value (__vcore_unpg_rw_size). This is a standard way of sharing linker variables with C code, described in the ld manual.
The problem is that the linker sometimes makes those symbols relocatable, and the ASLR code then moves them to random places along with the rest of the OP-TEE image.
For example, on a build for the RCAR platform I am getting these entries in the relocation section:
[...]
000000004415b120 R_AARCH64_RELATIVE *ABS*+0x0000000044100180
000000004415af60 R_AARCH64_RELATIVE *ABS*+0x000000004415fc48
000000004415afb0 R_AARCH64_RELATIVE *ABS*+0x00000000000a4000 <======
000000004415aef8 R_AARCH64_RELATIVE *ABS*+0x000000004415c000
[...]
From a programmer's point of view this looks as if the "constant" VCORE_UNPG_RW_SZ has a random value on every boot.
The obvious approach is to provide the section end address and then calculate the size on the C side:
#define VCORE_UNPG_RW_SZ ((size_t)(__vcore_unpg_rw_end - __vcore_unpg_rw_start))
But with this approach the compiler can't initialize constant values in definitions like
register_phys_mem_ul(MEM_AREA_TEE_RAM_RW, VCORE_UNPG_RW_PA, VCORE_UNPG_RW_SZ);
from core_mmu.c.
Basically, this leads to the following constraints:
1. If we calculate the section size in the linker script, the compiler can use it as a constant expression, but the value may be mangled by ASLR at run time.
2. We can't calculate the section size in C code, because then the value can't be used as a constant expression.
This patch works around the issue by providing two sets of definitions: the old _SZ definition is renamed to _SZ_UNSAFE and should be used only where a constant expression is required, and only if it is referenced before dynamic relocations have been applied. The new _SZ definition can be used in all other situations.
The value of the new _SZ is obtained by subtracting the section start address from the end address. Additional linker symbols are introduced to provide the section end addresses.
Fixes: 170e9084a84f ("core: add support for CFG_CORE_ASLR")
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
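
Put together, the resulting pattern looks roughly like this; the symbol names follow the commit text, while the exact header layout is an assumption:

```c
#include <stddef.h>
#include <stdint.h>

/* Linker-provided symbols: their addresses carry the information. */
extern const uint8_t __vcore_unpg_rw_start[];
extern const uint8_t __vcore_unpg_rw_end[];
extern const uint8_t __vcore_unpg_rw_size[];    /* value computed by ld */

/* Safe after relocation: a plain pointer difference at run time. */
#define VCORE_UNPG_RW_SZ \
        ((size_t)(__vcore_unpg_rw_end - __vcore_unpg_rw_start))

/* Usable as a constant expression (e.g. in static initializers), but
 * only trustworthy before dynamic relocations are applied. */
#define VCORE_UNPG_RW_SZ_UNSAFE ((size_t)__vcore_unpg_rw_size)
```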

# f3368ec8 | 27-Nov-2020 | Jerome Forissier <jerome@forissier.org>

core: arm: kern.ld.S: fix ROUNDUP() and ROUNDDOWN() for Clang
Fixes exceptions on boot when CFG_WITH_ASLR=y, CFG_WITH_PAGER=y, and the Clang toolchain is used (tested with QEMUv8 and Clang 11.0.0).
The Clang linker happens to generate non-relocatable references to symbols defined by expressions in the linker script which involve some arithmetic operations on another symbol. More specifically, when rounding up or down addresses to page boundaries using the expressions defined in <util.h>. This commit introduces different ways of doing ROUNDUP() and ROUNDDOWN() which work with both Clang and GCC:
- ROUNDUP() is replaced with the linker ALIGN() built-in function,
- ROUNDDOWN() is rewritten as 'symbol - something'.
Signed-off-by: Jerome Forissier <jerome@forissier.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>

# eb5f87aa | 26-Nov-2020 | Jerome Forissier <jerome@forissier.org>

core: arm: kern.ld.S: remove redundant line
__rodata_init_end is defined twice. Remove one instance.
Signed-off-by: Jerome Forissier <jerome@forissier.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>

# ed30b6c7 | 15-Oct-2020 | Jens Wiklander <jens.wiklander@linaro.org>

early_ta: use scattered array helpers
Simplifies the core linker script by replacing the hard-coded .rodata.early_ta section with SCATTERED_ARRAY_DEFINE_PG_ITEM().
Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

# 55c1b947 | 10-Dec-2019 | Jens Wiklander <jens.wiklander@linaro.org>

core: fix generation of tee.bin
Prior to this patch, generation of tee.bin (CFG_WITH_PAGER=n) fails with:

    GEN     out/core/tee.bin
    Cannot find symbol __init_end
    core/arch/arm/kernel/link.mk:183: recipe for target 'out/core/tee.bin' failed
Introduce a special __get_tee_init_end to fix this and also avoid confusion with __init_end used in the code for the pager case.
Fixes: 5dd1570ac5b0 ("core: add embedded data region")
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

# 5966660c | 21-Oct-2019 | Jens Wiklander <jens.wiklander@linaro.org>

core: move relocation to embedded data region
The relocation sections are placed last in the linker script to keep them out of the way of the other sections. The relocation sections are interpreted by gen_tee_bin.py and converted into a more compact data structure which is stored in the embedded data region.
For each relocation, only one 32-bit offset is kept. Compared to the standard ELF format, the size of the relocation table is either halved (Rel32 type: two 32-bit words per entry) or divided by 6 (Rel64 type: three 64-bit words per entry).
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
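
Consuming the compact table at boot then reduces to a loop of this shape (a sketch under assumed names; the real interpreter lives in the boot code):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch: each entry is one 32-bit offset from the load address to a
 * location holding an absolute address that must be rebased by 'diff'
 * (runtime load address minus link address). */
static void apply_compact_relocs(uint8_t *load_base, unsigned long diff,
                                 const uint32_t *relocs, size_t count)
{
        for (size_t n = 0; n < count; n++) {
                unsigned long *where =
                        (unsigned long *)(load_base + relocs[n]);

                *where += diff;
        }
}
```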

# 5dd1570a | 21-Oct-2019 | Jens Wiklander <jens.wiklander@linaro.org>

core: add embedded data region
Until this patch, hashes have been supplied as a single blob following the init part when configured for paging. To facilitate storing additional data when OP-TEE is initializing, a struct boot_embdata is added. This struct is populated by gen_tee_bin.py and later interpreted by assembly boot code and init_runtime().
The previous memory allocation for hashes in the linker script is replaced by this new mechanism.
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
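
A sketch of the kind of header such a region carries, reconstructed from the description; the field names are assumptions, not the verified layout:

```c
#include <stdint.h>

/* Assumed layout: offsets and lengths are relative to the start of the
 * embedded data region that gen_tee_bin.py appends after the init part. */
struct boot_embdata {
        uint32_t total_len;     /* size of the whole embedded region */
        uint32_t num_blobs;
        uint32_t hashes_offset; /* page hashes consumed by init_runtime() */
        uint32_t hashes_len;
        uint32_t reloc_offset;  /* compact relocation data (later commit) */
        uint32_t reloc_len;
};
```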