# dfdb73f7 | 16-Sep-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes from topic "bk/no_blx_setup" into integration
* changes:
  fix: replace stray BL2_AT_EL3 with RESET_TO_BL2
  refactor(aarch64): move BL31 specific setup out of the PSCI entrypoint
  refactor: unify blx_setup() and blx_main()
  fix(bl2): unify the BL2 EL3 and RME entrypoints

# d158d425 | 13-Aug-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
refactor: unify blx_setup() and blx_main()
All BLs have a bl_setup() for things that need to happen early, a fall back into assembly, and then a bl_main() for the main functionality. This was necessary in order to fiddle with PAuth-related things that tend to break C calls. Since then, PAuth's enablement has seen a lot of refactoring and is now worked around cleanly, so the distinction can be removed. The only tradeoff is that PAuth must not be used for the top-level main function.
There are two main benefits to doing this: First, code is easier to understand as it's all together and the entrypoint is smaller. Second, the compiler gets to see more of the code and apply optimisations (importantly LTO).
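For illustration, a minimal sketch of the unified shape described above, assuming hypothetical helper names (bl_early_platform_setup, bl_plat_arch_setup, bl_do_main_work) rather than the actual TF-A functions:

    #include <stdint.h>

    /* Hypothetical helpers standing in for a BL's early setup and main work. */
    void bl_early_platform_setup(uint64_t a0, uint64_t a1, uint64_t a2, uint64_t a3);
    void bl_plat_arch_setup(void);
    void bl_do_main_work(void);

    /*
     * What used to be blx_setup() (entered early from assembly) now runs at
     * the top of the single C entry function, followed by what used to be
     * blx_main(). As the commit notes, this top-level function has to be
     * built without PAuth.
     */
    void bl_main(uint64_t a0, uint64_t a1, uint64_t a2, uint64_t a3)
    {
        bl_early_platform_setup(a0, a1, a2, a3);  /* formerly blx_setup() */
        bl_plat_arch_setup();
        bl_do_main_work();                        /* formerly blx_main() */
    }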
Change-Id: Iddb93551115a2048988017547eb7b8db441dbd37 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

# ee656609 | 16-Apr-2025 | André Przywara <andre.przywara@arm.com>
Merge changes Id942c20c,Idd286bea,I8917a26e,Iec8c3477,If3c25dcd, ... into integration
* changes:
  feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED
  perf(cpufeat): centralise PAuth key saving
  refactor(cpufeat): convert FEAT_PAuth setup to C
  refactor(cpufeat): prepare FEAT_PAuth for FEATURE_DETECTION
  chore(cpufeat): remove PAuth presence checks
  feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED

# 10ecd580 | 26-Mar-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED
Introduce the is_feat_bti_{supported, present}() helpers and replace checks for ENABLE_BTI with them. Also factor the setting of SCTLR_EL3.BT out of the PAuth enablement and place it in the respective entrypoints where we initialise SCTLR_EL3. This makes PAuth self-contained and SCTLR_EL3 initialisation centralised.
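A rough sketch of the supported/present split behind FEAT_STATE_CHECKED, assuming TF-A-style names (the real helpers and register accessors live in arch_features.h and may differ):

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed accessor for ID_AA64PFR1_EL1; TF-A generates its own. */
    uint64_t read_id_aa64pfr1_el1(void);

    #define ID_AA64PFR1_EL1_BT_SHIFT  0
    #define ID_AA64PFR1_EL1_BT_MASK   0xfULL

    /* "present": the hardware actually implements FEAT_BTI. */
    static inline bool is_feat_bti_present(void)
    {
        return ((read_id_aa64pfr1_el1() >> ID_AA64PFR1_EL1_BT_SHIFT) &
                ID_AA64PFR1_EL1_BT_MASK) != 0ULL;
    }

    /* "supported": the build allows it and, in the CHECKED state, the
     * hardware confirms it at runtime. */
    static inline bool is_feat_bti_supported(void)
    {
    #if ENABLE_BTI == 0
        return false;                 /* compiled out */
    #elif ENABLE_BTI == 1
        return true;                  /* assumed always present */
    #else
        return is_feat_bti_present(); /* FEAT_STATE_CHECKED: ask the hardware */
    #endif
    }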
Change-Id: I0c0657ff1e78a9652cd2cf1603478283dc01f17b Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

# 6129e9a6 | 13-Sep-2019 | Soby Mathew <soby.mathew@arm.com>
Merge "Refactor ARMv8.3 Pointer Authentication support code" into integration

# ed108b56 | 13-Sep-2019 | Alexei Fedorov <Alexei.Fedorov@arm.com>
Refactor ARMv8.3 Pointer Authentication support code
This patch provides the following features and makes the modifications listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[] of the cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`, which returns a 128-bit value and uses the Generic Timer physical counter value to increase the randomness of the generated key. The new function can be used for generation of all ARMv8.3-PAuth keys (see the sketch after this list).
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively; `pauth_disable_el1()` and `pauth_disable_el3()` functions disable PAuth for EL1 and EL3 respectively; `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from the cpu_data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to `save_gp_registers()` and `pauth_context_save()`; `restore_gp_pauth_registers()` replaces `pauth_context_restore()` and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed, with the corresponding code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated space for 12 uint64_t PAuth registers instead of 10, by removing the macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h` and assigning its value to CTX_PAUTH_REGS_END.
- Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions in the `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
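For illustration, a minimal sketch of a platform key hook along these lines; the 128-bit return type, the accessor name and the constants are assumptions based on the description, and a real platform should mix in a proper entropy source:

    #include <stdint.h>

    /* Assumed accessor for the Generic Timer physical counter (CNTPCT_EL0). */
    uint64_t read_cntpct_el0(void);

    /*
     * Return a 128-bit PAuth key, folding in the physical counter so that
     * each call (warm boot, CPU On) produces a different value.
     */
    __uint128_t plat_init_apkey(void)
    {
        uint64_t lo = read_cntpct_el0() ^ 0xA5A5A5A5A5A5A5A5ULL;
        uint64_t hi = read_cntpct_el0() ^ 0x3C3C3C3C3C3C3C3CULL;

        return ((__uint128_t)hi << 64) | lo;
    }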
Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211 Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>

# 508a48bb | 24-May-2019 | Paul Beesley <paul.beesley@arm.com>
Merge "Add support for Branch Target Identification" into integration

# 9fc59639 | 24-May-2019 | Alexei Fedorov <Alexei.Fedorov@arm.com>
Add support for Branch Target Identification
This patch adds the functionality needed for platforms to provide the Branch Target Identification (BTI) extension, introduced to AArch64 in Armv8.5-A, by adding the BTI instruction used to mark valid targets for indirect branches.

The patch sets the new GP bit [50] in stage 1 Translation Table Block and Page entries to denote guarded EL3 code pages, which causes the processor to trap instructions in protected pages that try to perform an indirect branch to any instruction other than BTI.

The BTI feature is selected by the BRANCH_PROTECTION option, which supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer Authentication, and is disabled by default. Enabling BTI requires compiler support and was tested with GCC versions 9.0.0, 9.0.1 and 10.0.0. The assembly macros and helpers are modified to accommodate the BTI instruction.

This is an experimental feature.

Note: the previous ENABLE_PAUTH build option to enable PAuth in EL3 is now an internal flag and the BRANCH_PROTECTION flag should be used instead to enable Pointer Authentication.

Note: the USE_LIBROM=1 option is currently not supported.
Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>

# c0ce16fb | 13-Mar-2019 | Soby Mathew <soby.mathew@arm.com>
Merge pull request #1878 from jts-arm/sctlr
Apply stricter speculative load restriction

# 02b57943 | 04-Mar-2019 | John Tsichritzis <john.tsichritzis@arm.com>
Apply stricter speculative load restriction
The SCTLR.DSSBS bit is zero by default, thus disabling speculative loads. However, we also explicitly set it to zero for the BL2 and TSP images when each image initialises its context. This is done to ensure that the image environment is initialised in a safe state, regardless of the reset value of the bit.
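A minimal sketch of that explicit clearing, assuming TF-A-style sysreg accessor names (illustrative only; the images actually do this while initialising their context):

    #include <stdint.h>

    /* Assumed system register accessors; TF-A generates these per register. */
    uint64_t read_sctlr_el1(void);
    void write_sctlr_el1(uint64_t val);

    #define SCTLR_DSSBS_BIT  (1ULL << 44)  /* SCTLR_ELx.DSSBS is bit 44 */

    /* Explicitly clear DSSBS so the restriction applies regardless of the
     * reset value of the bit. */
    static void restrict_speculative_loads(void)
    {
        write_sctlr_el1(read_sctlr_el1() & ~SCTLR_DSSBS_BIT);
    }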
Change-Id: If25a8396641edb640f7f298b8d3309d5cba3cd79 Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>

# 57bc6424 | 27-Feb-2019 | Antonio Niño Díaz <antonio.ninodiaz@arm.com>
Merge pull request #1829 from antonio-nino-diaz-arm/an/pauth
Add Pointer Authentication (ARMv8.3-PAuth) support to the TF

# 9d93fc2f | 31-Jan-2019 | Antonio Nino Diaz <antonio.ninodiaz@arm.com>
BL2: Enable pointer authentication support
The size increase after enabling options related to ARMv8.3-PAuth is:
+----------------------------+-------+-------+-------+--------+
|                            | text  | bss   | data  | rodata |
+----------------------------+-------+-------+-------+--------+
| CTX_INCLUDE_PAUTH_REGS = 1 | +40   | +0    | +0    | +0     |
|                            | 0.2%  |       |       |        |
+----------------------------+-------+-------+-------+--------+
| ENABLE_PAUTH = 1           | +664  | +0    | +16   | +0     |
|                            | 3.1%  |       | 0.9%  |        |
+----------------------------+-------+-------+-------+--------+
Results calculated with the following build configuration:
make PLAT=fvp SPD=tspd DEBUG=1 \
    SDEI_SUPPORT=1 \
    EL3_EXCEPTION_HANDLING=1 \
    TSP_NS_INTR_ASYNC_PREEMPT=1 \
    CTX_INCLUDE_PAUTH_REGS=1 \
    ENABLE_PAUTH=1
The changes for BL2_AT_EL3 aren't done in this commit.
Change-Id: I8c803b40c7160525a06173bc6cdca21c4505837d Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>

# 9a207532 | 04-Jan-2019 | Antonio Niño Díaz <antonio.ninodiaz@arm.com>
Merge pull request #1726 from antonio-nino-diaz-arm/an/includes
Sanitise includes across codebase

# 09d40e0e | 14-Dec-2018 | Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Sanitise includes across codebase
Enforce full include path for includes. Deprecate old paths.
The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}
The reason for this change is that having a global namespace for includes isn't a good idea. It defeats one of the advantages of having folders and it introduces problems that are sometimes subtle (because you may not know the header you are actually including if there are two of them).
For example, this patch had to be created because two headers had the same name: e0ea0928d5b7 ("Fix gpio includes of mt8173 platform to avoid collision."). More recently, this patch ran into similar problems: 46f9b2c3a282 ("drivers: add tzc380 support").
This problem was introduced in commit 4ecca33988b9 ("Move include and source files to logical locations"). At that time, there weren't too many headers so it wasn't a real issue. However, time has shown that this creates problems.
Platforms that want to preserve the way they include headers may add the removed paths to PLAT_INCLUDES, but this is discouraged.
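For example, the difference in practice looks roughly like this (header paths are illustrative):

    /* Before: bare header names in a single global include namespace. */
    #include <debug.h>
    #include <xlat_tables_v2.h>

    /* After: full path relative to include/, so the origin is unambiguous. */
    #include <common/debug.h>
    #include <lib/xlat_tables/xlat_tables_v2.h>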
Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>

# cf0886e2 | 29-Oct-2018 | Soby Mathew <soby.mathew@arm.com>
Merge pull request #1644 from soby-mathew/sm/pie_proto
Position Independent Executable (PIE) Support

# f1722b69 | 12-Oct-2018 | Soby Mathew <soby.mathew@arm.com>
PIE: Use PC relative adrp/adr for symbol reference
This patch fixes up the AArch64 assembly code to use adrp/adr instructions instead of the ldr instruction for references to symbols. This allows these assembly sequences to be position independent. Note that references to sizes have been replaced with calculation of the size at runtime. This is because a size is a constant value that does not depend on the execution address, and using PC-relative instructions to load it would make it relative to the execution address. Also, we cannot use the `ldr` instruction to load a size, as it generates a dynamic relocation entry which must *not* be fixed up, and it is difficult for a dynamic loader to differentiate which entries need to be skipped.
Change-Id: I8bf4ed5c58a9703629e5498a27624500ef40a836 Signed-off-by: Soby Mathew <soby.mathew@arm.com>

# c7aa7fdf | 26-Feb-2018 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #1263 from soby-mathew/sm/dyn_config
Dynamic Configuration Prototype

# a6f340fe | 09-Jan-2018 | Soby Mathew <soby.mathew@arm.com>
Introduce the new BL handover interface
This patch introduces a new BL handover interface. It essentially allows passing 4 arguments between the different BL stages. Effort has been made so as to be compatible with the previous handover interface. The previous blx_early_platform_setup() platform API is now deprecated and the new blx_early_platform_setup2() variant is introduced. The weak compatibility implementation for the new API is done in the `plat_bl_common.c` file. Some of the new arguments in the new API will be reserved for generic code use when dynamic configuration support is implemented. Otherwise, the other registers are available for platform use.
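A sketch of the shape of the new variant, shown for BL31 (the exact prototype and the u_register_t definition below are assumptions; check the platform API headers for the real ones):

    #include <stdint.h>

    typedef uintptr_t u_register_t;  /* assumed: a register-width integer */

    /*
     * Four opaque register-width arguments are passed through from the
     * previous BL stage; some are reserved for generic use (e.g. dynamic
     * configuration), the rest are platform-defined.
     */
    void bl31_early_platform_setup2(u_register_t arg0, u_register_t arg1,
                                    u_register_t arg2, u_register_t arg3);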
Change-Id: Ifddfe2ea8e32497fe1beb565cac155ad9d50d404 Signed-off-by: Soby Mathew <soby.mathew@arm.com>

# f132b4a0 | 04-May-2017 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #925 from dp-arm/dp/spdx
Use SPDX license identifiers

# 82cb2c1a | 03-May-2017 | dp-arm <dimitris.papastamos@arm.com>
Use SPDX license identifiers
To make software license auditing simpler, use SPDX[0] license identifiers instead of duplicating the license text in every file.
NOTE: Files that have been imported by FreeBSD have not been modified.
[0]: https://spdx.org/
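The resulting header block in a source file looks like this (copyright line illustrative; TF-A uses the BSD-3-Clause identifier):

    /*
     * Copyright (c) 2017, Arm Limited and Contributors. All rights reserved.
     *
     * SPDX-License-Identifier: BSD-3-Clause
     */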
Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a Signed-off-by: dp-arm <dimitris.papastamos@arm.com>

# ed756252 | 06-Apr-2017 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #886 from dp-arm/dp/stack-protector
Add support for GCC stack protection

# 51faada7 | 24-Feb-2017 | Douglas Raillard <douglas.raillard@arm.com>
Add support for GCC stack protection
Introduce new build option ENABLE_STACK_PROTECTOR. It enables compilation of all BL images with one of the GCC -fstack-protector-* options.
A new platform function plat_get_stack_protector_canary() is introduced. It returns a value that is used to initialize the canary for stack corruption detection. Returning a random value will prevent an attacker from predicting the value and greatly increase the effectiveness of the protection.
A message is printed at the ERROR level when a stack corruption is detected.
To be effective, the global data must be stored at an address lower than the base of the stacks. Failure to do so would allow an attacker to overwrite the canary as part of an attack which would void the protection.
FVP implementation of plat_get_stack_protector_canary is weak as there is no real source of entropy on the FVP. It therefore relies on a timer's value, which could be predictable.
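A minimal sketch of such a platform hook, in the spirit of the timer-based fallback described above (the return type, the fixed constant and the accessor are assumptions):

    #include <stdint.h>

    /* Assumed accessor for the Generic Timer physical counter. */
    uint64_t read_cntpct_el0(void);

    #define RANDOM_CANARY_VALUE  0xDF5F39A4D5B3C7E1ULL  /* arbitrary fixed value */

    /*
     * Derive the stack-protector canary from a fixed value mixed with the
     * counter. This is predictable, so platforms with a real entropy source
     * should override it.
     */
    uint64_t plat_get_stack_protector_canary(void)
    {
        return RANDOM_CANARY_VALUE ^ read_cntpct_el0();
    }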
Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06 Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>

# 108e4df7 | 16-Feb-2017 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #834 from douglas-raillard-arm/dr/use_dc_zva_zeroing
Use DC ZVA instruction to zero memory

# 308d359b | 02-Dec-2016 | Douglas Raillard <douglas.raillard@arm.com>
Introduce unified API to zero memory
Introduce zeromem_dczva function on AArch64 that can handle unaligned addresses and make use of DC ZVA instruction to zero a whole block at a time. This zeroing takes place directly in the cache to speed it up without doing external memory access.
Remove the zeromem16 function on AArch64 and replace it with an alias to zeromem. This zeromem16 function is now deprecated.
Remove the 16-byte alignment constraint on __BSS_START__ in firmware-design.md as it is no longer mandatory (it used to comply with zeromem16 requirements).
Change the 16-byte alignment constraints in SP min's linker script to an 8-byte alignment constraint, as the AArch32 zeromem implementation is now more efficient on 8-byte aligned addresses.
Introduce zero_normalmem and zeromem helpers in platform agnostic header that are implemented this way:
* AArch32:
  * zero_normalmem: zero using usual data access
  * zeromem: alias for zero_normalmem
* AArch64:
  * zero_normalmem: zero normal memory using DC ZVA instruction (needs MMU enabled)
  * zeromem: zero using usual data access
Usage guidelines: in most cases, zero_normalmem should be preferred.
There are 2 scenarios where zeromem (or memset) must be used instead:
* Code that must run with MMU disabled (which means all memory is considered device memory for data accesses).
* Code that fills device memory with null bytes.

Optionally, the following rule can be applied if performance is important:
* Code zeroing small areas (few bytes) that are not secrets should use memset to take advantage of compiler optimizations.
Note: Code zeroing security-related critical information should use zero_normalmem/zeromem instead of memset to avoid removal by compilers' optimizations in some cases or misbehaving versions of GCC.
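Putting the guidelines above together, usage might look like this (the prototypes are assumed here; see the helpers' header for the real ones):

    #include <stddef.h>

    /* Assumed prototypes for the helpers introduced by this patch. */
    void zero_normalmem(void *mem, size_t length);
    void zeromem(void *mem, size_t length);

    void clear_buffers(void *bss_base, size_t bss_size,
                       void *dev_buf, size_t dev_len)
    {
        /* Normal memory with the MMU enabled: fast DC ZVA path. */
        zero_normalmem(bss_base, bss_size);

        /* Device memory (or MMU off): plain data accesses only. */
        zeromem(dev_buf, dev_len);
    }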
Fixes ARM-software/tf-issues#408
Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82 Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>

# 1b5fa6ef | 12-Dec-2016 | danh-arm <dan.handley@arm.com>
Merge pull request #774 from jeenu-arm/no-return-macro
Define and use no_ret macro where no return is expected