# 7303319b | 08-Nov-2025 | Chris Kay <chris.kay@arm.com>

Merge changes from topic "NUMA_AWARE_PER_CPU" into integration
* changes:
  docs(maintainers): add per-cpu framework into maintainers.rst
  feat(per-cpu): add documentation for per-cpu framework
  feat(rdv3): enable numa aware per-cpu for RD-V3-Cfg2
  feat(per-cpu): migrate amu_ctx to per-cpu framework
  feat(per-cpu): migrate spm_core_context to per-cpu framework
  feat(per-cpu): migrate psci_ns_context to per-cpu framework
  feat(per-cpu): migrate psci_cpu_pd_nodes to per-cpu framework
  feat(per-cpu): migrate rmm_context to per-cpu framework
  feat(per-cpu): integrate per-cpu framework into BL31/BL32
  feat(per-cpu): introduce framework accessors/definers
  feat(per-cpu): introduce linker changes for NUMA aware per-cpu framework
  docs(changelog): add scope for per-cpu framework

# 98859b99 | 29-Jan-2025 | Sammit Joshi <sammit.joshi@arm.com>

feat(per-cpu): integrate per-cpu framework into BL31/BL32
Integrate per-cpu support into BL31/BL32 by extending the following areas:
- Zero-initialization: Treats per-cpu sections like .bss and clears them during early C runtime initialization. For platforms that enable NUMA_AWARE_PER_CPU, invokes a platform hook to zero-initialize node-specific per-cpu regions.
- Cache maintenance: Extends the BL31 exit path to clean dcache lines covering the per-cpu region, ensuring data written by the primary core is visible to secondary cores.
- tpidr_el3 setup: Initializes tpidr_el3 with the base address of the current CPU's per-cpu section. This allows the per-cpu framework to resolve local CPU accesses efficiently.
The percpu_data object is currently stored in tpidr_el3. Since the per-cpu framework will use tpidr_el3 for this-cpu access, percpu_data must be migrated to avoid conflict. This commit moves percpu_data to the per-cpu framework.
Signed-off-by: Sammit Joshi <sammit.joshi@arm.com>
Signed-off-by: Rohit Mathew <rohit.mathew@arm.com>
Change-Id: Iff0c2e1f8c0ebd25c4bb0b09bfe15dd4fbe20561
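
As an aside, the tpidr_el3-based lookup described above boils down to something like the following minimal C sketch. The helper names (per_cpu_base, this_cpu_ptr) are illustrative assumptions, not the framework's actual accessors/definers.

#include <stddef.h>
#include <stdint.h>

/* tpidr_el3 is programmed at init to point at this CPU's per-cpu section. */
static inline uint64_t read_tpidr_el3(void)
{
	uint64_t val;

	__asm__ volatile("mrs %0, tpidr_el3" : "=r"(val));
	return val;
}

/* Base address of the calling CPU's per-cpu section. */
static inline uintptr_t per_cpu_base(void)
{
	return (uintptr_t)read_tpidr_el3();
}

/* Resolve a per-cpu object from its offset within the per-cpu section. */
static inline void *this_cpu_ptr(size_t offset)
{
	return (void *)(per_cpu_base() + offset);
}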

# 7832483e | 30-Oct-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes I6e4cd8b5,Id5086b3c,I070d62bb into integration
* changes:
  fix(el3-runtime): allow RNDR access at EL3 even when RNG_TRAP is enabled
  fix(smccc): don't panic on a feature availability call with FEAT_RNG_TRAP
  fix(bl1): use per-world context correctly

# 45218c64 | 22-Oct-2025 | Boyan Karatotev <boyan.karatotev@arm.com>

fix(el3-runtime): allow RNDR access at EL3 even when RNG_TRAP is enabled
RNG_TRAP will also trap RNDR accesses at EL3, which we don't want as we have no way to handle nested exceptions. Clear the trap in the root context to always allow access at EL3.
Change-Id: I6e4cd8b5a7730f6ffbeed912d9301877d271110d
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
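
For illustration, the kind of EL3-side RNDR read that motivates keeping the FEAT_RNG_TRAP trap bit (SCR_EL3.TRNDR) clear in the root context looks roughly like the sketch below; it uses the generic system-register encoding for RNDR and is not TF-A's actual code.

#include <stdbool.h>
#include <stdint.h>

/*
 * Read RNDR (generic encoding s3_3_c2_c4_0) at EL3. With FEAT_RNG_TRAP
 * implemented, this read is trapped unless SCR_EL3.TRNDR is 0, which is why
 * the root context keeps that bit clear. On failure the architecture returns
 * 0 and sets PSTATE.Z.
 */
static inline bool read_rndr(uint64_t *out)
{
	uint64_t val, flags;

	__asm__ volatile(
		"mrs %0, s3_3_c2_c4_0\n"
		"mrs %1, nzcv\n"
		: "=r"(val), "=r"(flags) :: "cc");

	*out = val;
	return (flags & (UINT64_C(1) << 30)) == 0U;	/* Z clear => success */
}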

# a873d26f | 22-Oct-2025 | Boyan Karatotev <boyan.karatotev@arm.com>

fix(bl1): use per-world context correctly
Currently, the configuration with BL1 and BL2 at SEL1 will transition via el3_exit which will restore per-world context. However, that context is never written to and so zeroes end up in registers, which is not necessarily correct.
This patch gets BL1 to call cm_manage_extensions_per_world() whenever BL2 runs in a lower EL. This allows the per-world registers to have the reset values we intend. An accompanying call to cm_manage_extensions_el3() is also added for completeness.
Doing this shows a small deficiency in cptr_el3: the TFP and TCPAC bits change a lot. This patch makes them consistent by always setting TCPAC and TFP to 0, which unconditionally enables access to CPTR_EL2 and FPCR by default, as they are always accessible anyway. Other places that manipulate the TFP bit are removed.
A nice side effect of all of this is that we're now in a position to enable and use any architectural extension in BL2.
Change-Id: I070d62bbf8e9d9b472caf7e2c931c303523be308
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
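
A rough sketch of the CPTR_EL3 policy described above, assuming the architectural bit positions TCPAC (bit 31) and TFP (bit 10); the helper names are hypothetical, not TF-A's.

#include <stdint.h>

#define CPTR_EL3_TCPAC_BIT	(UINT64_C(1) << 31)	/* traps CPACR_EL1/CPTR_EL2 accesses */
#define CPTR_EL3_TFP_BIT	(UINT64_C(1) << 10)	/* traps FP/SIMD (incl. FPCR) accesses */

static inline uint64_t read_cptr_el3(void)
{
	uint64_t val;

	__asm__ volatile("mrs %0, cptr_el3" : "=r"(val));
	return val;
}

static inline void write_cptr_el3(uint64_t val)
{
	__asm__ volatile("msr cptr_el3, %0" :: "r"(val));
}

/* Keep TCPAC and TFP permanently clear instead of toggling them at runtime. */
static inline void cptr_el3_allow_fp_and_cpacr(void)
{
	write_cptr_el3(read_cptr_el3() & ~(CPTR_EL3_TCPAC_BIT | CPTR_EL3_TFP_BIT));
	__asm__ volatile("isb");
}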

# 46aff6fc | 26-Sep-2025 | Mark Dykes <mark.dykes@arm.com>

Merge "refactor(el3-runtime): move context security states to context.h" into integration

# 34a22a02 | 05-Aug-2025 | Boyan Karatotev <boyan.karatotev@arm.com>

refactor(el3-runtime): move context security states to context.h
The three security states (S, NS, RL) are architecturally quite consistent: anything that uses them has the same numerical assignments (0, 1, 2), and they are quite convenient for indexing. However, we're not as consistent in TF-A, and these states are defined in a few places. Since cpu_data has a dependency on the context management library, use its security state convention in a few more places and take this responsibility away from cpu_data.
Change-Id: Iec73b2be2eef91975554767557de72424d0031f1
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
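
To make the indexing point concrete, a small sketch of the 0/1/2 convention the commit refers to (the names here are illustrative, not the definitions that actually live in context.h):

/* Security states and their numerical assignments, per the commit above. */
enum security_state_sketch {
	STATE_SECURE		= 0,
	STATE_NON_SECURE	= 1,
	STATE_REALM		= 2,
	STATE_COUNT		= 3,
};

/* One slot per security state, indexed directly by the enum value. */
static void *cpu_context_ptrs[STATE_COUNT];

static inline void *get_context_sketch(enum security_state_sketch state)
{
	return cpu_context_ptrs[state];
}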

# dfdb73f7 | 16-Sep-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "bk/no_blx_setup" into integration
* changes:
  fix: replace stray BL2_AT_EL3 with RESET_TO_BL2
  refactor(aarch64): move BL31 specific setup out of the PSCI entrypoint
  refactor: unify blx_setup() and blx_main()
  fix(bl2): unify the BL2 EL3 and RME entrypoints

# d158d425 | 13-Aug-2025 | Boyan Karatotev <boyan.karatotev@arm.com>

refactor: unify blx_setup() and blx_main()
All BLs have a bl_setup() for things that need to happen early, a fall back into assembly, and then bl_main() for the main functionality. This was necessary in order to fiddle with PAuth-related things that tend to break C calls. Since then, PAuth's enablement has seen a lot of refactoring, and this is now worked around cleanly, so the distinction can be removed. The only tradeoff is that this requires PAuth not to be used for the top-level main function.
There are two main benefits to doing this. First, the code is easier to understand, as it is all together and the entrypoint is smaller. Second, the compiler gets to see more of the code and can apply optimisations (importantly, LTO).
Change-Id: Iddb93551115a2048988017547eb7b8db441dbd37
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
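
A sketch of the shape this refactor produces (the function names are generic placeholders, not the exact TF-A symbols): the early setup is simply called at the top of the unified main, so the compiler sees both and can optimise across them, while the main function itself avoids PAuth since keys are programmed only after it starts.

/* Before: asm entrypoint -> blx_setup() -> back to asm -> blx_main(). */
/* After:  asm entrypoint -> blx_main(), which calls the setup itself. */

static void bl_setup_sketch(void)
{
	/* early platform/runtime initialisation formerly in blx_setup() */
}

/* Built without PAuth/branch protection, as keys are not yet programmed. */
void bl_main_sketch(void)
{
	bl_setup_sketch();

	/* ... the rest of the original blx_main() functionality ... */
}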

# e493b522 | 19-Jun-2025 | Manish Pandey <manish.pandey2@arm.com>

Merge "perf(bl31): convert cpu_data fetching to C" into integration

# d43b2ea6 | 18-Mar-2025 | Boyan Karatotev <boyan.karatotev@arm.com>

perf(bl31): convert cpu_data fetching to C
The assembly routines are opaque to the compiler and it can't inline them. There is also no requirement for them to be called without a stack; each of their calls has a stack available. So convert them to C so that the compiler can do its inlining magic.
On AArch32 we need to be able to call _cpu_data from the entrypoint, so it has to stay as a slight exception.
We can also straighten out the type of the cpu_ops_ptr member so we don't have to cast it everywhere.
Change-Id: I9c2939a955b396edf26b99ef36318eebeaab13e6
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
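
A hedged illustration of the conversion: a static inline helper in a header lets the compiler fold the tpidr_el3 read into callers, where the old assembly routine forced a branch. The struct and helper below are simplified stand-ins for the real cpu_data definitions.

#include <stdint.h>

struct cpu_ops;

struct cpu_data_sketch {
	const struct cpu_ops *cpu_ops_ptr;	/* now a proper pointer type, no casts needed */
	/* ... other per-CPU fields ... */
};

/* On AArch64, tpidr_el3 holds this core's cpu_data pointer. */
static inline struct cpu_data_sketch *this_cpu_data(void)
{
	uint64_t ptr;

	__asm__ volatile("mrs %0, tpidr_el3" : "=r"(ptr));
	return (struct cpu_data_sketch *)ptr;
}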

# 9e0c318d | 28-Apr-2025 | Govindraj Raja <govindraj.raja@arm.com>

Merge "feat(cpufeat): add support for FEAT_PAUTH_LR" into integration

# 025b1b81 | 11-Mar-2025 | John Powell <john.powell@arm.com>

feat(cpufeat): add support for FEAT_PAUTH_LR
This patch enables FEAT_PAUTH_LR at EL3 on systems that support it when the new ENABLE_FEAT_PAUTH_LR flag is set.
Currently, PAUTH_LR is only supported by the Arm Clang compiler and not by GCC.
Change-Id: I7db1e34b661ed95cad75850b62878ac5d98466ea
Signed-off-by: John Powell <john.powell@arm.com>

# ee656609 | 16-Apr-2025 | André Przywara <andre.przywara@arm.com>

Merge changes Id942c20c,Idd286bea,I8917a26e,Iec8c3477,If3c25dcd, ... into integration
* changes:
  feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED
  perf(cpufeat): centralise PAuth key saving
  refactor(cpufeat): convert FEAT_PAuth setup to C
  refactor(cpufeat): prepare FEAT_PAuth for FEATURE_DETECTION
  chore(cpufeat): remove PAuth presence checks
  feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED

# 10ecd580 | 26-Mar-2025 | Boyan Karatotev <boyan.karatotev@arm.com>

feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED
Introduce the is_feat_bti_{supported, present}() helpers and replace checks for ENABLE_BTI with them. Also factor the setting of SCTLR_EL3.BT out of the PAuth enablement and place it in the respective entrypoints where we initialise SCTLR_EL3. This makes PAuth self-contained and SCTLR_EL3 initialisation centralised.
Change-Id: I0c0657ff1e78a9652cd2cf1603478283dc01f17b
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
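
A sketch of the FEAT_STATE_CHECKED helper pattern referred to above, assuming ID_AA64PFR1_EL1.BT occupies bits [3:0] and the usual 0/1/2 (disabled/always/check) meaning of ENABLE_BTI; the macro-free form here is illustrative, not the real implementation.

#include <stdbool.h>
#include <stdint.h>

#ifndef ENABLE_BTI
#define ENABLE_BTI	2	/* assume FEAT_STATE_CHECKED for this sketch */
#endif

static inline uint64_t read_id_aa64pfr1_el1(void)
{
	uint64_t val;

	__asm__ volatile("mrs %0, id_aa64pfr1_el1" : "=r"(val));
	return val;
}

/* Hardware check: a non-zero BT field means BTI is implemented. */
static inline bool is_feat_bti_present(void)
{
	return (read_id_aa64pfr1_el1() & UINT64_C(0xf)) != 0U;
}

/* Build-time policy combined with the runtime check. */
static inline bool is_feat_bti_supported(void)
{
	if (ENABLE_BTI == 0)
		return false;
	if (ENABLE_BTI == 1)
		return true;
	return is_feat_bti_present();
}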

# 2e0354f5 | 25-Feb-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes I3d950e72,Id315a8fe,Ib62e6e9b,I1d0475b2 into integration
* changes:
  perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
  perf(psci): get PMF timestamps with no cache flushes if possible
  perf(amu): greatly simplify AMU context management
  perf(mpmm): greatly simplify MPMM enablement

# 0a580b51 | 15-Nov-2024 | Boyan Karatotev <boyan.karatotev@arm.com>

perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
SVE and SME aren't enabled symmetrically for all worlds, but EL3 needs to context switch them nonetheless. Previously, this had to happen by writing the enable bits just before reading/writing the relevant context. But since the introduction of the root context, this need not be the case. We can have these enables always be present for EL3 and save on some work (and ISBs!) on every context switch.
We can also hoist ZCR_EL3 to a never-changing register, as we set its value to be identical for every world, which happens to be the one we want for EL3 too.
Change-Id: I3d950e72049a298008205ba32f230d5a5c02f8b0
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

# a8a5d39d | 24-Feb-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "bk/errata_speed" into integration
* changes:
  refactor(cpus): declare runtime errata correctly
  perf(cpus): make reset errata do fewer branches
  perf(cpus): inline the init_cpu_data_ptr function
  perf(cpus): inline the reset function
  perf(cpus): inline the cpu_get_rev_var call
  perf(cpus): inline cpu_rev_var checks
  refactor(cpus): register DSU errata with the errata framework's wrappers
  refactor(cpus): convert checker functions to standard helpers
  refactor(cpus): convert the Cortex-A65 to use the errata framework
  fix(cpus): declare reset errata correctly

# b07c317f | 19-Nov-2024 | Boyan Karatotev <boyan.karatotev@arm.com>

perf(cpus): inline the init_cpu_data_ptr function
Similar to the reset function inlining, inline this function too to avoid a costly branch at no extra cost.
Change-Id: I54cc399e570e9d0f373ae13c7224d32dbdfae1e5
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

# 0d020822 | 19-Nov-2024 | Boyan Karatotev <boyan.karatotev@arm.com>

perf(cpus): inline the reset function
Similar to the cpu_rev_var and cpu_get_rev_var functions, inline the call_reset_handler handler. This way we skip the costly branch at no extra cost, as this is the only place where it is called.
While we're at it, drop the options for CPU_NO_RESET_FUNC. The only CPUs that need it are virtual CPUs, which can spare the tiny bit of performance lost. The rest are real cores, which can save on the check for zero.
Now is a good time to put the assert for a missing CPU in the get_cpu_ops_ptr function so that it's a bit better encapsulated.
Change-Id: Ia7c3dcd13b75e5d7c8bafad4698994ea65f42406
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

# b40bc36c | 07-Nov-2024 | Yann Gautier <yann.gautier@st.com>

Merge "build(bl31): support separated memory for RW DATA" into integration

# 86acbbe2 | 26-Aug-2022 | Ye Li <ye.li@nxp.com>

build(bl31): support separated memory for RW DATA
Update the linker file and init code to allow using a separate memory region for RW DATA. The init code will copy the RW DATA from the image to the linked address.
On some NXP platforms, after the BL31 image has been verified, the BL31 image space will be locked/protected as read-only, so the RW DATA and NOBITS sections need to be moved out of the BL31 image.
Signed-off-by: Ye Li <ye.li@nxp.com>
Reviewed-by: Peng Fan <peng.fan@nxp.com>
Signed-off-by: Jacky Bai <ping.bai@nxp.com>
Change-Id: I361d9a715890961bf30790a3325f8085a40c0c39
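
A hedged sketch of the init-time copy described above: the linker keeps the load address of RW DATA inside the (soon to be read-only) image and its run address in the separate RW region, and early init copies between the two. The symbol names are hypothetical placeholders, not the ones the patch defines.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical linker-provided symbols. */
extern uint8_t __DATA_ROM_START__[];	/* load address, inside the BL31 image */
extern uint8_t __DATA_RAM_START__[];	/* run (linked) address, in separate RW memory */
extern uint8_t __DATA_RAM_END__[];

/* Copy RW DATA from the image to its linked address before C code relies on it. */
static void copy_rw_data_sketch(void)
{
	size_t size = (size_t)(__DATA_RAM_END__ - __DATA_RAM_START__);
	const uint8_t *src = __DATA_ROM_START__;
	uint8_t *dst = __DATA_RAM_START__;

	for (size_t i = 0U; i < size; i++) {
		dst[i] = src[i];
	}
}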

# 17ef5da7 | 18-Oct-2024 | Manish Pandey <manish.pandey2@arm.com>

Merge "feat(context-mgmt): introduce EL3/root context" into integration

# 40e5f7a5 | 08-Aug-2023 | Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>

feat(context-mgmt): introduce EL3/root context
* This patch adds a root context procedure to restore/configure the registers that are of importance during EL3 execution.
* EL3/Root context is a simple restore operation that overwrites the following bits (MDCR_EL3.SDD, SCR_EL3.{EA, SIF}, PMCR_EL0.DP, PSTATE.DIT) while execution is in EL3.
* It ensures the EL3 world maintains its own settings, distinct from the other worlds (NS/Realm/SWd). With this in place, the EL3 system register settings are no longer influenced by the settings of incoming worlds. This allows the EL3/Root world to access features for its own execution at EL3 (e.g. PAuth).
* It should be invoked at the cold and warm boot entry paths and also at all the possible exception handlers routing to EL3 at runtime. The cold and warm boot paths are handled by including the setup_el3_context function in the "el3_entrypoint_common" macro, which is invoked in both entry paths.
* At runtime, the EL3 context is set up while we prepare to enter EL3, via the "prepare_el3_entry" routine.
Change-Id: I5c090978c54a53bc1c119d1bc5fa77cd8813cdc2
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>

# 4bcf5b84 | 29-Jul-2024 | Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "jc/refact_el1_ctx" into integration
* changes:
  refactor(cm): convert el1-ctx assembly offset entries to c structure
  feat(cm): add explicit context entries for ERRATA_SPECULATIVE_AT