7303319b | 08-Nov-2025 | Chris Kay <chris.kay@arm.com>
Merge changes from topic "NUMA_AWARE_PER_CPU" into integration
* changes:
  docs(maintainers): add per-cpu framework into maintainers.rst
  feat(per-cpu): add documentation for per-cpu framework
  feat(rdv3): enable numa aware per-cpu for RD-V3-Cfg2
  feat(per-cpu): migrate amu_ctx to per-cpu framework
  feat(per-cpu): migrate spm_core_context to per-cpu framework
  feat(per-cpu): migrate psci_ns_context to per-cpu framework
  feat(per-cpu): migrate psci_cpu_pd_nodes to per-cpu framework
  feat(per-cpu): migrate rmm_context to per-cpu framework
  feat(per-cpu): integrate per-cpu framework into BL31/BL32
  feat(per-cpu): introduce framework accessors/definers
  feat(per-cpu): introduce linker changes for NUMA aware per-cpu framework
  docs(changelog): add scope for per-cpu framework

98859b99 | 29-Jan-2025 | Sammit Joshi <sammit.joshi@arm.com>
feat(per-cpu): integrate per-cpu framework into BL31/BL32
Integrate per-cpu support into BL31/BL32 by extending the following areas:

- Zero-initialization: treats per-cpu sections like .bss and clears them during early C runtime initialization. For platforms that enable NUMA_AWARE_PER_CPU, invokes a platform hook to zero-initialize node-specific per-cpu regions.
- Cache maintenance: extends the BL31 exit path to clean dcache lines covering the per-cpu region, ensuring data written by the primary core is visible to secondary cores.
- tpidr_el3 setup: initializes tpidr_el3 with the base address of the current CPU's per-cpu section. This allows the per-cpu framework to resolve local CPU accesses efficiently.

The percpu_data object is currently stored in tpidr_el3. Since the per-cpu framework will use tpidr_el3 for this-CPU access, percpu_data must be migrated to avoid a conflict. This commit moves percpu_data to the per-cpu framework.
Signed-off-by: Sammit Joshi <sammit.joshi@arm.com>
Signed-off-by: Rohit Mathew <rohit.mathew@arm.com>
Change-Id: Iff0c2e1f8c0ebd25c4bb0b09bfe15dd4fbe20561
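By way of illustration, here is a minimal C sketch of the this-CPU access pattern the commit describes; PERCPU_DEFINE, this_cpu_ptr and the __PERCPU_START__ linker symbol are hypothetical names, not the framework's actual API:

    #include <stdint.h>

    extern char __PERCPU_START__[];  /* assumed linker symbol: start of .percpu */

    /* Place a variable in the per-cpu template section (illustrative). */
    #define PERCPU_DEFINE(type, name) \
        __attribute__((section(".percpu"))) type name

    static inline uintptr_t percpu_base(void)
    {
        uint64_t base;

        /* Early boot programmed tpidr_el3 with this CPU's section base. */
        __asm__ volatile("mrs %0, tpidr_el3" : "=r"(base));
        return (uintptr_t)base;
    }

    /* Resolve a per-cpu variable: this CPU's base plus the link-time offset. */
    #define this_cpu_ptr(var) \
        ((__typeof__(&(var)))(percpu_base() + \
            ((uintptr_t)&(var) - (uintptr_t)__PERCPU_START__)))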

c3e5f6b9 | 22-Oct-2025 | Manish Pandey <manish.pandey2@arm.com>
Merge changes from topic "bk/simpler_panic" into integration
* changes:
  fix(aarch64): do not print EL1 registers on EL3 panic
  refactor(el3-runtime): streamline cpu_data assembly offsets using the cpu_ops template

4779becd | 06-Aug-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
refactor(el3-runtime): streamline cpu_data assembly offsets using the cpu_ops template
The cpu_data structure, just like cpu_ops, is a collection of disparate data that must be accessible from both C and assembly. Achieving this is tricky as there is no way to export structure offsets from C directly, so they must be manually recreated with `#define`s and asserts. However, the cpu_data structure is quite old and its assembly offsets are a patchwork of additions that is extremely difficult to reason about and modify. In fact, certain currently unused builds with ENABLE_RUNTIME_INSTRUMENTATION=1 fail to build.
To untangle this, convert the assembly offsets to the pattern used for the cpu_ops structure. That is, first define the sizes of every member, as generically as possible, and then chain their offsets one after the other. To make sure this is always correct, add a CASSERT for the offset of every member. This makes it easy to modify the structure and fixes the build failures.
Change-Id: I61aeb55e9c494896663a3c719c10e3c072f56349
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
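A condensed sketch of that pattern, assuming a two-member structure (the real cpu_data has many more fields and uses TF-A's own CASSERT macro):

    #include <stddef.h>

    #define CASSERT(cond, msg) typedef char msg[(cond) ? 1 : -1]

    typedef struct cpu_data {
        void *cpu_ops_ptr;
        unsigned int psci_svc_cpu_data;
    } cpu_data_t;

    /* 1. Define every member's size, as generically as possible. */
    #define CPU_DATA_CPU_OPS_PTR_SIZE  8
    #define CPU_DATA_PSCI_SVC_SIZE     4

    /* 2. Chain the offsets one after the other. */
    #define CPU_DATA_CPU_OPS_PTR  0
    #define CPU_DATA_PSCI_SVC     (CPU_DATA_CPU_OPS_PTR + CPU_DATA_CPU_OPS_PTR_SIZE)

    /* 3. Assert every offset against the C structure so drift is a build error. */
    CASSERT(CPU_DATA_CPU_OPS_PTR == offsetof(cpu_data_t, cpu_ops_ptr),
        assert_cpu_ops_ptr_offset_mismatch);
    CASSERT(CPU_DATA_PSCI_SVC == offsetof(cpu_data_t, psci_svc_cpu_data),
        assert_psci_svc_offset_mismatch);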

834f2d55 | 03-Oct-2025 | Manish Pandey <manish.pandey2@arm.com>
Merge "fix(cm): remove unused macro" into integration

c81b9cb9 | 04-Jul-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
fix(cm): remove unused macro
It is never referenced.
Change-Id: I538b1f3d8660426faf5bafa68ecda2d637b0bc50
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

ee656609 | 16-Apr-2025 | André Przywara <andre.przywara@arm.com>
Merge changes Id942c20c,Idd286bea,I8917a26e,Iec8c3477,If3c25dcd, ... into integration
* changes:
  feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED
  perf(cpufeat): centralise PAuth key saving
  refactor(cpufeat): convert FEAT_PAuth setup to C
  refactor(cpufeat): prepare FEAT_PAuth for FEATURE_DETECTION
  chore(cpufeat): remove PAuth presence checks
  feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED

8d9f5f25 | 02-Apr-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED
FEAT_PAuth is the second-to-last feature to be a boolean choice - it's either unconditionally compiled in and must be present in hardware, or it's not compiled in. FEAT_PAuth is architected to be backwards compatible - a subset of the branch guarding instructions (pacia/autia) execute as NOPs when PAuth is not present. That subset is used with `-mbranch-protection=standard` and -march pre-8.3. This patch adds the necessary logic to also check accesses of the non-backwards-compatible registers and allow a fully checked implementation.
Note that checked support requires -march to be pre-8.3, as otherwise the compiler will include branch protection instructions that are not NOPs without PAuth (e.g. retaa), which cannot be checked.
Change-Id: Id942c20cae9d15d25b3d72b8161333642574ddaa
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
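A hedged sketch of the presence check this enables (field positions per the Arm ARM; the helper name is illustrative, and the newer QARMA3 fields in ID_AA64ISAR2_EL1 are omitted for brevity):

    #include <stdbool.h>
    #include <stdint.h>

    static inline uint64_t read_id_aa64isar1_el1(void)
    {
        uint64_t v;

        __asm__ volatile("mrs %0, id_aa64isar1_el1" : "=r"(v));
        return v;
    }

    static bool is_feat_pauth_present(void)
    {
        uint64_t isar1 = read_id_aa64isar1_el1();

        /* APA (bits [7:4]) or API (bits [11:8]) non-zero => address auth. */
        return (((isar1 >> 4) & 0xfULL) | ((isar1 >> 8) & 0xfULL)) != 0ULL;
    }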

51997e3d | 02-Apr-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
perf(cpufeat): centralise PAuth key saving
prepare_el3_entry() is meant to be the one-stop shop for all the context we must fiddle with to enter EL3 proper. However, PAuth is the one exception, happening right after. Absorb it into prepare_el3_entry(), handling the BL1/BL31 difference.
This is also a good time to move the key saving into the enable function, again to centralise. With this it becomes apparent that saving keys just before CPU_SUSPEND is redundant, as they will be reinitialised when the core wakes up.
Note that the key loading, now in save_gp_pmcr_pauth_regs, does not end in an isb. The effects of the key change are not needed until the isb in the caller, so an isb here is unnecessary.
Change-Id: Idd286bea91140c106ab4c933c5c44b0bc2050ca2
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
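An illustrative sketch of programming a key from the enable path with no trailing isb, as described; the struct and function names are made up, and the assembler must accept the v8.3 key registers (e.g. build with -march=armv8.3-a):

    #include <stdint.h>

    struct pauth_key {
        uint64_t lo;
        uint64_t hi;
    };

    static void pauth_program_apiakey(const struct pauth_key *key)
    {
        __asm__ volatile(
            "msr apiakeylo_el1, %0\n\t"
            "msr apiakeyhi_el1, %1"
            : : "r"(key->lo), "r"(key->hi));
        /* No isb: the caller's isb synchronises before the key is used. */
    }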

60a082ca | 03-Apr-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge "fix(build): enable fp during fp save/restore" into integration

5141de14 | 16-Jan-2025 | Per Larsen <perlarsen@google.com>
fix(build): enable fp during fp save/restore
Newer compilers such as clang/LLVM 19 flag uses of floating point instructions when the architecture does not allow for it. We can temporarily enable the use of floating point operations where it is safe and necessary for the build to succeed.
Change-Id: I1a832f846915c35792684906c94aef81c1f72d63
Signed-off-by: Andrei Homescu <ahomescu@google.com>
Signed-off-by: Per Larsen <perlarsen@google.com>

2e0354f5 | 25-Feb-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes I3d950e72,Id315a8fe,Ib62e6e9b,I1d0475b2 into integration
* changes:
  perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
  perf(psci): get PMF timestamps with no cache flushes if possible
  perf(amu): greatly simplify AMU context management
  perf(mpmm): greatly simplify MPMM enablement

0a580b51 | 15-Nov-2024 | Boyan Karatotev <boyan.karatotev@arm.com>
perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
SVE and SME aren't enabled symmetrically for all worlds, but EL3 needs to context switch them nonetheless. Previously, this had to happen by writing the enable bits just before reading/writing the relevant context. But since the introduction of root context, this need not be the case. We can have these enables always be present for EL3 and save on some work (and ISBs!) on every context switch.
We can also hoist ZCR_EL3 to a never-changing register, as we set its value to be identical for every world, which happens to be the one we want for EL3 too.
Change-Id: I3d950e72049a298008205ba32f230d5a5c02f8b0
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
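A sketch of the resulting one-time setup (assuming an assembler with SVE support; the LEN value and helper name are illustrative):

    #include <stdint.h>

    #define ZCR_EL3_LEN_MAX 0xfULL /* request the widest supported vector length */

    static void sve_init_el3_once(void)
    {
        /* Identical for every world, so write it once at EL3 init... */
        __asm__ volatile("msr zcr_el3, %0" : : "r"(ZCR_EL3_LEN_MAX));
        /* ...and a single isb here replaces the per-context-switch ones. */
        __asm__ volatile("isb");
    }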

74dd541f | 20-Feb-2025 | Olivier Deprez <olivier.deprez@arm.com>
Merge "fix(simd): fix base register in fpregs_context_*" into integration

09ada2f8 | 14-Dec-2024 | Andrei Homescu <ahomescu@google.com>
fix(simd): fix base register in fpregs_context_*
The fpregs_state_* macros require the base register to point to the start of the simd_regs_t structure. The fpregs_context_* functions were passing the address incorrectly shifted by 512 bytes.
Signed-off-by: Andrei Homescu <ahomescu@google.com>
Change-Id: I757a26f8910c2ab648116e001e06baa3deb2eec4
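A C analogue of the bug class being fixed (the types and asm helper are illustrative; the real code is assembly):

    #include <stdint.h>

    typedef struct simd_regs {
        uint64_t q[64];   /* Q0-Q31: 32 x 16 bytes = 512 bytes */
        uint64_t fpsr;
        uint64_t fpcr;
    } simd_regs_t;

    void fpregs_state_save(simd_regs_t *base); /* assumed asm routine */

    void save_simd(simd_regs_t *ctx)
    {
        /* Wrong: an address already advanced past the 512-byte vector area. */
        /* fpregs_state_save((simd_regs_t *)((char *)ctx + 512)); */

        /* Right: the macros expect the start of simd_regs_t. */
        fpregs_state_save(ctx);
    }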

b41b9997 | 19-Dec-2024 | Manish Pandey <manish.pandey2@arm.com>
Merge changes from topic "bk/smccc_feature" into integration
* changes:
  fix(trbe): add a tsb before context switching
  fix(spe): add a psb before updating context and remove context saving

73d98e37 | 02-Dec-2024 | Boyan Karatotev <boyan.karatotev@arm.com>
fix(trbe): add a tsb before context switching
Just like for SPE, we need to synchronize TRBE samples before we change the context to ensure everything goes where it was intended to. If that is not done, the in-flight entries might use any piece of now incorrect context as there are no implicit ordering requirements.
Prior to root context, the buffer drain hooks would have done that. But now that must happen much earlier. So add a tsb to prepare_el3_entry as well.
Annoyingly, the barrier can be reordered relative to other instructions by default (rule RCKVWP). So add an isb after the psb/tsb to ensure they are ordered, at least as far as context is concerned.
Then, drop the buffer draining hooks. Everything they need to do is already done by now. There's a notable difference in that there are no dsb-s now. Since EL3 does not access the buffers or the feature specific context, we don't need to wait for them to finish.
Finally, drop a stray isb in the context saving macro. It is now absorbed into root context, but was missed.
Change-Id: I30797a40ac7f91d0bb71ad271a1597e85092ccd5
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
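The barrier sequence described above, sketched as it might sit in prepare_el3_entry (hint #18 is the always-assemblable encoding of tsb csync):

    static inline void trace_sync_before_ctx_mutation(void)
    {
        /* Push in-flight trace entries out under the old context... */
        __asm__ volatile("hint #18"); /* tsb csync */
        /* ...and order the barrier against the context writes that follow. */
        __asm__ volatile("isb");
    }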

f8088733 | 21-Nov-2024 | Boyan Karatotev <boyan.karatotev@arm.com>
fix(spe): add a psb before updating context and remove context saving
In the chapter about FEAT_SPE (D16.4 specifically) it is stated that "Sampling is always disabled at EL3". That means that disabling sampling (writing PMBLIMITR_EL1.E to 0) is redundant and can be removed. The only reason we save/restore SPE context is because of that disable, so those can be removed too.
There's the issue of draining the profiling buffer though. No new samples will have been generated since entering EL3. However, old samples might still be in-flight. Unless synchronised by a psb csync, those might be affected by our extensive context mutation. Adding a psb in prepare_el3_entry should cater for that. Note that prior to the introduction of root context this was not a problem as context remained unchanged and the hooks took care of the rest.
Then, the only time we care about the buffer actually making it to memory is when we exit coherency. On HW_ASSISTED_COHERENCY systems we don't have to do anything; it is handled for us. Systems without it need a dsb to wait for the writes to complete. There should already be one in each CPU's powerdown hook, which should suffice.
While on the topic of barriers, the esb barrier is no longer used. Remove it.
Change-Id: I9736fc7d109702c63e7d403dc9e2a4272828afb2
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
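A sketch of the two remaining touch points under the assumptions above (hint #17 is the always-assemblable encoding of psb csync; the dsb choice mirrors a typical powerdown hook):

    static inline void spe_sync_on_el3_entry(void)
    {
        /* Synchronise in-flight samples before EL3 mutates the context. */
        __asm__ volatile("hint #17"); /* psb csync */
    }

    static inline void spe_drain_on_coherency_exit(void)
    {
        /* Without HW_ASSISTED_COHERENCY, wait for buffer writes to land. */
        __asm__ volatile("dsb sy");
    }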

17ef5da7 | 18-Oct-2024 | Manish Pandey <manish.pandey2@arm.com>
Merge "feat(context-mgmt): introduce EL3/root context" into integration

40e5f7a5 | 08-Aug-2023 | Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
feat(context-mgmt): introduce EL3/root context
* This patch adds a root context procedure to restore/configure the registers which are of importance during EL3 execution.
* EL3/Root context is a simple restore operation that overwrites the following bits while execution is in EL3: MDCR_EL3.SDD, SCR_EL3.{EA, SIF}, PMCR_EL0.DP and PSTATE.DIT.
* It ensures the EL3 world maintains its own settings, distinct from those of the other worlds (NS/Realm/SWd). With this in place, the EL3 system register settings are no longer influenced by the settings of incoming worlds. This allows the EL3/Root world to access features for its own execution at EL3 (e.g. PAuth).
* It should be invoked on the cold and warm boot entry paths and also in all the possible exception handlers routing to EL3 at runtime. Cold and warm boot paths are handled by including the setup_el3_context function in the "el3_entrypoint_common" macro, which gets invoked on both entry paths.
* At runtime, the EL3 context is set up while preparing to enter EL3, via the "prepare_el3_entry" routine.
Change-Id: I5c090978c54a53bc1c119d1bc5fa77cd8813cdc2
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
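An illustrative C rendering of the root-context restore (the real setup_el3_context is assembly; bit positions follow the Arm ARM, and the DIT write assumes v8.4 assembler support):

    #include <stdint.h>

    #define MDCR_SDD_BIT    (1ULL << 27) /* MDCR_EL3.SDD */
    #define SCR_EA_BIT      (1ULL << 3)  /* SCR_EL3.EA */
    #define SCR_SIF_BIT     (1ULL << 9)  /* SCR_EL3.SIF */
    #define PMCR_EL0_DP_BIT (1ULL << 5)  /* PMCR_EL0.DP */

    static void setup_el3_context_sketch(void)
    {
        uint64_t v;

        __asm__ volatile("mrs %0, mdcr_el3" : "=r"(v));
        __asm__ volatile("msr mdcr_el3, %0" : : "r"(v | MDCR_SDD_BIT));

        __asm__ volatile("mrs %0, scr_el3" : "=r"(v));
        __asm__ volatile("msr scr_el3, %0" : : "r"(v | SCR_EA_BIT | SCR_SIF_BIT));

        __asm__ volatile("mrs %0, pmcr_el0" : "=r"(v));
        __asm__ volatile("msr pmcr_el0, %0" : : "r"(v | PMCR_EL0_DP_BIT));

        /* PSTATE.DIT: data-independent timing while executing at EL3. */
        __asm__ volatile("msr dit, #1");
    }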

4b6e4e61 | 20-Aug-2024 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes from topic "mp/simd_ctxt_mgmt" into integration
* changes:
  feat(fvp): allow SIMD context to be put in TZC DRAM
  docs(simd): introduce CTX_INCLUDE_SVE_REGS build flag
  feat(fvp): add Cactus partition manifest for EL3 SPMC
  chore(simd): remove unused macros and utilities for FP
  feat(el3-spmc): support simd context management upon world switch
  feat(trusty): switch to simd_ctx_save/restore apis
  feat(pncd): switch to simd_ctx_save/restore apis
  feat(spm-mm): switch to simd_ctx_save/restore APIs
  feat(simd): add rules to rationalize simd ctxt mgmt
  feat(simd): introduce simd context helper APIs
  feat(simd): add routines to save, restore sve state
  feat(simd): add sve state to simd ctxt struct
  feat(simd): add data struct for simd ctxt management

6d5319af | 17-Jun-2024 | Madhukar Pappireddy <madhukar.pappireddy@arm.com>
feat(simd): add routines to save, restore sve state
This adds assembly routines to save and restore SVE registers. In order to share, between FPU and SVE, the code that saves and restores FPCR and FPSR, the patch converts the code for those registers into a macro. Since we will be using simd_ctx_t to save and restore the FPU as well, we use offsets in simd_ctx_t for FPSR and FPCR. Since simd_ctx_t has the same structure at the beginning as fp_regs_t, those offsets should be the same as the CTX_FP_* offsets when SVE is not enabled. Note that the code also saves and restores the FPEXC32 register along with FPSR and FPCR.
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Signed-off-by: Okash Khawaja <okash@google.com>
Change-Id: I120c02359794aa6bb6376a464a9afe98bd84ae60
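A sketch of the layout constraint this relies on (member sizes illustrative): simd_ctx_t must begin exactly like fp_regs_t so the shared FPSR/FPCR offsets line up:

    #include <stddef.h>

    #define CASSERT(cond, msg) typedef char msg[(cond) ? 1 : -1]

    typedef struct fp_regs {
        unsigned long long q[64]; /* Q0-Q31, 512 bytes */
        unsigned long long fpsr;
        unsigned long long fpcr;
    } fp_regs_t;

    typedef struct simd_ctx {
        unsigned long long q[64]; /* same prefix as fp_regs_t */
        unsigned long long fpsr;
        unsigned long long fpcr;
        /* SVE state (Z/P/FFR) would follow when SVE is enabled. */
    } simd_ctx_t;

    /* The shared FPCR/FPSR save/restore macro relies on matching offsets. */
    CASSERT(offsetof(simd_ctx_t, fpsr) == offsetof(fp_regs_t, fpsr),
        assert_fpsr_offset_mismatch);
    CASSERT(offsetof(simd_ctx_t, fpcr) == offsetof(fp_regs_t, fpcr),
        assert_fpcr_offset_mismatch);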

553b70c3 | 19-Aug-2024 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes from topic "ar/asymmetricSupport" into integration
* changes:
  feat(tc): enable trbe errata flags for Cortex-A520 and X4
  feat(cm): asymmetric feature support for trbe
  refactor(errata-abi): move EXTRACT_PARTNUM to arch.h
  feat(cpus): workaround for Cortex-A520(2938996) and Cortex-X4(2726228)
  feat(tc): make SPE feature asymmetric
  feat(cm): handle asymmetry for SPE feature
  feat(cm): support for asymmetric feature among cores
  feat(cpufeat): add new feature state for asymmetric features

43d1d951 | 18-Jul-2024 | Manish Pandey <manish.pandey2@arm.com>
feat(cpufeat): add new feature state for asymmetric features
Introduce a new feature state, CHECK_ASYMMETRIC, to cater for features which are asymmetric across cores. This state is useful for platforms which have architecturally asymmetric cores (a feature is present in only one type of core, e.g. big). This state is similar to FEAT_STATE_CHECK (dynamic detection), except that the feature state is also checked on each core during the warm boot path, overriding the context (just for asymmetric features) which was set up by the core executing the CPU_ON call.
Only the Non-secure context is re-checked, as the Secure and Realm contexts are created on the same core.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ic78a0b6ca996e0d7881c43da1a6a0c422f528ef3
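A sketch of the idea (enum values and helpers are illustrative, not TF-A's actual definitions):

    #include <stdbool.h>

    typedef enum {
        FEAT_STATE_DISABLED,
        FEAT_STATE_ALWAYS,
        FEAT_STATE_CHECK,            /* probed dynamically, once */
        FEAT_STATE_CHECK_ASYMMETRIC, /* re-probed per core on warm boot */
    } feat_state_t;

    bool feat_present_on_this_core(void);     /* assumed ID-register probe */
    void ns_context_enable_feat(bool enable); /* assumed context hook */

    /* Warm boot, on the core that is coming up: the core that handled
     * CPU_ON may implement a different feature set, so override the
     * Non-secure context to match this core. */
    void warmboot_fixup_asymmetric_feats(void)
    {
        ns_context_enable_feat(feat_present_on_this_core());
    }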

4bcf5b84 | 29-Jul-2024 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes from topic "jc/refact_el1_ctx" into integration
* changes:
  refactor(cm): convert el1-ctx assembly offset entries to c structure
  feat(cm): add explicit context entries for ERRATA_SPECULATIVE_AT