| 7e84f3cf | 15-Mar-2024 |
Tushar Khandelwal <tushar.khandelwal@arm.com> |
feat(rmmd): add FEAT_MEC support
This patch provides architectural support for further use of Memory Encryption Contexts (MEC) by declaring the necessary registers, bits, masks, helpers and values and modifying the necessary registers to enable FEAT_MEC.
Signed-off-by: Tushar Khandelwal <tushar.khandelwal@arm.com> Signed-off-by: Juan Pablo Conde <juanpablo.conde@arm.com> Change-Id: I670dbfcef46e131dcbf3a0b927467ebf6f438fa4
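As a rough illustration of the kind of declarations the commit describes, the sketch below shows a TF-A-style feature check plus an SCR_EL3 enable. The field position, bit position, and helper name are assumptions for the sketch, not taken from the patch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed accessor in the style of TF-A's generated read_*() helpers. */
extern uint64_t read_id_aa64mmfr3_el1(void);

/* Field/bit positions below are assumptions for illustration only. */
#define ID_AA64MMFR3_EL1_MEC_SHIFT	28U
#define ID_AA64MMFR3_EL1_MEC_MASK	0xfULL
#define SCR_MECEN_BIT			(1ULL << 49)

/* Feature check in the style of TF-A's is_feat_*_supported() helpers. */
static inline bool is_feat_mec_supported(void)
{
	uint64_t mec = (read_id_aa64mmfr3_el1() >> ID_AA64MMFR3_EL1_MEC_SHIFT) &
		       ID_AA64MMFR3_EL1_MEC_MASK;

	return mec != 0ULL;
}

/* Enable MEC for lower ELs by setting the (assumed) SCR_EL3 enable bit. */
static inline uint64_t mec_enable(uint64_t scr_el3)
{
	if (is_feat_mec_supported()) {
		scr_el3 |= SCR_MECEN_BIT;
	}
	return scr_el3;
}
```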
|
| 54c9c68a | 19-Apr-2024 |
Nithin G <nithing@amd.com> |
fix(el3-runtime): add const qualifier
This corrects the MISRA violation C2012-8.13: A pointer should point to a const-qualified type whenever possible. Added the const qualifier to pointers in the function arguments.
Change-Id: Idf4b8ea7842304849242a06d6ada73f11afc8cde Signed-off-by: Nithin G <nithing@amd.com> Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
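For illustration, a minimal sketch of a Rule 8.13 change of the kind described above (the function and names are hypothetical, not the actual patch):

```c
#include <stdint.h>

/* Before: non-compliant; the pointee is never modified, so the pointer
 * should point to a const-qualified type. */
uint32_t checksum_before(uint8_t *buf, unsigned int len)
{
	uint32_t sum = 0U;

	for (unsigned int i = 0U; i < len; i++) {
		sum += buf[i];
	}
	return sum;
}

/* After: the const qualifier documents and enforces read-only access. */
uint32_t checksum_after(const uint8_t *buf, unsigned int len)
{
	uint32_t sum = 0U;

	for (unsigned int i = 0U; i < len; i++) {
		sum += buf[i];
	}
	return sum;
}
```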
|
| 858dc35c | 25-Apr-2024 |
Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com> |
fix(el3-runtime): add missing curly braces
This corrects the MISRA violation C2012-15.6: The body of an iteration-statement or a selection-statement shall be a compound-statement. Enclosed the statement body within curly braces.
Change-Id: I14a69f79aba98e243fa29a50914431358efa2a49 Signed-off-by: Nithin G <nithing@amd.com> Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
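For illustration, a minimal sketch of a Rule 15.6 change of the kind described above (the function is hypothetical, not the actual patch):

```c
/*
 * Before (non-compliant): a single-statement selection body, e.g.
 *
 *     if (count == 0U)
 *         return -1;
 */

/* After: the body is a compound statement enclosed in braces. */
int get_index(unsigned int count)
{
	if (count == 0U) {
		return -1;
	}
	return (int)(count - 1U);
}
```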
|
| 0a580b51 | 15-Nov-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
SVE and SME aren't enabled symmetrically for all worlds, but EL3 needs to context switch them nonetheless. Previously, this had to happen by writing the enable bits just before reading/writing the relevant context. But since the introduction of root context, this need not be the case. We can have these enables always be present for EL3 and save on some work (and ISBs!) on every context switch.
We can also hoist ZCR_EL3 to a never changing register, as we set its value to be identical for every world, which happens to be the one we want for EL3 too.
Change-Id: I3d950e72049a298008205ba32f230d5a5c02f8b0 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
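A minimal sketch of the ZCR_EL3 idea, assuming a TF-A-style write accessor (names illustrative): since every world wants the same value, and it suits EL3 too, program the register once at initialisation instead of on every context switch.

```c
#include <stdint.h>

/* Assumed TF-A-style accessor; illustration only. */
extern void write_zcr_el3(uint64_t val);

/* ZCR_EL3.LEN is a 4-bit field; writing 0xf requests the maximum
 * implemented vector length. */
#define ZCR_EL3_LEN_MAX		0xfULL

/* Set ZCR_EL3 once during EL3 init; no per-world save/restore needed. */
void sve_init_el3(void)
{
	write_zcr_el3(ZCR_EL3_LEN_MAX);
}
```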
|
| 83ec7e45 | 06-Nov-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
perf(amu): greatly simplify AMU context management
The current code is incredibly resilient to updates to the spec and has worked quite well so far. However, recent implementations expose a weakness in that it is rather slow. A large part of it is written in assembly, making it opaque to the compiler for optimisation. The future-proofing requires reading registers that are effectively `volatile`, making it even harder for the compiler, and adds lots of implicit barriers, making it hard for the microarchitecture to optimise as well.
We can make a few assumptions, checked by a few well-placed asserts, and remove a lot of this burden. For a start, at the moment there are 4 group 0 counters with static assignments. Contexting them is a trivial affair that doesn't need a loop. Similarly, there can only be up to 16 group 1 counters. Contexting them is a bit harder, but we can manage with a single branch and a fall-through switch. If/when both of these change, we have a pair of asserts and the feature detection mechanism to guard us against pretending that we support something we don't.
We can drop contexting of the offset registers. They are fully accessible by EL2 and, as such, it is EL2's responsibility to preserve them over powerdown.
Another small thing we can do is pass core_pos into the hook. The caller already knows which core we're running on, so we don't need to call this non-trivial function again.
Finally, knowing this, we don't really need the auxiliary AMUs to be described by the device tree. Linux doesn't care at the moment, and any information we need for EL3 can be neatly placed in a simple array.
All of this, combined with lifting the actual saving out of assembly, reduces the instructions to save the context from 180 to 40, including a lot fewer branches. The code is also much shorter and easier to read.
Also propagate to aarch32 so that the two don't diverge too much.
Change-Id: Ib62e6e9ba5be7fb9fb8965c8eee148d5598a5361 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
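A minimal sketch of the "single branch with a fall-through switch" idea (the counter-read helper is hypothetical; the real code reads each AMEVCNTR1<n>_EL0 register individually):

```c
#include <stdint.h>

/* Assumed helper returning the value of AMEVCNTR1<n>_EL0; name is
 * hypothetical and hides the per-register MRS behind an index. */
extern uint64_t read_amu_grp1_cnt(unsigned int idx);

/*
 * Branch once on the number of implemented group 1 counters, then fall
 * through so counters [0 .. nr_cnt-1] are saved without a loop.
 */
void amu_save_group1(uint64_t *ctx, unsigned int nr_cnt)
{
	switch (nr_cnt) {
	case 16U: ctx[15] = read_amu_grp1_cnt(15U); /* fallthrough */
	case 15U: ctx[14] = read_amu_grp1_cnt(14U); /* fallthrough */
	case 14U: ctx[13] = read_amu_grp1_cnt(13U); /* fallthrough */
	case 13U: ctx[12] = read_amu_grp1_cnt(12U); /* fallthrough */
	case 12U: ctx[11] = read_amu_grp1_cnt(11U); /* fallthrough */
	case 11U: ctx[10] = read_amu_grp1_cnt(10U); /* fallthrough */
	case 10U: ctx[9]  = read_amu_grp1_cnt(9U);  /* fallthrough */
	case 9U:  ctx[8]  = read_amu_grp1_cnt(8U);  /* fallthrough */
	case 8U:  ctx[7]  = read_amu_grp1_cnt(7U);  /* fallthrough */
	case 7U:  ctx[6]  = read_amu_grp1_cnt(6U);  /* fallthrough */
	case 6U:  ctx[5]  = read_amu_grp1_cnt(5U);  /* fallthrough */
	case 5U:  ctx[4]  = read_amu_grp1_cnt(4U);  /* fallthrough */
	case 4U:  ctx[3]  = read_amu_grp1_cnt(3U);  /* fallthrough */
	case 3U:  ctx[2]  = read_amu_grp1_cnt(2U);  /* fallthrough */
	case 2U:  ctx[1]  = read_amu_grp1_cnt(1U);  /* fallthrough */
	case 1U:  ctx[0]  = read_amu_grp1_cnt(0U);  /* fallthrough */
	default:  break;
	}
}
```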
|
| a8a5d39d | 24-Feb-2025 |
Manish V Badarkhe <manish.badarkhe@arm.com> |
Merge changes from topic "bk/errata_speed" into integration
* changes:
  refactor(cpus): declare runtime errata correctly
  perf(cpus): make reset errata do fewer branches
  perf(cpus): inline the init_cpu_data_ptr function
  perf(cpus): inline the reset function
  perf(cpus): inline the cpu_get_rev_var call
  perf(cpus): inline cpu_rev_var checks
  refactor(cpus): register DSU errata with the errata framework's wrappers
  refactor(cpus): convert checker functions to standard helpers
  refactor(cpus): convert the Cortex-A65 to use the errata framework
  fix(cpus): declare reset errata correctly
|
| b07c317f | 19-Nov-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
perf(cpus): inline the init_cpu_data_ptr function
Similar to the reset function inlining, inline this function too to avoid a costly branch, at no extra cost.
Change-Id: I54cc399e570e9d0f373ae13c7224d32dbdfae1e5 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 41ae0473 | 03-Feb-2025 |
Sona Mathew <sonarebecca.mathew@arm.com> |
fix(rmm): add support for BRBCR_EL2 register for feat_brbe
Currently BRBE is disabled for the Realm world in EL3 by setting the SBRBE field in the MDCR_EL3 register to 0b00. This patch removes the switching of the SBRBE field and adds context switching of the BRBCR_EL2 register.
Change-Id: I66ca13edefc37e40fa265fd438b0b66f7d09b4bb Signed-off-by: Sona Mathew <sonarebecca.mathew@arm.com>
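A minimal sketch of the kind of save/restore pair the commit describes, assuming TF-A-style accessors and a hypothetical context layout (names illustrative, not the actual patch):

```c
#include <stdint.h>

/* Assumed accessors; illustration only. */
extern uint64_t read_brbcr_el2(void);
extern void write_brbcr_el2(uint64_t val);

/* Hypothetical slice of the EL2 system register context. */
typedef struct {
	uint64_t brbcr_el2;
	/* ... other EL2 registers ... */
} el2_sysregs_ctx_t;

/* Instead of masking BRBE via MDCR_EL3.SBRBE, save and restore BRBCR_EL2
 * across world switches so each world keeps its own setting. */
void brbe_save_el2(el2_sysregs_ctx_t *ctx)
{
	ctx->brbcr_el2 = read_brbcr_el2();
}

void brbe_restore_el2(const el2_sysregs_ctx_t *ctx)
{
	write_brbcr_el2(ctx->brbcr_el2);
}
```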
|
| 74dd541f | 20-Feb-2025 |
Olivier Deprez <olivier.deprez@arm.com> |
Merge "fix(simd): fix base register in fpregs_context_*" into integration |
| 8c52ca8c | 10-Dec-2024 |
Sona Mathew <sonarebecca.mathew@arm.com> |
refactor(cpufeat): add FGT2 and Debugv8p9 to realm state
Enable FEAT_FGT2 and FEAT_Debugv8p9 in Realm state as well.
Change-Id: Ib9cdde3af328ffdd8718b1ba404265757f2e542b Signed-off-by: Sona Mathew <sonarebecca.mathew@arm.com>
|
| 7455cd17 | 29-Jan-2025 |
Govindraj Raja <govindraj.raja@arm.com> |
fix(cpus): workaround for accessing ICH_VMCR_EL2
When ICH_VMCR_EL2.VBPR1 is written in Secure state (SCR_EL3.NS==0) and then subsequently read in Non-secure state (SCR_EL3.NS==1), a wrong value might be returned. The same issue exists in the opposite direction.
Add a workaround in the EL3 software that performs context save/restore on a change of Security state: when accessing ICH_VMCR_EL2, use a value of SCR_EL3.NS that reflects the Security state that owns the data being saved or restored. For example, EL3 software should set SCR_EL3.NS to 1 when saving or restoring the ICH_VMCR_EL2 value for Non-secure (or Realm) state, and clear SCR_EL3.NS to 0 when saving or restoring the ICH_VMCR_EL2 value for Secure state.
SDEN documentation: https://developer.arm.com/documentation/SDEN-1775101/latest/
Change-Id: I9f0403601c6346276e925f02eab55908b009d957 Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
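A minimal sketch of the workaround pattern, assuming TF-A-style accessors and barrier wrappers (names illustrative, not the actual patch):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed accessors; illustration only. */
extern uint64_t read_scr_el3(void);
extern void write_scr_el3(uint64_t val);
extern uint64_t read_ich_vmcr_el2(void);
extern void isb(void);

#define SCR_NS_BIT	(1ULL << 0)

/* Make SCR_EL3.NS reflect the Security state that owns the ICH_VMCR_EL2
 * value being saved, then put SCR_EL3 back the way it was. */
uint64_t save_ich_vmcr_el2(bool owner_is_ns)
{
	uint64_t scr = read_scr_el3();
	uint64_t val;

	write_scr_el3(owner_is_ns ? (scr | SCR_NS_BIT) : (scr & ~SCR_NS_BIT));
	isb();

	val = read_ich_vmcr_el2();

	write_scr_el3(scr);	/* restore the original SCR_EL3 */
	isb();

	return val;
}
```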
|
| 09ada2f8 | 14-Dec-2024 |
Andrei Homescu <ahomescu@google.com> |
fix(simd): fix base register in fpregs_context_*
The fpregs_state_* macros require the base register to point to the start of the simd_regs_t structure. The fpregs_context_* functions were passing the address incorrectly shifted by 512 bytes.
Signed-off-by: Andrei Homescu <ahomescu@google.com> Change-Id: I757a26f8910c2ab648116e001e06baa3deb2eec4
|
| b53089d8 | 27-Jan-2025 |
Manish Pandey <manish.pandey2@arm.com> |
Merge "feat(pmuv3): setup per world MDCR_EL3" into integration |
| c95aa2eb | 14-Jan-2025 |
Mateusz Sulimowicz <matsul@google.com> |
feat(pmuv3): setup per world MDCR_EL3
The MDCR_EL3 register will be context switched across all worlds, so the pmuv3 init has to be part of context management initialization.
Change-Id: I10ef7a3071c0fc5c11a93d3c9c2a95ec8c6493bf Signed-off-by: Mateusz Sulimowicz <matsul@google.com>
|
| 6b8df7b9 | 09-Jan-2025 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
feat(mops): enable FEAT_MOPS in EL3 when INIT_UNUSED_NS_EL2=1
FEAT_MOPS, mandatory from Arm v8.8, is typically managed in EL2. However, in configurations where NS_EL2 is not enabled, EL3 must set the HCRX_EL2.MSCEn bit to 1 to enable the feature.
This patch ensures FEAT_MOPS is enabled by setting HCRX_EL2.MSCEn to 1.
Change-Id: Ic4960e0cc14a44279156b79ded50de475b3b21c5 Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
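A minimal sketch of the enable described above, assuming TF-A-style accessors; the bit position is an assumption for illustration:

```c
#include <stdint.h>

/* Assumed accessors; illustration only. */
extern uint64_t read_hcrx_el2(void);
extern void write_hcrx_el2(uint64_t val);

/* Bit position assumed for illustration (HCRX_EL2.MSCEn). */
#define HCRX_EL2_MSCEN_BIT	(1ULL << 11)

/* When EL3 owns the otherwise-unused NS EL2 (INIT_UNUSED_NS_EL2=1), it must
 * set HCRX_EL2.MSCEn itself so the FEAT_MOPS instructions are not trapped. */
void init_unused_ns_el2_mops(void)
{
	write_hcrx_el2(read_hcrx_el2() | HCRX_EL2_MSCEN_BIT);
}
```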
|
| 79c0c7fa | 10-Dec-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor(cm): clean up per-world context
In preparation for SMCCC_ARCH_FEATURE_AVAILABILITY, it is useful for context to be directly related to the underlying system. Currently, certain bits like SCR_EL3.APK are always set with the understanding that they will only take effect if the feature is present.
However, that is problematic for SMCCC_ARCH_FEATURE_AVAILABILITY (an SMCCC call to report which features firmware enables), as simply reading the enable bit may contradict the ID register, like the APK bit above for a system with no Pauth present.
This patch cleans up these cases. Add a check for PAuth's presence so that the APK bit remains unset if PAuth is not present. Also move SPE and TRBE enablement to only the NS context. They already enable the features for NS only and disable them for the Secure and Realm worlds; this change only makes these worlds' context read 0 for easy bitmasking.
There's only a single snag on SPE and TRBE. Currently, their fields have the same values and any world asymmetry is handled by hardware. Since we don't want to do that, the buffers' ownership will change if we just set the fields to 0 for non-NS worlds. Doing that, however, exposes Secure state to a potential denial of service attack - a malicious NS can enable profiling and call an SMC. Then, the owning security state will change and since no SPE/TRBE registers are contexted, Secure state will start generating records. Always have NS world own the buffers to prevent this.
Finally, get rid of manage_extensions_common() as it's just a level of indirection to enable a single feature.
Change-Id: I487bd4c70ac3e2105583917a0e5499e0ee248ed9 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
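A minimal sketch of the PAuth clean-up, assuming a TF-A-style presence helper (the helper name is assumed; the SCR_EL3 APK/API bit positions are architectural):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed feature-detection helper; illustration only. */
extern bool is_armv8_3_pauth_present(void);

#define SCR_APK_BIT	(1ULL << 16)
#define SCR_API_BIT	(1ULL << 17)

/* Only set the PAuth enable bits when the feature is present, so the
 * context bits can be reported directly by SMCCC_ARCH_FEATURE_AVAILABILITY
 * without contradicting the ID registers. */
uint64_t setup_scr_el3_pauth(uint64_t scr_el3)
{
	if (is_armv8_3_pauth_present()) {
		scr_el3 |= SCR_APK_BIT | SCR_API_BIT;
	}
	return scr_el3;
}
```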
|
| b41b9997 | 19-Dec-2024 |
Manish Pandey <manish.pandey2@arm.com> |
Merge changes from topic "bk/smccc_feature" into integration
* changes: fix(trbe): add a tsb before context switching fix(spe): add a psb before updating context and remove context saving |
| 6595f4cb | 13-Dec-2024 |
Igor Podgainõi <igor.podgainoi@arm.com> |
fix(cm): fix context management SYSREG128 write macros
This patch fixes a bug which was introduced in commit 3065513 related to improper saving of EL1 context in the context management library code when using 128-bit system registers.
Bug explanation: The function el1_sysregs_context_save still used the normal macros that read all the system registers related to the EL1 context, which then involved casting them to uint64_t and eventually writing them to a memory structure. This means that the context management library was saving EL1-related SYSREG128 registers with the upper 64 bits zeroed out.
Alternative macros had previously been introduced for the EL2 context in the aforementioned commit, but not for EL1.
Some refactoring has also been done as part of this patch:
- Re-added "common" back to write_el2_ctx_common_sysreg128
- Added dummy SYSREG128 macros for cases when some features are disabled
- Removed some newlines
Change-Id: I15aa2190794ac099a493e5f430220b1c81e1b558 Signed-off-by: Igor Podgainõi <igor.podgainoi@arm.com>
|
| 73d98e37 | 02-Dec-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
fix(trbe): add a tsb before context switching
Just like for SPE, we need to synchronize TRBE samples before we change the context to ensure everything goes where it was intended to. If that is not done, the in-flight entries might use any piece of now incorrect context as there are no implicit ordering requirements.
Prior to root context, the buffer drain hooks would have done that. But now that must happen much earlier. So add a tsb to prepare_el3_entry as well.
Annoyingly, the barrier can be reordered relative to other instructions by default (rule RCKVWP). So add an isb after the psb/tsb to ensure that they are ordered, at least as far as context is concerned.
Then, drop the buffer draining hooks. Everything they need to do is already done by now. There's a notable difference in that there are no dsb-s now. Since EL3 does not access the buffers or the feature specific context, we don't need to wait for them to finish.
Finally, drop a stray isb in the context saving macro. It is now absorbed into root context, but was missed.
Change-Id: I30797a40ac7f91d0bb71ad271a1597e85092ccd5 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
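A C-level sketch of the ordering described above (the real change lives in the EL3 entry assembly macros); the barrier wrappers are assumed TF-A-style helpers:

```c
/* Assumed barrier wrappers; illustration only. */
extern void psb_csync(void);
extern void tsb_csync(void);
extern void isb(void);

void prepare_el3_entry_barriers(void)
{
	/* Flush in-flight SPE/TRBE data before EL3 mutates any context. */
	psb_csync();
	tsb_csync();

	/* psb/tsb may be reordered against later instructions (rule RCKVWP),
	 * so order them with an isb before the context is touched. */
	isb();
}
```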
|
| f8088733 | 21-Nov-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
fix(spe): add a psb before updating context and remove context saving
In the chapter about FEAT_SPE (D16.4 specifically) it is stated that "Sampling is always disabled at EL3". That means that disabling sampling (writing PMBLIMITR_EL1.E to 0) is redundant and can be removed. The only reason we save/restore SPE context is because of that disable, so those can be removed too.
There's the issue of draining the profiling buffer though. No new samples will have been generated since entering EL3. However, old samples might still be in-flight. Unless synchronised by a psb csync, those might be affected by our extensive context mutation. Adding a psb in prepare_el3_entry should cater for that. Note that prior to the introduction of root context this was not a problem as context remained unchanged and the hooks took care of the rest.
Then, the only time we care about the buffer actually making it to memory is when we exit coherency. On HW_ASSISTED_COHERENCY systems we don't have to do anything, it should be handled for us. Systems without it need a dsb to wait for them to complete. There should be one already in each cpu's powerdown hook which should work.
While on the topic of barriers, the esb barrier is no longer used. Remove it.
Change-Id: I9736fc7d109702c63e7d403dc9e2a4272828afb2 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| a57e18e4 | 11-Nov-2024 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
feat(fpmr): disable FPMR trap
This patch enables support for FEAT_FPMR by enabling access to the FPMR register. It achieves this by setting the EnFPM bit of SCR_EL3. This feature is currently enabled for the NS world only.
Reference: https://developer.arm.com/documentation/109697/2024_09/Feature-descriptions/The-Armv9-5-architecture-extension?lang=en
Change-Id: I580c409b9b22f8ead0737502280fb9093a3d5dd2 Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
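A minimal sketch of the enable; the bit position is an assumption for illustration, not taken from the patch:

```c
#include <stdint.h>

/* Bit position assumed for illustration (SCR_EL3.EnFPM). */
#define SCR_ENFPM_BIT	(1ULL << 50)

/* Allow lower-EL access to FPMR in the NS world's SCR_EL3 context value. */
uint64_t setup_scr_el3_fpmr(uint64_t scr_el3_ns)
{
	return scr_el3_ns | SCR_ENFPM_BIT;
}
```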
|
| 19d52a83 | 09-Aug-2024 |
Andre Przywara <andre.przywara@arm.com> |
feat(cpufeat): add ENABLE_FEAT_LS64_ACCDATA
Armv8.6 introduced the FEAT_LS64 extension, which provides a 64 *byte* store instruction. A related instruction is ST64BV0, which will replace the lowest 32 bits of the data with a value taken from the ACCDATA_EL1 system register (so that EL0 cannot alter them). Using the ST64BV0 instruction and accessing the ACCDATA_EL1 system register are guarded by two SCR_EL3 bits, which we should set to avoid a trap into EL3 when lower ELs use one of them.
Add the required bits and pieces to make this feature usable:
- Add the ENABLE_FEAT_LS64_ACCDATA build option (defaulting to 0).
- Add the CPUID and SCR_EL3 bit definitions associated with FEAT_LS64.
- Add a feature check to check for the existing four variants of the LS64 feature and detect future extensions.
- Add code to save and restore the ACCDATA_EL1 register on secure/non-secure context switches.
- Enable the feature with runtime detection for FVP and Arm FPGA.
Please note that the *basic* FEAT_LS64 feature does not feature any trap bits, it's only the addition of the ACCDATA_EL1 system register that adds these traps and the SCR_EL3 bits.
Change-Id: Ie3e2ca2d9c4fbbd45c0cc6089accbb825579138a Signed-off-by: Andre Przywara <andre.przywara@arm.com>
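A minimal sketch of the variant-aware feature check, assuming a TF-A-style ID register accessor; the field layout and variant values are stated as assumptions for the sketch:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed accessor; illustration only. */
extern uint64_t read_id_aa64isar1_el1(void);

/* Assumed field layout: ID_AA64ISAR1_EL1.LS64, values 1..3 for the
 * LS64, LS64_V and LS64_ACCDATA variants respectively. */
#define ID_AA64ISAR1_LS64_SHIFT		60U
#define ID_AA64ISAR1_LS64_MASK		0xfULL
#define LS64_ACCDATA_VARIANT		3ULL

/* Accept the ACCDATA variant (and, via ">=", any future higher value)
 * before enabling the SCR_EL3 bits and ACCDATA_EL1 context switching. */
bool is_feat_ls64_accdata_supported(void)
{
	uint64_t ls64 = (read_id_aa64isar1_el1() >> ID_AA64ISAR1_LS64_SHIFT) &
			ID_AA64ISAR1_LS64_MASK;

	return ls64 >= LS64_ACCDATA_VARIANT;
}
```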
|
| 830ed392 | 05-Nov-2024 |
Govindraj Raja <govindraj.raja@arm.com> |
Merge "feat(feat_sctlr2): enable FEAT_SCTLR2 for Realm world" into integration |
| b17fecd6 | 28-Oct-2024 |
Javier Almansa Sobrino <javier.almansasobrino@arm.com> |
feat(feat_sctlr2): enable FEAT_SCTLR2 for Realm world
Change-Id: I62e769ae796bbeb41741c2c421a5f129d875f5fb Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com> |
| 30655136 | 06-Sep-2024 |
Govindraj Raja <govindraj.raja@arm.com> |
feat(d128): add support for FEAT_D128
This patch disables trapping to EL3 when the FEAT_D128 specific registers are accessed by setting the SCR_EL3.D128En bit.
If FEAT_D128 is implemented, then FEAT_SYSREG128 is implemented. With FEAT_SYSREG128 certain system registers are treated as 128-bit, so we should be context saving and restoring 128-bits instead of 64-bit when FEAT_D128 is enabled.
FEAT_SYSREG128 adds support for the MRRS and MSRR instructions, which let us read and write 128-bit system registers. Refer to the Arm Architecture Reference Manual for further details.
Change the FVP platform to default to handling this as a dynamic option so the right decision can be made by the code at runtime.
Change-Id: I1a53db5eac29e56c8fbdcd4961ede3abfcb2411a Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com> Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
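A rough sketch of the two pieces described above; the SCR_EL3 bit position and the context layout are assumptions for illustration, not taken from the patch:

```c
#include <stdint.h>

/* Bit position assumed for illustration (SCR_EL3.D128En). */
#define SCR_D128EN_BIT		(1ULL << 47)

/* With FEAT_SYSREG128, affected registers must be held as 128-bit values
 * in the context rather than truncated to 64 bits; hypothetical slice. */
typedef struct {
	__uint128_t ttbr0_el2;	/* example of a register widened by FEAT_D128 */
} el2_sysregs128_ctx_t;

/* Disable trapping of FEAT_D128 register accesses to EL3. */
uint64_t setup_scr_el3_d128(uint64_t scr_el3)
{
	return scr_el3 | SCR_D128EN_BIT;
}
```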
|