# 7832483e | 30-Oct-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes I6e4cd8b5,Id5086b3c,I070d62bb into integration
* changes:
  fix(el3-runtime): allow RNDR access at EL3 even when RNG_TRAP is enabled
  fix(smccc): don't panic on a feature availability call with FEAT_RNG_TRAP
  fix(bl1): use per-world context correctly

# a873d26f | 22-Oct-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
fix(bl1): use per-world context correctly
Currently, the configuration with BL1 and BL2 at SEL1 will transition via el3_exit which will restore per-world context. However, that context is never written to and so zeroes end up in registers, which is not necessarily correct.
This patch gets BL1 to call cm_manage_extensions_per_world() whenever BL2 runs in a lower EL. This allows the per-world registers to have the reset values we intend. An accompanying call to cm_manage_extensions_el3() is also added for completeness.
Doing this shows a small deficiency in cptr_el3: the TFP and TCPAC bits change a lot. This patch makes them consistent by always setting TCPAC and TFP to 0, which unconditionally enables access to CPTR_EL2 and FPCR by default, as they are always accessible. Other places that manipulate the TFP bit are removed.
A nice side effect of all of this is that we're now in a position to enable and use any architectural extension in BL2.
Change-Id: I070d62bbf8e9d9b472caf7e2c931c303523be308 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
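As a rough sketch of the CPTR_EL3 change described above, assuming the architectural bit positions (TCPAC is bit 31, TFP is bit 10) and an illustrative helper name rather than TF-A's actual API:

    #include <stdint.h>

    #define CPTR_EL3_TCPAC_BIT   (1ULL << 31)  /* 1: trap CPACR_EL1/CPTR_EL2 accesses to EL3 */
    #define CPTR_EL3_TFP_BIT     (1ULL << 10)  /* 1: trap FP/SIMD (and FPCR) accesses to EL3 */

    /* Illustrative helper: derive the per-world CPTR_EL3 image with both trap
     * bits cleared, so CPTR_EL2 and FPCR are always accessible from lower ELs. */
    static inline uint64_t cptr_el3_untrap_fp(uint64_t cptr)
    {
            return cptr & ~(CPTR_EL3_TCPAC_BIT | CPTR_EL3_TFP_BIT);
    }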

# 3077e437 | 18-Sep-2025 | Manish Pandey <manish.pandey2@arm.com>
Merge "fix(cpufeat): configure CPTR_EL2.ZEN and CPTR_EL2.TZ to match Linux" into integration

# 7f471c59 | 01-Sep-2025 | Marek Vasut <marek.vasut+renesas@mailbox.org>
fix(cpufeat): configure CPTR_EL2.ZEN and CPTR_EL2.TZ to match Linux
Linux Documentation/arch/arm64/booting.rst states that:

  "For CPUs with the Scalable Vector Extension (FEAT_SVE) present:
   ...
   - If the kernel is entered at EL1 and EL2 is present:
     - CPTR_EL2.TZ (bit 8) must be initialised to 0b0.
     - CPTR_EL2.ZEN (bits 17:16) must be initialised to 0b11."

Without these settings, the Linux kernel hangs on boot when trying to use SVE. Adjust the register settings to match the Linux kernel's expectations.
Signed-off-by: Marek Vasut <marek.vasut+renesas@mailbox.org> Change-Id: I9a72810dd902b08f9c61f157cc31e603aad2f73a
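A minimal sketch of the two fields named in the quote, assuming the architectural encodings (TZ is bit 8 of the non-E2H layout, ZEN is bits 17:16 of the E2H/CPACR-style layout, selected by HCR_EL2.E2H); the helper is illustrative only:

    #include <stdint.h>

    #define CPTR_EL2_TZ_BIT        (1ULL << 8)                  /* non-E2H layout: 1 traps SVE    */
    #define CPTR_EL2_ZEN_SHIFT     16U                          /* E2H layout: SVE enable field   */
    #define CPTR_EL2_ZEN_NO_TRAP   (3ULL << CPTR_EL2_ZEN_SHIFT) /* 0b11: no EL1/EL0 SVE traps     */

    /* Illustrative: adjust an initial CPTR_EL2 image so a kernel entered at EL1
     * sees TZ == 0 (non-E2H) or ZEN == 0b11 (E2H), as booting.rst requires. */
    static inline uint64_t cptr_el2_linux_boot_value(uint64_t cptr, int e2h)
    {
            if (e2h)
                    return cptr | CPTR_EL2_ZEN_NO_TRAP;
            else
                    return cptr & ~CPTR_EL2_TZ_BIT;
    }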

# 2e0354f5 | 25-Feb-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes I3d950e72,Id315a8fe,Ib62e6e9b,I1d0475b2 into integration
* changes:
  perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
  perf(psci): get PMF timestamps with no cache flushes if possible
  perf(amu): greatly simplify AMU context management
  perf(mpmm): greatly simplify MPMM enablement

# 0a580b51 | 15-Nov-2024 | Boyan Karatotev <boyan.karatotev@arm.com>
perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
SVE and SME aren't enabled symmetrically for all worlds, but EL3 needs to context switch them nonetheless. Previously, this had to happen by writing the enable bits just before reading/writing the relevant context. But since the introduction of root context, this need not be the case. We can have these enables always be present for EL3 and save on some work (and ISBs!) on every context switch.
We can also hoist ZCR_EL3 to a never changing register, as we set its value to be identical for every world, which happens to be the one we want for EL3 too.
Change-Id: I3d950e72049a298008205ba32f230d5a5c02f8b0 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
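A sketch of the resulting model, assuming a write_zcr_el3() system-register wrapper (declared here only for illustration): ZCR_EL3 is written once with the maximum LEN during EL3 initialisation instead of being saved and restored on every world switch:

    #include <stdint.h>

    #define ZCR_EL3_LEN_MAX   0xfULL   /* requested VL = (LEN + 1) * 128 bits, so 0xf -> 2048 */

    /* Assumed MSR wrapper for ZCR_EL3. */
    extern void write_zcr_el3(uint64_t val);

    /* Illustrative one-time initialisation: every world (including EL3 itself)
     * uses the same value, so no per-switch save/restore or extra ISB is needed. */
    static void zcr_el3_static_init(void)
    {
            write_zcr_el3(ZCR_EL3_LEN_MAX);
    }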

# 95620113 | 31-Oct-2023 | Manish Pandey <manish.pandey2@arm.com>
Merge "refactor(cm): move EL3 registers to global context" into integration

# 461c0a5d | 18-Jul-2023 | Elizabeth Ho <elizabeth.ho@arm.com>
refactor(cm): move EL3 registers to global context
Currently, EL3 context registers are duplicated per-world per-cpu. Some registers have the same value across all CPUs, so this patch moves these registers out into a per-world context to reduce memory usage.
Change-Id: I91294e3d5f4af21a58c23599af2bdbd2a747c54a Signed-off-by: Elizabeth Ho <elizabeth.ho@arm.com> Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
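As an illustration of the data-structure change (type and field names are indicative, not necessarily TF-A's exact ones), registers whose value is identical on every CPU move out of the per-CPU context into a small per-world array:

    #include <stdint.h>

    #define CPU_CONTEXT_NUM   2U   /* secure and non-secure, for this sketch */

    /* One copy per world, shared by all CPUs, instead of one copy per CPU
     * per world inside the per-CPU context. */
    typedef struct per_world_context {
            uint64_t ctx_cptr_el3;
            uint64_t ctx_zcr_el3;
    } per_world_context_t;

    static per_world_context_t per_world_context[CPU_CONTEXT_NUM];

    static inline per_world_context_t *get_per_world_context(unsigned int security_state)
    {
            return &per_world_context[security_state];
    }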

# a2d43637 | 17-Jul-2023 | Manish Pandey <manish.pandey2@arm.com>
Merge changes from topic "bk/context_refactor" into integration
* changes:
  refactor(amu): separate the EL2 and EL3 enablement code
  refactor(cpufeat): separate the EL2 and EL3 enablement code

# 60d330dc | 16-Feb-2023 | Boyan Karatotev <boyan.karatotev@arm.com>
refactor(cpufeat): separate the EL2 and EL3 enablement code
Combining the EL2 and EL3 enablement code means it must be called at el3_exit, which is the only place with enough context to decide what needs to be set. Decouple them so that they can be called from elsewhere.
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com> Change-Id: I147764c42771e7d4100699ec8fae98dac0a505c0
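The shape of the split, sketched with indicative prototypes (not the exact TF-A signatures): the EL3-side function only touches the saved trap/enable bits, while the EL2-side function programs EL2 registers for worlds without their own EL2 software, so each can be called where it makes sense:

    #include <stdint.h>

    struct cpu_context;   /* opaque saved-context type for this sketch */

    /* EL3 side: update the enable/trap bits in the saved EL3 state; can be
     * called wherever the world's context is being set up. */
    void sve_enable_el3(struct cpu_context *ctx);

    /* EL2 side: initialise EL2 registers (e.g. ZCR_EL2) for a world that does
     * not run its own EL2 software; only meaningful on the exit-to-lower-EL path. */
    void sve_init_el2_unused(void);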

# 6d41f123 | 29-Mar-2023 | Manish Pandey <manish.pandey2@arm.com>
Merge changes from topic "jc/cpu_feat" into integration
* changes:
  feat(cpufeat): enable FEAT_SVE for FEAT_STATE_CHECKED
  feat(cpufeat): enable FEAT_SME for FEAT_STATE_CHECKED

# 2b0bc4e0 | 07-Mar-2023 | Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
feat(cpufeat): enable FEAT_SVE for FEAT_STATE_CHECKED
Add support for runtime detection (ENABLE_SVE_FOR_NS=2), by splitting sve_supported() into an ID register reading function and a second function to report the support status. That function considers both build time settings and runtime information (if needed), and is used before we do SVE specific setup.
Change the FVP platform default to the now supported dynamic option (=2), so the right decision can be made by the code at runtime.
Change-Id: I1caaba2216e8e2a651452254944a003607503216 Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
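A sketch of the split under the usual FEAT_STATE convention (0 = disabled, 1 = always present, 2 = checked at runtime); ID_AA64PFR0_EL1.SVE is bits [35:32], and the ID-register accessor is declared here only for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    #define ID_AA64PFR0_SVE_SHIFT   32U
    #define ID_AA64PFR0_SVE_MASK    0xfULL

    /* Assumed MRS wrapper for ID_AA64PFR0_EL1. */
    extern uint64_t read_id_aa64pfr0_el1(void);

    /* Raw ID-register read: does the hardware implement SVE at all? */
    static bool feat_sve_present(void)
    {
            return ((read_id_aa64pfr0_el1() >> ID_AA64PFR0_SVE_SHIFT) &
                    ID_AA64PFR0_SVE_MASK) != 0U;
    }

    /* Build-time policy combined with the runtime check. */
    static bool is_feat_sve_supported(void)
    {
    #if ENABLE_SVE_FOR_NS == 0
            return false;
    #elif ENABLE_SVE_FOR_NS == 1
            return true;
    #else
            return feat_sve_present();
    #endif
    }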

# 1631f9c7 | 09-Aug-2022 | Olivier Deprez <olivier.deprez@arm.com>
Merge "feat(sve): support full SVE vector length" into integration

# bebcf27f | 20-Apr-2022 | Mark Brown <broonie@kernel.org>
feat(sve): support full SVE vector length
Currently the SVE code hard-codes a maximum vector length of 512 bits when configuring SVE, rather than the architecturally supported maximum. While this is fine for current physical implementations, the architecture allows for vector lengths up to 2048 bits, and emulated implementations generally allow any length up to this maximum.
Since there may be system-specific reasons to limit the maximum vector length, make the limit configurable, defaulting to the architectural maximum. The default should be suitable for most implementations, since the hardware will limit the actual vector length selected to what is physically supported in the system.
Signed-off-by: Mark Brown <broonie@kernel.org> Change-Id: I22c32c98a81c0cf9562411189d8a610a5b61ca12
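The encoding behind this, as a sketch (the build-option name here is an assumption): ZCR_ELx.LEN requests a vector length of (LEN + 1) * 128 bits, so the old 512-bit cap corresponds to LEN = 3 and the 2048-bit architectural maximum to LEN = 15:

    #include <stdint.h>

    /* Assumed build option: maximum vector length in bits, defaulting to the
     * architectural maximum; platforms may lower it. */
    #ifndef SVE_VECTOR_LEN
    #define SVE_VECTOR_LEN  2048U
    #endif

    /* ZCR_ELx.LEN requests (LEN + 1) * 128 bits:
     *   512-bit cap  -> LEN = 3  (the old hard-coded value)
     *   2048-bit cap -> LEN = 15 (the architectural maximum)
     * Hardware clamps the effective VL to what it actually implements. */
    #define ZCR_LEN_VALUE   ((SVE_VECTOR_LEN / 128U) - 1U)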

# 3015267f | 12-Nov-2021 | Manish Pandey <manish.pandey2@arm.com>
Merge "feat(sme): enable SME functionality" into integration

# dc78e62d | 08-Jul-2021 | johpow01 <john.powell@arm.com>
feat(sme): enable SME functionality
This patch adds two new compile time options to enable SME in TF-A: ENABLE_SME_FOR_NS and ENABLE_SME_FOR_SWD for use in non-secure and secure worlds respectively. Setting ENABLE_SME_FOR_NS=1 will enable SME for non-secure worlds and trap SME, SVE, and FPU/SIMD instructions in secure context. Setting ENABLE_SME_FOR_SWD=1 will disable these traps, but support for SME context management does not yet exist in SPM so building with SPD=spmd will fail.
The existing ENABLE_SVE_FOR_NS and ENABLE_SVE_FOR_SWD options cannot be used with SME as it is a superset of SVE and will enable SVE and FPU/SIMD along with SME.
Signed-off-by: John Powell <john.powell@arm.com> Change-Id: Iaaac9d22fe37b4a92315207891da848a8fd0ed73
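How the two options map onto the EL3 trap bits described above, as an illustrative sketch (the bit positions are architectural: ESM is bit 12, TFP bit 10, EZ bit 8; the helper is not TF-A's actual code):

    #include <stdint.h>

    #define CPTR_EL3_ESM_BIT   (1ULL << 12)  /* 1: SME instructions do not trap to EL3 */
    #define CPTR_EL3_TFP_BIT   (1ULL << 10)  /* 1: FP/SIMD accesses trap to EL3        */
    #define CPTR_EL3_EZ_BIT    (1ULL << 8)   /* 1: SVE instructions do not trap to EL3 */

    /* Illustrative: derive the secure-world CPTR_EL3 image from the two
     * build options the commit introduces. */
    static uint64_t secure_cptr_el3(uint64_t cptr)
    {
    #if ENABLE_SME_FOR_NS && !ENABLE_SME_FOR_SWD
            /* SME only in the normal world: trap SME/SVE/FP in secure context. */
            cptr &= ~(CPTR_EL3_ESM_BIT | CPTR_EL3_EZ_BIT);
            cptr |= CPTR_EL3_TFP_BIT;
    #elif ENABLE_SME_FOR_SWD
            /* Secure world may also use SME: leave everything untrapped. */
            cptr |= CPTR_EL3_ESM_BIT | CPTR_EL3_EZ_BIT;
            cptr &= ~CPTR_EL3_TFP_BIT;
    #endif
            return cptr;
    }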

# a52c5247 | 26-Jul-2021 | Manish Pandey <manish.pandey2@arm.com>
Merge changes from topic "sve+amu" into integration
* changes:
  fix(plat/tc0): enable AMU extension
  fix(el3_runtime): fix SVE and AMU extension enablement flags

# 68ac5ed0 | 08-Jul-2021 | Arunachalam Ganapathy <arunachalam.ganapathy@arm.com>
fix(el3_runtime): fix SVE and AMU extension enablement flags
If SVE is enabled for both the Non-secure and Secure worlds along with the AMU extension, then the TAM_BIT in CPTR_EL3 gets set upon exit from bl31. This restricts access to the AMU register set in the normal world. This fix maintains consistency of both TAM_BIT and CPTR_EZ_BIT by saving and restoring the CPTR_EL3 register from the EL3 context.
Signed-off-by: Arunachalam Ganapathy <arunachalam.ganapathy@arm.com> Change-Id: Id76ce1d27ee48bed65eb32392036377716aff087
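A minimal sketch of the fix's mechanism, with assumed read/write wrappers and an illustrative context slot: CPTR_EL3 is captured in the EL3 context and written back on exit, so TAM_BIT and CPTR_EZ_BIT leave EL3 with the value the target world expects rather than whatever the last code path wrote:

    #include <stdint.h>

    #define CPTR_EL3_TAM_BIT   (1ULL << 30)  /* 1: AMU register accesses trap to EL3   */
    #define CPTR_EL3_EZ_BIT    (1ULL << 8)   /* 1: SVE instructions do not trap to EL3 */

    /* Assumed MRS/MSR wrappers for CPTR_EL3. */
    extern uint64_t read_cptr_el3(void);
    extern void write_cptr_el3(uint64_t val);

    /* Illustrative per-context slot for CPTR_EL3. */
    struct el3_state { uint64_t cptr_el3; };

    static void el3_state_save(struct el3_state *s)          { s->cptr_el3 = read_cptr_el3(); }
    static void el3_state_restore(const struct el3_state *s) { write_cptr_el3(s->cptr_el3);   }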

# 81a8b2da | 30-Jun-2021 | Olivier Deprez <olivier.deprez@arm.com>
Merge "feat(sve): enable SVE for the secure world" into integration

# 0c5e7d1c | 22-Mar-2021 | Max Shvetsov <maksims.svecovs@arm.com>
feat(sve): enable SVE for the secure world
Enables SVE support for the secure world via ENABLE_SVE_FOR_SWD. ENABLE_SVE_FOR_SWD defaults to 0 and has to be explicitly set by the platform. SVE is configured during initial setup and then uses the EL3 context save/restore routine to switch between SVE configurations for different contexts. The reset value of CPTR_EL3 is changed to be the most restrictive by default.
Signed-off-by: Max Shvetsov <maksims.svecovs@arm.com> Change-Id: I889fbbc2e435435d66779b73a2d90d1188bf4116

# 9a207532 | 04-Jan-2019 | Antonio Niño Díaz <antonio.ninodiaz@arm.com>
Merge pull request #1726 from antonio-nino-diaz-arm/an/includes
Sanitise includes across codebase

# 09d40e0e | 14-Dec-2018 | Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Sanitise includes across codebase
Enforce full include path for includes. Deprecate old paths.
The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}
The reason for this change is that having a global namespace for includes isn't a good idea. It defeats one of the advantages of having folders and it introduces problems that are sometimes subtle (because you may not know the header you are actually including if there are two of them).
For example, this patch had to be created because two headers were called the same way: e0ea0928d5b7 ("Fix gpio includes of mt8173 platform to avoid collision."). More recently, this patch has had similar problems: 46f9b2c3a282 ("drivers: add tzc380 support").
This problem was introduced in commit 4ecca33988b9 ("Move include and source files to logical locations"). At that time, there weren't too many headers so it wasn't a real issue. However, time has shown that this creates problems.
Platforms that want to preserve the way they include headers may add the removed paths to PLAT_INCLUDES, but this is discouraged.
Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
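In practice the rule looks like this, taking a header under include/lib/el3_runtime as an example:

    /* Before: global include namespace; ambiguous if two directories
     * ship a header with the same name. */
    #include <context_mgmt.h>

    /* After: the full path below include/ makes the chosen header explicit. */
    #include <lib/el3_runtime/context_mgmt.h>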

# 2eedba9a | 30-Oct-2018 | Antonio Niño Díaz <antonio.ninodiaz@arm.com>
Merge pull request #1651 from antonio-nino-diaz-arm/an/rand-misra
Fix some MISRA defects

# 40daecc1 | 25-Oct-2018 | Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Fix MISRA defects in extension libs
No functional changes.
Change-Id: I2f28f20944f552447ac4e9e755493cd7c0ea1192 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>

# f461da2a | 27-Feb-2018 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #1272 from dp-arm/dp/extensions
Refactor SPE/SVE code and fix some bugs in AMUv1 on AArch32