| a74b0094 | 16-Apr-2025 |
Govindraj Raja <govindraj.raja@arm.com> |
fix(cpus): add missing add_erratum_entry
Errata 1286807 and 1165522 are missing an add_erratum_entry, which is required by the Errata ABI to report whether the errata are implemented or not.
Change-Id: I19a484c73ac31a90b3ff1b219f647c88a1c81c6e Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
|
| 106ca0cb | 10-Apr-2025 |
Govindraj Raja <govindraj.raja@arm.com> |
chore(cpus): remove in-order checks
Remove the runtime in-order checks for errata and CVEs. Fix the out-of-order issues in CPU files found by running the CPU erratum and CVE static checker script on the entire `lib/cpus/aarch64/` folder.
Change-Id: Iee5a8cb49834e9f35c6c2f2a84065430ca1ec8a6 Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
|
| ee656609 | 16-Apr-2025 |
André Przywara <andre.przywara@arm.com> |
Merge changes Id942c20c,Idd286bea,I8917a26e,Iec8c3477,If3c25dcd, ... into integration
* changes:
  feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED
  perf(cpufeat): centralise PAuth key saving
  refactor(cpufeat): convert FEAT_PAuth setup to C
  refactor(cpufeat): prepare FEAT_PAuth for FEATURE_DETECTION
  chore(cpufeat): remove PAuth presence checks
  feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED
|
| 8d9f5f25 | 02-Apr-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED
FEAT_PAuth is the second to last feature to be a boolean choice - it's either unconditionally compiled in and must be present in hardware or it's not compiled in. FEAT_PAuth is architected to be backwards compatible - a subset of the branch guarding instructions (pacia/autia) execute as NOPs when PAuth is not present. That subset is used with `-mbranch-protection=standard` and -march pre-8.3. This patch adds the necessary logic to also check accesses of the non-backward compatible registers and allow a fully checked implementation.
Note that checked support requires -march to be pre-8.3, as otherwise the compiler will include branch-protection instructions that are not NOPs without PAuth (e.g. retaa), which cannot be checked.
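As a rough illustration (not code from the patch) of what FEAT_STATE_CHECKED buys: the non-backwards-compatible PAuth state is only touched behind a runtime check. The is_feat_pauth_supported() helper comes from the preparation patch in this series; the enable hook name below is purely illustrative.

#include <arch_features.h>	/* assumed TF-A header providing is_feat_pauth_supported() */

void pauth_enable_el3(void);	/* illustrative enable hook, not from the patch */

static void enable_pauth_if_present(void)
{
	/* With ENABLE_PAUTH in the checked state, the decision moves from
	 * build time to runtime. */
	if (is_feat_pauth_supported()) {
		/* Only now is it safe to touch the non-backwards-compatible
		 * registers (the APIA keys, SCTLR_EL3.EnIA). */
		pauth_enable_el3();
	}
}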
Change-Id: Id942c20cae9d15d25b3d72b8161333642574ddaa Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 51997e3d | 02-Apr-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
perf(cpufeat): centralise PAuth key saving
prepare_el3_entry() is meant to be the one-stop shop for all the context we must fiddle with to enter EL3 proper. However, PAuth is the one exception, happening right after. Absorb it into prepare_el3_entry(), handling the BL1/BL31 difference.
This is also a good time to move the key saving into the enable function, so it too is centralised. With this it becomes apparent that saving the keys just before CPU_SUSPEND is redundant, as they will be reinitialised when the core wakes up.
Note that the key loading, now in save_gp_pmcr_pauth_regs, does not end in an isb; the effects of the key change are not needed until the isb in the caller.
Change-Id: Idd286bea91140c106ab4c933c5c44b0bc2050ca2 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| f8138056 | 02-Apr-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor(cpufeat): convert FEAT_PAuth setup to C
An oversimplified view of FEAT_PAuth is that it's a symmetric encryption of the LR. PAC instructions execute as NOPs until explicitly turned on. So in a function that turns PAuth on, the signing would have executed as a NOP and the authentication will encrypt the address, leading to a failure. That's why enablement is in assembly - we have full control of when pointer authentications happen.
However, assembly is hard to read, is opaque to the compiler for optimisations, and we need to call into C anyway for the platform hook to get the key. So convert it to C. We can instruct the compiler to not generate branch protection for the enable function only and as long as the caller doesn't do branch protection (and all callers are entrypoints written in assembly) everything will work.
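A minimal sketch of the resulting C shape, assuming GCC/Clang's AArch64 `target("branch-protection=none")` function attribute and TF-A-style names (plat_init_apkey(), the apiakey/sctlr accessors and SCTLR_EnIA_BIT are taken as given here, not quoted from the patch):

/* Compiled without PAC instructions so that turning the keys on mid-function
 * cannot break this function's own return-address authentication. */
__attribute__((target("branch-protection=none")))
void pauth_enable_el3(void)
{
	uint128_t key = plat_init_apkey();	/* platform hook supplying the key */

	write_apiakeylo_el1((uint64_t)key);
	write_apiakeyhi_el1((uint64_t)(key >> 64U));

	/* Enable instruction-address authentication at EL3; the assembly
	 * entrypoints that call this provide the synchronising isb. */
	write_sctlr_el3(read_sctlr_el3() | SCTLR_EnIA_BIT);
}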
Change-Id: I8917a26e1293033c910e3058664e3ca9207359b7 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| b0b7609e | 01-Apr-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor(cpufeat): prepare FEAT_PAuth for FEATURE_DETECTION
Convert the old style is_armv8_3_pauth_present() to the new style is_feat_pauth_{present, supported}() helpers and hook FEATURE_DETECTION into it. This is in preparation for converting FEAT_PAuth to FEAT_STATE.
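For illustration only, a present() helper of this style checks the address-authentication fields of ID_AA64ISAR1_EL1; the field layout is from the Arm ARM and the accessor follows TF-A's sysreg helper naming, neither is quoted from the patch:

#include <stdbool.h>
#include <stdint.h>

static inline bool is_feat_pauth_present(void)
{
	uint64_t isar1 = read_id_aa64isar1_el1();	/* TF-A-style sysreg accessor */

	/* APA is bits [7:4], API bits [11:8]; a non-zero value in either
	 * means an address authentication algorithm is implemented. */
	return (((isar1 >> 4U) & 0xfULL) | ((isar1 >> 8U) & 0xfULL)) != 0ULL;
}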
Change-Id: Iec8c3477fafb2cdae67d39ae4da2cca76a67511a Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 31ddca40 | 14-Apr-2025 |
Manish V Badarkhe <manish.badarkhe@arm.com> |
Merge "feat(psci): remove cpu context init by index" into integration |
| 10ecd580 | 26-Mar-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED
Introduce the is_feat_bti_{supported, present}() helpers and replace checks for ENABLE_BTI with them. Also factor the setting of SCTLR_EL3.BT out of the PAuth enablement and place it in the respective entrypoints where we initialise SCTLR_EL3. This makes PAuth self-contained and SCTLR_EL3 initialisation centralised.
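A sketch of the supported() half of the pair, assuming TF-A's usual FEAT_STATE convention for the ENABLE_BTI build option (0 = disabled, 1 = always, 2 = checked at runtime):

#include <stdbool.h>

bool is_feat_bti_present(void);	/* reads the BT field (bits [3:0]) of ID_AA64PFR1_EL1 */

static inline bool is_feat_bti_supported(void)
{
	if (ENABLE_BTI == 0) {
		return false;		/* compiled out */
	}
	if (ENABLE_BTI == 1) {
		return true;		/* build mandates the feature */
	}
	return is_feat_bti_present();	/* checked state: ask the hardware */
}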
Change-Id: I0c0657ff1e78a9652cd2cf1603478283dc01f17b Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 25002a00 | 11-Apr-2025 |
Olivier Deprez <olivier.deprez@arm.com> |
Merge "perf(libc): use builtin implementations where possible" into integration |
| ef738d19 | 21-Jun-2024 |
Manish Pandey <manish.pandey2@arm.com> |
feat(psci): remove cpu context init by index
Currently, the calling core (meaning the core which received the call to CPU_ON or the powerdown path of CPU_SUSPEND on the same core) is in charge of initialising the context for the waking core (the warmboot entrypoint for both). This is convenient because the calling core can write the context while in coherency and the waking core will only need the context after it has entered coherency. This avoids any cache maintenance and makes communication simple.
However, this has 3 main problems: a) asymmetric feature support is problematic - the calling core has no way of knowing the feature set of the waking core. If the two diverge, the architectural feature discovery via ID registers breaks down. We've thus far "fixed" this on a case-by-case basis, which doesn't scale and introduces redundancy.
b) powerdown abandon (pabandon) introduces a contradiction - the calling core has to initialise the context for when the core wakes up, but should the core not power down it needs its old context intact. The only way to work around this is by keeping two copies of the context, which incurs a runtime and memory overhead.
c) cm_prepare_el3_exit[_ns]() doesn't have access to the entrypoint but needs it to make initialisation decisions. We can infer some of this from registers that have already been written but this is awkwardly limiting for what we can do. This also necessitates the split from the context initialisation.
We can solve all three by making each core fully own its own context. The calling core then writes entrypoint information and nothing else. The waking core then initialises its own context as it sees fit, with full knowledge of the whole picture.
The only tricky bit is cache coherency - the waking core has to be able to coherently observe its new entrypoint. Calling cores will write to the shared region with coherent caches on. If we make sure to read the context only after the waking core has entered coherency, then we can avoid cache operations and let hardware handle everything.
We can skip the spsr check for FEAT_TCR2 as it doesn't make a difference. We can also skip enabling it twice from generic code.
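Sketched with hypothetical helper names (only cm_init_my_context() is an existing TF-A function; the entrypoint store/load helpers below stand in for the shared-region handoff described above), the division of responsibility becomes:

/* Calling core, CPU_ON / CPU_SUSPEND path: publish only the entrypoint. */
void psci_publish_warmboot_ep(unsigned int target_idx,
			      const entry_point_info_t *ep)
{
	store_warmboot_ep(target_idx, ep);	/* hypothetical shared-region write */
}

/* Waking core, warmboot path, once it has entered coherency: build its own
 * context with full knowledge of its own feature set. */
void psci_warmboot_context_init(unsigned int my_idx)
{
	cm_init_my_context(load_warmboot_ep(my_idx));	/* hypothetical read */
}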
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com> Signed-off-by: Manish Pandey <manish.pandey2@arm.com> Change-Id: I86e7fe8b698191fc3b469e5ced1fd010f8754b0e
|
| 382ba743 | 07-Apr-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
fix(psci): initialise variables
When building with LTO, GCC is uncomfortable that these variables are uninitialised and complains that they may be used before they are initialised. Set them to 0 as there are plenty of asserts to make sure these branches cannot be taken.
Change-Id: Ic1f05e77252e93bdafab033dcb24ad42856ebf9a Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 23302d4a | 08-Apr-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
fix(xlat): remove xlat_mpu
The only platform to use this is fvp_r. As this platform is now gone, so is the need for this library. Support for it never went out of "experimental" so it does not appear to be finished.
Change-Id: I76499b92ca4368651330f17dc80803991158cc36 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| ba9e6a34 | 08-Apr-2025 |
Andre Przywara <andre.przywara@arm.com> |
feat(cpufeat): add support for PMUv3p9
Armv8.9 introduced the FEAT_PMUV3P9 extension, which allows finer grained control over EL0 usage of PMU registers. This is controlled by the new PMUACR_EL1 system register, access to which is guarded by the MDCR_EL3.EnPM2 bit. We should set this bit to avoid a trap into EL3 when lower ELs access this register.
Add the required bits and pieces to make this feature usable:
- Add the CPUID and MDCR_EL3 bit definitions associated with FEAT_PMUV3P9.
- Extend the existing PMU feature check to allow v9 now as well. This is fine since we don't context-switch PMU registers at all, so we don't need to do much except flip the MDCR bit:
- Set the EnPM2 bit in pmuv3_enable, so the feature is usable in the non-secure world (and there only).
- Handle the MDCR bit for the ARCH_FEATURE_AVAILABILITY feature.
Please note that MDCR_EL3.EnPM2 guards other system registers as well, for other PMU related new architecture features.
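The EL3 side of the change amounts to setting a single MDCR_EL3 bit. A sketch using TF-A's mdcr_el3 accessors, with the bit macro name assumed rather than quoted from the patch:

#include <arch_helpers.h>	/* read_mdcr_el3()/write_mdcr_el3() accessors */

/* MDCR_EnPM2_BIT stands for the bit definition added by this patch (name assumed). */
static void pmuv3p9_allow_lower_el_access(void)
{
	u_register_t mdcr = read_mdcr_el3();

	/* Stop accesses to PMUACR_EL1 (and the other registers gated by
	 * MDCR_EL3.EnPM2) from trapping to EL3. */
	write_mdcr_el3(mdcr | MDCR_EnPM2_BIT);
}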
Change-Id: I288ca15f5c9efd336c64477d1c6fe9543613e238 Signed-off-by: Andre Przywara <andre.przywara@arm.com>
|
| 2cadf21b | 12-Mar-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
fix(plat): remove fvp_r
The platform has not been maintained for some years and is generally broken. Remove it to avoid confusion.
Change-Id: I93d832d51e114689ec79969af5d96071a03f4a88 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 34d7f196 | 17-Mar-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
perf(libc): use builtin implementations where possible
When conditions are right, e.g. a small memcpy of a known size and alignment, the compiler may know of a sequence that is optimal for the given constraints and inline it. If the compiler doesn't find one, it will emit a call to the generic function (in the libc) which will implement this in the most generic and unconstrained manner. That generic function is rarely the most optimal when constraints are known.
So give the compiler a chance to do this. Replace calls to libc functions that have builtins with the builtin form, and keep the generic implementation for when the compiler decides to emit a call anyway.
An example of this in action is the use of FEAT_MOPS. When the compiler is aware of the feature (-march=armv8.8-a), it will emit the three MOPS instructions instead of calls to our memcpy() and memset() implementations.
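One possible shape of the redirection (a sketch, not the exact TF-A mechanism): map the libc names onto the compiler builtins so small, well-constrained calls can be inlined while everything else still reaches the generic implementation:

#include <stddef.h>

#define memcpy(dst, src, len)	__builtin_memcpy((dst), (src), (len))
#define memset(dst, val, len)	__builtin_memset((dst), (val), (len))

struct frame {
	unsigned char hdr[8];
};

static void copy_header(struct frame *out, const struct frame *in)
{
	/* Known small size and alignment: the compiler can inline this as a
	 * single load/store pair. For sizes it cannot prove small, with
	 * -march=armv8.8-a it can emit the three FEAT_MOPS instructions
	 * instead of calling the generic memcpy(). */
	memcpy(out->hdr, in->hdr, sizeof(out->hdr));
}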
Change-Id: I9860cfada1d941b613ebd4da068e9992c387952e Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 10639cc9 | 03-Apr-2025 |
Manish V Badarkhe <manish.badarkhe@arm.com> |
Merge changes from topic "xlnx_fix_gen_uniq_var" into integration
* changes:
  fix(psci): avoid altering function parameters
  fix(services): avoid altering function parameters
  fix(common): ignore the unused function return value
  fix(psci): modify variable conflicting with external function
  fix(delay-timer): create unique variable name
|
| 26cc2854 | 24-Apr-2024 |
Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com> |
fix(libc): typecast operands to match data type
This corrects the MISRA violation C2012-10.3: The value of an expression shall not be assigned to an object with a narrower essential type or of a different essential type category. Conditions are now explicitly checked against 0U, with a 'U' suffix appended and operands typecast so the comparison is unsigned.
In spite of the generic guidance for 3rd party libraries (https://trustedfirmware-a.readthedocs.io/en/latest/process/coding-style.html#misra-compliance), libc already contains some MISRA-C fixes done by commit d5ccb754af86 ("libc: Fix some MISRA defects") in 2021. Also, the history does not make it clear where libc originally came from, so there is no way to fix the violation in the base library.
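An illustrative before/after for rule 10.3, not the actual libc hunk:

#include <stdbool.h>
#include <stdint.h>

/* Before: `if (len & 1)` mixes essential types and relies on an implicit
 * boolean conversion. After: operate and compare in unsigned. */
static bool is_odd(uint32_t len)
{
	return (len & 1U) != 0U;
}

/* Before: `uint8_t f = 1 << shift;` assigns a signed int expression to a
 * narrower unsigned object. After: unsigned throughout, explicit narrowing. */
static uint8_t flag_bit(unsigned int shift)
{
	return (uint8_t)(1U << shift);
}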
Change-Id: Ibad03a758001b3a7779b488ee5e19c9ceee51134 Signed-off-by: Nithin G <nithing@amd.com> Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
|
| 60e5aee1 | 25-Apr-2024 |
Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com> |
fix(libc): add missing curly braces
This corrects the MISRA violation C2012-15.6: The body of an iteration-statement or a selection-statement shall be a compound-statement. The statement bodies are now enclosed within curly braces.
In spite of the generic guidance for 3rd party libraries (https://trustedfirmware-a.readthedocs.io/en/latest/process/coding-style.html#misra-compliance), libc already contains some MISRA-C fixes done by commit d5ccb754af86 ("libc: Fix some MISRA defects") in 2021. Also, the history does not make it clear where libc originally came from, so there is no way to fix the violation in the base library.
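An illustrative example of the rule 15.6 style of fix, not the actual libc hunk:

#include <stddef.h>

static size_t str_len(const char *s)
{
	size_t n = 0U;

	/* Before: `while (s[n] != '\0') n++;` left the loop body as a bare
	 * statement. After: the body is a compound statement in braces. */
	while (s[n] != '\0') {
		n++;
	}

	return n;
}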
Change-Id: I9007cfc8019e885cd294dbbd68e6b3f65a6071ba Signed-off-by: Nithin G <nithing@amd.com> Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
|
| 5141de14 | 16-Jan-2025 |
Per Larsen <perlarsen@google.com> |
fix(build): enable fp during fp save/restore
Newer compilers such as clang/LLVM 19 flag uses of floating-point instructions when the architecture does not allow for them. We can temporarily enable the use of floating-point operations where it is safe and necessary for the build to succeed.
Change-Id: I1a832f846915c35792684906c94aef81c1f72d63 Signed-off-by: Andrei Homescu <ahomescu@google.com> Signed-off-by: Per Larsen <perlarsen@google.com>
|
| 277d7dd6 | 22-Apr-2024 |
Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com> |
fix(libc): explicitly check operators precedence
This corrects the MISRA violation C2012-12.1: The precedence of operators within expressions should be made explicit. The subexpressions are now enclosed in parentheses to make the intended grouping explicit.
In spite of the generic guidance for 3rd party libraries (https://trustedfirmware-a.readthedocs.io/en/latest/process/coding-style.html#misra-compliance), libc already contains some MISRA-C fixes done by commit d5ccb754af86 ("libc: Fix some MISRA defects") in 2021. Also, the history does not make it clear where libc originally came from, so there is no way to fix the violation in the base library.
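An illustrative example of the rule 12.1 style of fix, not the actual libc hunk:

#include <stdint.h>

static uint32_t scaled_sum(uint32_t a, uint32_t b, uint32_t c)
{
	/* Before: `return a + b * c;` relied on '*' binding tighter than '+'.
	 * After: the intended grouping is written out. */
	return a + (b * c);
}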
Change-Id: Ic985b418ecae6f61a0be10114deb6076caaa6e5f Signed-off-by: Nithin G <nithing@amd.com> Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
|
| ac9f4b4d | 25-Mar-2025 |
Govindraj Raja <govindraj.raja@arm.com> |
fix(cpus): remove errata setting PF_MODE to conservative
The erratum titled “Disabling of data prefetcher with outstanding prefetch TLB miss might cause a deadlock” should not be handled within TF-A. The current workaround attempts to follow option 2 but misapplies it. Specifically, it statically sets PF_MODE to conservative, which is not the recommended approach. According to the erratum documentation, PF_MODE should be configured in conservative mode only when the data prefetcher is disabled; this is not done in TF-A, and thus the workaround is not needed in TF-A.
The static setting of PF_MODE in TF-A does not correctly address the erratum and may introduce unnecessary performance degradation on platforms that adopt it without fully understanding its implications.
To prevent incorrect or unintended use, the current implementation of this erratum workaround should be removed from TF-A and not adopted by platforms.
List of impacted CPUs with erratum numbers and references to the SDEN:
Cortex-A78  - 2132060 - https://developer.arm.com/documentation/SDEN1401784/latest
Cortex-A78C - 2132064 - https://developer.arm.com/documentation/SDEN-2004089/latest
Cortex-A710 - 2058056 - https://developer.arm.com/documentation/SDEN-1775101/latest
Cortex-X2   - 2058056 - https://developer.arm.com/documentation/SDEN-1775100/latest
Cortex-X3   - 2070301 - https://developer.arm.com/documentation/SDEN2055130/latest
Neoverse-N2 - 2138953 - https://developer.arm.com/documentation/SDEN-1982442/latest
Neoverse-V1 - 2108267 - https://developer.arm.com/documentation/SDEN-1401781/latest
Neoverse-V2 - 2331132 - https://developer.arm.com/documentation/SDEN-2332927/latest
Change-Id: Icf4048508ae070b2df073cc46c63be058b2779df Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
|
| 23775427 | 27-Mar-2025 |
Govindraj Raja <govindraj.raja@arm.com> |
Merge changes from topic "xlnx_fix_gen_datatype_cast" into integration
* changes:
  fix(psci): add const qualifier
  fix(el3-runtime): add const qualifier
  fix(bl31): add const qualifier
  fix(console): typecast expressions to match data type
  fix(arm-drivers): typecast expressions to match data type
  fix(arm-drivers): align essential type categories
  fix(arm-drivers): typecast expression to match data type
|
| 518b278b | 24-Mar-2025 |
Manish Pandey <manish.pandey2@arm.com> |
Merge changes from topic "hm/handoff-aarch32" into integration
* changes:
  refactor(arm): simplify early platform setup functions
  feat(bl32): enable r3 usage for boot args
  feat(handoff): add lib to sp-min sources
  feat(handoff): add 32-bit variant of SRAM layout
  feat(handoff): add 32-bit variant of ep info
  fix(aarch32): avoid using r12 to store boot params
  fix(arm): reinit secure and non-secure tls
  refactor(handoff): downgrade error messages
|
| b78c307c | 21-Mar-2025 |
Bipin Ravi <bipin.ravi@arm.com> |
Merge changes from topic "ar/cvereorder" into integration
* changes:
  chore(cpus): rearrange the errata and cve in order in Cortex-X4
  chore(cpus): rearrange the errata and cve in order in Neoverse-V3
|