| b0c7709a | 20-Jan-2026 |
Boyan Karatotev <boyan.karatotev@arm.com> |
fix(cpufeat): give `stxr` distinct src and ret registers
The `stxr` instruction can cause UNDEF exceptions if its source and status-return operands are allocated to the same register. Add an early-clobber constraint to tell the compiler not to do that.
Change-Id: I1d2752839e17b847f8981169eaf2798ee8fd100a Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
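A minimal sketch of the pattern being fixed, assuming a hypothetical atomic-add helper (AArch64 only; names are illustrative, not the actual TF-A code). The early-clobber `&` markers force the outputs into registers distinct from every input, so the `stxr` status register can never alias the stored value:

```c
#if defined(__aarch64__)
#include <stdint.h>

/* Hypothetical ldxr/stxr loop: "=&r" (early-clobber) on both
 * outputs forces them into registers distinct from all inputs,
 * so the stxr status register cannot alias the stored value. */
static inline void atomic_add_sketch(uint64_t *addr, uint64_t inc)
{
	uint64_t val;
	uint32_t status;

	do {
		__asm__ volatile(
			"ldxr  %0, [%2]\n\t"
			"add   %0, %0, %3\n\t"
			"stxr  %w1, %0, [%2]"
			: "=&r"(val), "=&r"(status)
			: "r"(addr), "r"(inc)
			: "memory");
	} while (status != 0);
}
#endif
```

Without the `&`, the compiler is free to reuse an input's register for an output that it believes is written only after all inputs are consumed, which is exactly the overlap the commit forbids.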
|
| 867fe8ec | 20-Jan-2026 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor(cpus): export midr_match to a more global location
It's a useful little helper that is horribly underused. Put it in common code so that we can use it in future.
Change-Id: I635c581644b07a6ca5ff68bb4fa475c4052da691 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
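As an illustration, such a helper typically masks out the variant and revision fields of MIDR_EL1 so that all steppings of a part compare equal (the mask value and names here are an assumption, not the exact TF-A definition):

```c
#include <stdbool.h>
#include <stdint.h>

/* MIDR_EL1 layout: Implementer [31:24], Variant [23:20],
 * Architecture [19:16], PartNum [15:4], Revision [3:0].
 * Keep implementer, architecture and part number; ignore
 * variant and revision so all steppings of a part match. */
#define MIDR_MATCH_MASK 0xff0ffff0u

static bool midr_match_sketch(uint32_t midr, uint32_t expected)
{
	return (midr & MIDR_MATCH_MASK) == (expected & MIDR_MATCH_MASK);
}
```

For example, two different revisions of the same part (only bits [23:20] and [3:0] differing) compare equal, while a different part number does not.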
|
| 553c24c3 | 07-Jul-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(cpufeat): enable FEAT_RAS for FEAT_STATE_CHECKED again
FEAT_RAS was originally converted to FEAT_STATE_CHECKED in 6503ff291. However, the ability to use it was removed with 970a4a8d8 by simply saying it impacts execution at EL3. That's true, but FEAT_STATE_CHECKED can still be allowed by being a bit clever about it.
First, the remainder of common code can be converted to use the is_feat_ras_supported() helper instead of the `#if FEATURE` pattern. There are no corner cases to consider there. The feature is either present (and appropriate action must be taken) or the feature is not (so we can skip RAS code).
A conscious choice is taken to check the RAS code in synchronize_errors despite it being in a hot path. Any fixed platform that seeks to be performant should be setting features to 0 or 1. Then, the SCTLR_EL3.IESB bit is always set if ENABLE_FEAT_RAS != 0 since we expect FEAT_IESB to be present if FEAT_RAS is (despite the architecture not guaranteeing it). If FEAT_RAS isn't present then we don't particularly care about the status of FEAT_IESB.
Second, platforms that don't set ENABLE_FEAT_RAS must continue to work. This is true out of the box with the is_feat_xyz_supported() helpers, as they make sure to fully disable code within them.
Third, platforms that do set ENABLE_FEAT_RAS=1 must continue to work. This is also true out of the box and no logical change is undertaken in common code.
Finally, ENABLE_FEAT_RAS is set to 2 on FVP. Having RAS implies that the whole handling machinery will be built in and registered as appropriate. However, when RAS is built in but not present in hardware, these registrations can still happen; they will simply never be invoked at runtime.
Change-Id: I949e648601dc0951ef9c2b217f34136b6ea4b3dc Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
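The tri-state feature pattern described above can be sketched as follows (names and the ID-register probe are stand-ins, not TF-A's actual code): 0 elides all RAS code, 1 assumes it is present, and 2 (FEAT_STATE_CHECKED) defers to a runtime hardware check.

```c
#include <stdbool.h>

/* Hypothetical stand-in for the ENABLE_FEAT_RAS build option:
 * 0 = disabled, 1 = always present, 2 = FEAT_STATE_CHECKED
 * (probe the hardware at runtime). */
#ifndef ENABLE_FEAT_RAS
#define ENABLE_FEAT_RAS 2
#endif

static bool read_feat_ras_id_field(void)
{
	/* Stub for the ID-register read; assumed to report
	 * "present" for this sketch. */
	return true;
}

static bool is_feat_ras_supported(void)
{
	if (ENABLE_FEAT_RAS == 0)
		return false; /* constant: compiler elides all RAS code */
	if (ENABLE_FEAT_RAS == 1)
		return true;  /* constant: no runtime check needed */
	return read_feat_ras_id_field(); /* checked: ask the hardware */
}
```

Because the 0 and 1 cases collapse to compile-time constants, a fixed platform pays no runtime cost even when the helper sits in a hot path like synchronize_errors.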
|
| 040ab75d | 19-Jan-2026 |
Manish Pandey <manish.pandey2@arm.com> |
Merge "feat(cpus): add support for Rosillo cpu" into integration |
| 96c0c13d | 19-Jan-2026 |
Manish Pandey <manish.pandey2@arm.com> |
Merge "fix(cpufeat): enable access to extended BRPs/WRPs" into integration |
| d62f795c | 19-Jan-2026 |
Manish Pandey <manish.pandey2@arm.com> |
Merge changes I215a84bd,I83710d84 into integration
* changes: perf(cpus): reduce the footprint of errata reporting refactor(cpus): make errata reporting more generic |
| c9017cbc | 05-Jan-2026 |
Govindraj Raja <govindraj.raja@arm.com> |
feat(cpus): add support for Rosillo cpu
Add basic CPU library code to support Rosillo CPU
Change-Id: I0e11e511511562297e4dccd2745842ebcfa2bff4 Signed-off-by: Govindraj Raja <govindraj.raja@arm.com> |
| 5c1015b3 | 14-Jan-2026 |
Govindraj Raja <govindraj.raja@arm.com> |
Merge "fix(context-mgmt): actually clear MDCR_EL3 bits" into integration |
| a760277d | 13-Jan-2026 |
Bipin Ravi <bipin.ravi@arm.com> |
Merge "fix(debug): add debug log build option" into integration |
| d0650203 | 12-Dec-2025 |
Jaiprakash Singh <jaiprakashs@marvell.com> |
fix(debug): add debug log build option
When the log level is set to verbose, xlat prints a lot of translation table debug logs. These detailed logs keep printing for minutes and increase boot time. Also, not all users are interested in the xlat detail logs when verbose is on.
LOG_DEBUG is added so that the xlat detail logs are printed only when someone intentionally enables them.
Change-Id: I3308b49779a692bdce87fb6929c88fdcb713e628 Signed-off-by: Jaiprakash Singh <jaiprakashs@marvell.com>
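A sketch of the gating described, with a counter added so the effect is observable (macro and option names are illustrative, not TF-A's exact ones): the chatty xlat dumps compile away unless both verbose logging and the new switch are enabled.

```c
#include <stdio.h>

/* Hypothetical build options: keep VERBOSE logging, but gate the
 * very chatty xlat table dumps behind a separate LOG_DEBUG switch. */
#define LOG_LEVEL_VERBOSE 50
#define LOG_LEVEL         50 /* assumed: a verbose build */
#define LOG_DEBUG         0  /* xlat detail logs off by default */

static int xlat_debug_lines; /* counts emitted detail lines */

#if (LOG_LEVEL >= LOG_LEVEL_VERBOSE) && (LOG_DEBUG == 1)
#define XLAT_DETAIL(...) \
	do { printf(__VA_ARGS__); xlat_debug_lines++; } while (0)
#else
#define XLAT_DETAIL(...) ((void)0) /* compiled out: no boot-time cost */
#endif
```

With LOG_DEBUG left at 0, every `XLAT_DETAIL(...)` call site expands to nothing, so a verbose build no longer spends minutes printing translation-table dumps.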
|
| 3247828c | 02-Aug-2022 |
Manoj Kumar <manoj.kumar3@arm.com> |
fix(morello): avoid capability tag fault on data access
TF-A runtime service at EL3 switches the stack pointer from SP_EL3 to SP_EL0. This causes a capability tag fault when DDC_EL0 is zeroed out (purecap user space), as any data access computes its tag/permission against the DDC_EL0 value when SPSel is 0 and EL3 is in hybrid mode.
As a workaround, this patch creates a per-CPU context variable to store the DDC_EL0 value so that when the EL3 runtime is entered, DDC_EL0 is saved onto the stack. DDC_EL3 is then copied into DDC_EL0 after switching the SP to SP_EL0. Once the runtime finishes, during el3_exit, the saved DDC_EL0 is restored from the stack.
Signed-off-by: Selvarasu Ganesan <selvarasu.ganesan@arm.com> Signed-off-by: Manoj Kumar <manoj.kumar3@arm.com> Signed-off-by: Varshit Pandya <varshit.pandya@arm.com> Change-Id: I4e4010f0e20913cb4e35b58fb49a177bdf26feb1
|
| 6a548c34 | 02-Aug-2022 |
Manoj Kumar <manoj.kumar3@arm.com> |
feat(morello): add capability load/store/track support to MMU
Morello architecture adds additional bits to TCR_EL3 and uses the HWU bits of page/block descriptors to provision permission for loading, storing and tracking of valid capability tags.
This patch reserves bit 31 of the existing translation table attribute field which can be used by the user to enable capability load/store/track permission for a given memory region.
This patch also enables this permission for BL31 region.
Signed-off-by: Manoj Kumar <manoj.kumar3@arm.com> Signed-off-by: Varshit Pandya <varshit.pandya@arm.com> Change-Id: I1939c70aac3585969d74b0956529681e840d6f63
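A sketch of how such a reserved attribute bit might be consumed; the macro names are hypothetical, and only the "bit 31 of the attribute field" detail comes from the commit text:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative translation-table attribute flags. Bit 31 is the
 * newly reserved capability load/store/track permission; the other
 * names are stand-ins, not TF-A's actual MT_* definitions. */
#define MT_MEMORY    (0u)
#define MT_RW        (1u << 1)
#define MT_CAP_LDSTR (1u << 31) /* assumed name for the new bit */

static bool attr_allows_capabilities(uint32_t attr)
{
	return (attr & MT_CAP_LDSTR) != 0u;
}
```

A region such as BL31 would then be mapped with `MT_MEMORY | MT_RW | MT_CAP_LDSTR` to permit valid capability tags.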
|
| 27bc1386 | 02-Oct-2020 |
Manoj Kumar <manoj.kumar3@arm.com> |
feat(morello): add Morello capability enablement changes
This patch adds a build macro ENABLE_FEAT_MORELLO which when set will compile BL31 firmware with changes required to boot capability aware software.
It also adds helper functions in C and assembly to check whether Morello hardware is present and whether the Morello capability is enabled.
The CE field, bits [23:20] of ID_AA64PFR1_EL1, defines whether the Morello architecture is present: 0b0000 indicates that it is absent and 0b0001 indicates that it is present. Whether capabilities are enabled is then decided at runtime, gated by the ENABLE_FEAT_MORELLO build option.
Reference: https://developer.arm.com/documentation/ddi0606/latest/
Signed-off-by: Manoj Kumar <manoj.kumar3@arm.com> Signed-off-by: Varshit Pandya <varshit.pandya@arm.com> Change-Id: Ib16877acbfcb72c4bd8c08e97e44edc0a3e46089
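The described ID-register check can be sketched as follows (field position per the commit text; the function name and the plain-integer argument standing in for the system-register read are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* CE is bits [23:20] of ID_AA64PFR1_EL1: 0b0000 means the Morello
 * architecture is absent, 0b0001 means it is present. */
#define ID_AA64PFR1_CE_SHIFT 20
#define ID_AA64PFR1_CE_MASK  0xfull

static bool morello_present(uint64_t id_aa64pfr1)
{
	uint64_t ce = (id_aa64pfr1 >> ID_AA64PFR1_CE_SHIFT) &
		      ID_AA64PFR1_CE_MASK;

	return ce == 1ull;
}
```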
|
| 2edb8b6d | 12-Jan-2026 |
Govindraj Raja <govindraj.raja@arm.com> |
fix(cpufeat): enable access to extended BRPs/WRPs
Access to the extended breakpoints (BRPs) and watchpoints (WRPs) is enabled through the EBWE bit, which is available from Debugv8p9. So enable access to the mode select register by default from lower ELs.
Though this bit is RES0 when there are fewer than 16 BRPs/WRPs, the mode select register is then also RAZ/WI, so writing EBWE by default is harmless. It also avoids a trap to EL3 when enabling access to bank selection with more than 16 BRPs/WRPs.
Change-Id: Ib308be758c0beedde05a5558b0d24a161b79273a Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
|
| ea6625c6 | 12-Jan-2026 |
Govindraj Raja <govindraj.raja@arm.com> |
Merge changes from topic "bk/amu_private" into integration
* changes: fix(cpufeat): prevent FEAT_AMU counters 2 and 3 from counting across worlds fix(cpufeat): disable FEAT_AMU counters on context restore feat(per-cpu): migrate AArch32 amu_ctx to per-cpu framework
|
| 1df0bb50 | 12-Dec-2025 |
Jaiprakash Singh <jaiprakashs@marvell.com> |
fix(cpus): enable Neoverse-V2 external LLC support
Change-Id: I9582c7405db6862e77db240822e241d4082966f2 Signed-off-by: Jaiprakash Singh <jaiprakashs@marvell.com> |
| 8cd9c18b | 08-Dec-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
fix(cpufeat): prevent FEAT_AMU counters 2 and 3 from counting across worlds
FEAT_AMU has 4 architected counters. The lower 2, CPU_CYCLES and CNT_CYCLES, are not considered to be side channels due to their low resolution and general availability of the data elsewhere. As such, they are used for critical performance tuning and are expected to never be turned off or context switched when switching worlds.
The upper 2 counters, INST_RETIRED and STALL_BACKEND_MEM, are different. The data they provide is non-critical and expose new information that could be used as a timing side channel, especially of Secure world. This patch adds context switching of these two counters to prevent any such side channel.
This is not done for group 1 auxiliary counters as those are IMP DEF and are inaccessible by default unless overridden by the platform (with AMU_RESTRICT_COUNTERS).
Change-Id: Ib4b946abb810e36736cabb9b84cd837308b4e761 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 7724f91e | 19-Dec-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor(psci): make CMOs target the whole psci_cpu_data_t
psci_cpu_data_t is tiny - on AArch64 it's 12 bytes. Cache maintenance operations (CMOs) operate on cache lines which are much bigger - usually 64 bytes long. As such, issuing a cache clean for a member in the middle of psci_cpu_data_t won't necessarily have the expected effect. The member will be cleaned, sure, but so will the rest of the cache line along with it. If the struct happens to straddle cache lines this will lead to the next 52 bytes, most of which not belonging to psci_cpu_data_t, being cleaned as well and the start of psci_cpu_data_t not being cleaned at all.
This is not a problem because of the per-cpu (and cpu_data before it) section - it is cache size aligned and all data within a single section belongs to the same core so overdoing cache cleans won't have strange side effects.
Regardless, this patch clarifies CMOs around psci_cpu_data_t by always targeting the whole structure. To make sure it never straddles cache lines with the weird side effects that would cause, its alignment is set to the size of the structure so that it always lands on a single cache line.
Change-Id: I5d82ee6bb2ce0ed3c6a7e4abb7aa890f5e3bd0af Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
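A sketch of the alignment trick, with illustrative field names: a 12-byte struct padded and aligned to 16 bytes (the next power of two at or above its size) can never straddle a 64-byte cache line, so a CMO over the whole object touches exactly one line.

```c
#include <stdint.h>

/* Illustrative stand-in for psci_cpu_data_t: three 32-bit fields
 * (12 bytes), padded and aligned to 16. Since 64 is a multiple of
 * 16, any 16-aligned 16-byte object fits in a single cache line. */
typedef struct psci_cpu_data_sketch {
	uint32_t aff_info_state;
	uint32_t target_pwrlvl;
	uint32_t local_state;
} __attribute__((aligned(16))) psci_cpu_data_sketch_t;
```

A clean would then cover `sizeof(psci_cpu_data_sketch_t)` bytes starting at the struct's base, rather than a single member in the middle.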
|
| 9718d0db | 19-Dec-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
perf(cpus): reduce the footprint of errata reporting
Since the advent of spin_trylock() it's possible to combine the spinlock with the errata_reported field. If the spinlock is only ever acquired with a non-blocking call, then a successful call means reporting should be done, and an unsuccessful one means reporting would have been done by whoever acquired the lock. This relies on the lock never being released, which this patch arranges. The effect is a smaller memory footprint and a shorter runtime.
Change-Id: I215a84bd2c91e33703349c41fc59f654f7764b2f Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
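The report-once scheme can be sketched with a C11 atomic flag standing in for the spinlock (illustrative names; TF-A's spinlock is implemented in assembly, not stdatomic):

```c
#include <stdbool.h>
#include <stdatomic.h>

/* A trylock that is never released doubles as the "already
 * reported" flag, so no separate errata_reported field is needed. */
static atomic_flag errata_report_lock = ATOMIC_FLAG_INIT;

static bool should_report_errata(void)
{
	/* The first caller to "acquire" wins and does the reporting;
	 * everyone else sees the lock held and skips. The lock is
	 * deliberately never cleared. */
	return !atomic_flag_test_and_set(&errata_report_lock);
}
```

Exactly one caller ever observes `true`; every subsequent call returns `false` without blocking.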
|
| 9d619dec | 19-Dec-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor(cpus): make errata reporting more generic
Only the backing store behind the cpu_ops_ptr argument differs, so hoist that up and make things easier to follow.
Change-Id: I83710d8475a4a55046cf2eb6d16cd27b8ef0f3c3 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 753c749c | 04-Dec-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
fix(cpufeat): disable FEAT_AMU counters on context restore
The FEAT_AMU counters observe UNPREDICTABLE behaviour if written to while counting so they must be disabled first. Further, the save happens on the PE's powerdown path and the restore happens on the wakeup path so any disable will likely get lost on wakeup.
So add a disable to the restore path. The restore path will usually see the AMU freshly reset and as such all counters disabled. There is a chance though that the AMU might not have reset with the PE (which is IMP DEF), or a pabandon might have happened, so also add a check to skip disabling the counters if they are already disabled.
Even though reading AMU counters while they are enabled is perfectly permissible, keep the disable so that the snapshot of saved values is coherent. Otherwise, over many saves and restores, the values of the later read counters could get out of sync with the ones read earlier.
Change-Id: Iefe6de44f09d8659a6118d5fea40abf82c44be16 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| ef545e81 | 04-Dec-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(per-cpu): migrate AArch32 amu_ctx to per-cpu framework
Brings it in line with AArch64.
Change-Id: I9333ea9cf07679735da169dae0fe90a8856d9801 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| e9730867 | 07-Jan-2026 |
Manish Pandey <manish.pandey2@arm.com> |
Merge changes I1a57de22,If97ea5fd into integration
* changes: feat(locks): make spin_trylock with exclusives spin until it knows the state of the lock fix(locks): restore spin_trylock's ability to acquire a lock
|
| 0eaf5de8 | 06-Jan-2026 |
Lauren Wehrmeister <lauren.wehrmeister@arm.com> |
Merge changes from topic "xl/n2-errata" into integration
* changes: fix(cpus): workaround for Neoverse-N2 erratum 2138953 fix(cpus): workaround for Neoverse-N2 erratum 4302970 fix(cpus): workaround for Neoverse-N2 erratum 3888123 refactor(cpus): reorder the errratum build flag for Neoverse-N2
|
| ea5a7ab1 | 06-Jan-2026 |
Lauren Wehrmeister <lauren.wehrmeister@arm.com> |
Merge changes from topic "xl/cortex-x3-errata" into integration
* changes: fix(cpus): workaround for Cortex-X3 erratum 4302966 fix(cpus): workaround for Cortex-X3 erratum 3888125 |