ee87353c | 28-Oct-2025 | Mark Dykes <mark.dykes@arm.com>
Merge "refactor(docs): deduplicate PSCI documentation" into integration

b5f120b5 | 13-Oct-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
refactor(docs): deduplicate PSCI documentation

The PSCI document is already covered by the porting guide and context management sections, so it is largely redundant. It also has not been updated for a while despite lots going on around PSCI, so it is clearly not read often. The only part that is not redundant is the description of adding a new secure dispatcher, which belongs in the porting guide.

Change-Id: Icdc53e19565f0785bc8a112e5eb49df1b365c66c
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

8e94c578 | 01-Oct-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes from topic "ahmed-azeem/introduce-rdaspen" into integration

* changes:
  feat(dsu): enable PMU registers access at EL1
  feat(rdaspen): add DSU to the device tree
  feat(rdaspen): add DSU support
  docs(rdaspen): introduce rdaspen docs
  feat(rdaspen): enable tbb on rd-aspen platform
  feat(gicv3): add GIC-720AE model id
  feat(rdaspen): add BL31 for RD-Aspen platform
  feat(rdaspen): introduce Arm RD-Aspen platform

1f866fc9 | 18-Sep-2025 | Amr Mohamed <amr.mohamed@arm.com>
feat(dsu): enable PMU registers access at EL1

- Disable trapping of write accesses to DSU cluster PMU registers at EL3 and EL2.
- Clear the SPME bit in CLUSTERPMMDCR_EL3 to prohibit PMU event counting in the secure state.

Change-Id: If3eb6e997330ae86f45760e0e862c003861f3d66
Signed-off-by: Amr Mohamed <amr.mohamed@arm.com>

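A minimal C sketch of the two tweaks this commit describes. The bit positions and the trap-control register name (CLUSTERACTLR here) are assumptions for illustration only; the authoritative encodings live in the DSU TRM and TF-A's DSU driver.

```c
#include <stdint.h>

/* Hypothetical bit definitions; real values must come from the DSU TRM. */
#define CLUSTERPMMDCR_SPME_BIT		(UINT64_C(1) << 0)
#define CLUSTERACTLR_PMU_WR_TRAP_BIT	(UINT64_C(1) << 1)

/* Renamed sysreg accessors in TF-A's usual read_<reg>()/write_<reg>() style. */
extern uint64_t read_clusterpmmdcr_el3(void);
extern void write_clusterpmmdcr_el3(uint64_t v);
extern uint64_t read_clusteractlr_el3(void);
extern void write_clusteractlr_el3(uint64_t v);

void dsu_pmu_enable_el1_access(void)
{
	/* Stop trapping writes to the cluster PMU registers from lower ELs. */
	write_clusteractlr_el3(read_clusteractlr_el3() &
			       ~CLUSTERACTLR_PMU_WR_TRAP_BIT);

	/* Clear SPME to prohibit PMU event counting in the secure state. */
	write_clusterpmmdcr_el3(read_clusterpmmdcr_el3() &
				~CLUSTERPMMDCR_SPME_BIT);
}
```
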
aabab09e | 01-Sep-2025 | Manish Pandey <manish.pandey2@arm.com>
Merge changes Id38d6f1b,I5fcfe8dd,I7f3b50e5 into integration

* changes:
  fix(cpus): inform the compiler that struct cpu_ops is aligned
  refactor(el3-runtime): move the initialisation of the cpu_ops_ptr to C
  fix(aarch32): make get_cpu_ops_ptr() PCS compliant

022fcb48 | 14-Aug-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
refactor(el3-runtime): move the initialisation of the cpu_ops_ptr to C

The difference between AArch32 and AArch64 is insignificant and the usage is identical. The only thing that required the use of assembly was that the get_cpu_ops_ptr() function was not PCS compliant and needed a wrapper. That has now been fixed, so move the initialisation to C, where it is more readable and easier for the compiler to optimise.

Change-Id: I5fcfe8ddb122dd35d58adc6d44a7484c5c595815
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

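A sketch of what the C-side initialisation could look like once get_cpu_ops_ptr() is PCS compliant. The per-CPU data layout and the get_my_cpu_data() accessor are stand-ins, not TF-A's actual definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Opaque CPU operations descriptor, defined by the CPU library. */
struct cpu_ops;

/* Minimal stand-in for TF-A's per-CPU data block (hypothetical layout). */
struct cpu_data {
	struct cpu_ops *cpu_ops_ptr;
};

/* Matches MIDR_EL1 against the built-in cpu_ops list; now PCS compliant,
 * so it can be called directly from C on both AArch32 and AArch64. */
extern struct cpu_ops *get_cpu_ops_ptr(void);
extern struct cpu_data *get_my_cpu_data(void);	/* hypothetical accessor */

void init_cpu_ops(void)
{
	struct cpu_data *data = get_my_cpu_data();

	if (data->cpu_ops_ptr == NULL) {
		data->cpu_ops_ptr = get_cpu_ops_ptr();
		/* A core with no matching cpu_ops entry cannot be handled. */
		assert(data->cpu_ops_ptr != NULL);
	}
}
```
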
f8901e38 | 23-Jun-2025 | Manish Pandey <manish.pandey2@arm.com>
Merge "feat(dsu): support power control and autonomous powerdown config" into integration

d52ff2b3 | 07-May-2025 | Arvind Ram Prakash <arvind.ramprakash@arm.com>
feat(dsu): support power control and autonomous powerdown config

This patch allows platforms to enable certain DSU settings to ensure memory retention and control over cache power requests. We also move the driver out of css into drivers/arm. Platforms can configure the CLUSTERPWRCTLR and CLUSTERPWRDN registers [1] to improve power efficiency. These registers enable finer-grained control of DSU power state transitions, including powerdown and retention.

IMP_CLUSTERPWRCTLR_EL1 provides:
- Functional retention: allows configuration of the duration of inactivity before the DSU uses CLUSTERPACTIVE to request functional retention.
- Cache power request: these bits are output on CLUSTERPACTIVE[19:16] to indicate to the power controller which cache portions must remain powered.

IMP_CLUSTERPWRDN_EL1 includes:
- Powerdown: triggers full cluster powerdown, including control logic.
- Memory retention: requests memory retention mode, keeping L3 RAM contents while powering off the rest of the DSU.

The DSU-120 TRM [2] provides the full field definitions, which are used as references in the `dsu_driver_data` structure.

References:
[1]: https://developer.arm.com/documentation/100453/latest/
[2]: https://developer.arm.com/documentation/102547/0201/?lang=en

Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I2eba808b8f2a27797782a333c65dd092b03208fe

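An illustrative sketch of the per-platform configuration data this commit mentions. The field names below are hypothetical; the authoritative `dsu_driver_data` definition is in TF-A's DSU driver, with field semantics from the DSU-120 TRM.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical layout mirroring the knobs described above. */
struct dsu_driver_data {
	uint8_t clusterpwrctlr_cachepwr; /* CLUSTERPACTIVE[19:16] cache power request */
	uint8_t clusterpwrctlr_funcret;  /* inactivity window before functional retention */
	bool clusterpwrdn_pwrdn;         /* request full cluster powerdown */
	bool clusterpwrdn_memret;        /* keep L3 RAM contents in retention */
};

/* A platform could export its settings like this: */
const struct dsu_driver_data plat_dsu_data = {
	.clusterpwrctlr_cachepwr = 0xF,  /* keep all cache portions powered */
	.clusterpwrctlr_funcret  = 0x1,
	.clusterpwrdn_pwrdn      = false,
	.clusterpwrdn_memret     = true, /* retain L3 RAM across powerdown */
};
```
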
e3108fad | 10-Apr-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes from topic "lto-by-default" into integration

* changes:
  fix(libc): make sure __init functions are garbage collected
  fix(platforms): remove platform_core_pos_helper()

53644fa8 | 07-Apr-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
fix(libc): make sure __init functions are garbage collected

RECLAIM_INIT_CODE is useful to remove code that is only necessary during boot. However, these functions are generally called once and are therefore prime candidates for inlining. When building with LTO, the compiler is pretty good at inlining every single one, making this option pointless.

So tell the compiler not to inline these functions. This ensures they are kept separate and can be garbage collected later. This is expected to cost a little speed due to the extra branching.

Change-Id: Ie83a9ec8db03cb42139742fc6d728d12ce8549d3
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

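A minimal sketch of the pattern: combine section placement with noinline so LTO keeps the function out-of-line and the boot-only section can be reclaimed. The macro name and section name are illustrative; TF-A's actual `__init` definition may differ.

```c
#include <stdio.h>

/* Place boot-only code in a dedicated section AND forbid inlining, so LTO
 * cannot fold the body into its callers and defeat the reclaim. */
#define __init __attribute__((__section__(".text.init"), __noinline__))

__init void parse_boot_params(void)
{
	/* One-shot boot work; discarded when .text.init is reclaimed. */
	printf("boot params parsed\n");
}
```
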
a8a5d39d | 24-Feb-2025 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge changes from topic "bk/errata_speed" into integration

* changes:
  refactor(cpus): declare runtime errata correctly
  perf(cpus): make reset errata do fewer branches
  perf(cpus): inline the init_cpu_data_ptr function
  perf(cpus): inline the reset function
  perf(cpus): inline the cpu_get_rev_var call
  perf(cpus): inline cpu_rev_var checks
  refactor(cpus): register DSU errata with the errata framework's wrappers
  refactor(cpus): convert checker functions to standard helpers
  refactor(cpus): convert the Cortex-A65 to use the errata framework
  fix(cpus): declare reset errata correctly

89dba82d | 22-Jan-2025 | Boyan Karatotev <boyan.karatotev@arm.com>
perf(cpus): make reset errata do fewer branches

Errata application is painful for performance. For a start, it's done when the core has just come out of reset, which means branch predictors and caches will be empty, so a branch to a workaround function must be fetched from memory and that round trip is very slow. It also runs with the I-cache off, which means that the loop to iterate over the workarounds must also be fetched from memory on each iteration.

We can remove both branches. First, we can simply apply every erratum directly instead of defining a workaround function and jumping to it. Currently there are no errata that need to be applied at both reset and runtime with the same workaround function. If the need arose in future, this should be achievable with a reset + runtime wrapper combo.

Then, we can construct a function that applies each erratum linearly instead of looping over the list. If this function is part of the reset function, then the only "far" branches at reset will be for the checker functions. Importantly, this mitigates the slowdown even when an erratum is disabled.

The result is a ~50% speedup on N1SDP and ~20% on AArch64 Juno on wakeup from PSCI calls that end in powerdown. This is roughly back to the baseline of v2.9, before the errata framework regressed on performance (or a little better). It is important to note that there are other slowdowns since then that remain unknown.

Change-Id: Ie4d5288a331b11fd648e5c4a0b652b74160b07b9
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

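The real framework is assembly, but the shape of the change can be rendered in C. This is a hypothetical illustration of the before/after, not TF-A's actual code; the erratum numbers and checker names are made up.

```c
#include <stdbool.h>

/* Hypothetical per-erratum checkers; in TF-A these are real helpers. */
extern bool check_erratum_1234(unsigned long rev_var);
extern bool check_erratum_5678(unsigned long rev_var);

struct erratum {
	bool (*check)(unsigned long rev_var);
	void (*apply)(void);
};

/* Old shape: data-driven loop. With caches empty and the I-cache off,
 * every iteration and every apply() is a slow far branch to memory. */
void apply_errata_loop(const struct erratum *list, unsigned int n,
		       unsigned long rev_var)
{
	for (unsigned int i = 0U; i < n; i++) {
		if (list[i].check(rev_var)) {
			list[i].apply();
		}
	}
}

/* New shape: a linear path inside the reset function; each workaround is
 * emitted inline, so only the checker calls remain as far branches. */
void reset_apply_errata(unsigned long rev_var)
{
	if (check_erratum_1234(rev_var)) {
		/* workaround body placed inline here */
	}
	if (check_erratum_5678(rev_var)) {
		/* workaround body placed inline here */
	}
}
```
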
0d020822 | 19-Nov-2024 | Boyan Karatotev <boyan.karatotev@arm.com>
perf(cpus): inline the reset function

Similar to the cpu_rev_var and cpu_get_rev_var functions, inline the call_reset_handler handler. This way we skip the costly branch at no extra cost, as this is the only place it is called.

While we're at it, drop the CPU_NO_RESET_FUNC option. The only CPUs that need it are virtual CPUs, which can spare the tiny bit of performance lost. The rest are real cores, which can save on the check for zero.

Now is a good time to put the assert for a missing CPU in the get_cpu_ops_ptr function, so that it's a bit better encapsulated.

Change-Id: Ia7c3dcd13b75e5d7c8bafad4698994ea65f42406
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

fcb80d7d | 11-Feb-2025 | Manish Pandey <manish.pandey2@arm.com>
Merge changes I765a7fa0,Ic33f0b6d,I8d1a88c7,I381f96be,I698fa849, ... into integration

* changes:
  fix(cpus): clear CPUPWRCTLR_EL1.CORE_PWRDN_EN_BIT on reset
  chore(docs): drop the "wfi" from `pwr_domain_pwr_down_wfi`
  chore(psci): drop skip_wfi variable
  feat(arm): convert arm platforms to expect a wakeup
  fix(cpus): avoid SME related loss of context on powerdown
  feat(psci): allow cores to wake up from powerdown
  refactor: panic after calling psci_power_down_wfi()
  refactor(cpus): undo errata mitigations
  feat(cpus): add sysreg_bit_toggle

2b5e00d4 | 19-Dec-2024 | Boyan Karatotev <boyan.karatotev@arm.com>
feat(psci): allow cores to wake up from powerdown

The simplistic view of a core's powerdown sequence is that power is atomically cut upon calling `wfi`. However, it turns out that the core has lots to do: it has to talk to the interconnect to exit coherency, clean caches, check for RAS errors, etc. These steps take a significant amount of time and are certainly not atomic, so there is a significant window of opportunity for external events to happen. Many of these steps are not destructive to context, so theoretically the core can just "give up" half way (or roll certain actions back) and carry on running. The point in this sequence after which rollback is not possible is called the point of no return.

One of these actions is the checking for RAS errors. It is possible for one to happen during this lengthy sequence, or at least remain undiscovered until that point. If the core were to continue powerdown when that happens, there would be no (easy) way to inform anyone about it. Rejecting the powerdown and letting software handle the error is the best way to implement this.

Arm cores since at least the A510 have included this exact feature. So far it hasn't been deemed necessary to account for it in firmware due to the low likelihood of this happening. However, events like GIC wakeup requests are much more probable. Older cores will power down and immediately power back up when this happens. Travis and Gelas include a feature similar to the RAS case above, called powerdown abandon. The idea is that this will improve the latency to service the interrupt by saving on work which the core and software need to do.

So far firmware has relied on the `wfi` being the point of no return, and if it doesn't explicitly detect a pending interrupt quite early on, it will embark on a sequence that it expects to end in shutdown. To accommodate `wfi` not being a point of no return, we must undo all of the system management we did, just like in the warm boot entrypoint.

To achieve that, the pwr_domain_pwr_down_wfi hook must not be terminal. Most recent platforms do some platform management and finish on the standard `wfi`, followed by a panic or an endless loop, as this is expected not to return. To make this generic, any platform that wishes to support wakeups must instead let common code call `psci_power_down_wfi()` right after. Besides wakeups, this lets common code handle powerdown errata better as well.

Then, the CPU_OFF case is simple - PSCI does not allow it to return. So the best that can be done is to attempt the `wfi` a few times (the choice of 32 is arbitrary) in the hope that the wakeup is transient. If it isn't, the only choice is to panic, as the system is likely to be in a bad state, e.g. interrupts weren't routed away. The same applies to SYSTEM_OFF, SYSTEM_RESET, and SYSTEM_RESET2, where the panic won't matter as the system is going offline one way or another. The RAS case will be considered in a separate patch.

Now, the CPU_SUSPEND case is more involved. First, to power down the core must wipe its context, as it is not written on warm boot. But the context cannot be overwritten in case of a wakeup. To avoid the catch-22, save a copy that will only be used if powerdown fails. That is about 500 bytes on the stack, so it hopefully doesn't tip anyone over any limits. In future this can be avoided by having a core manage its own context.

Second, when the core wakes up, it must undo anything it did to prepare for poweroff, which for the cores we care about is writing CPUPWRCTLR_EL1.CORE_PWRDN_EN. The least intrusive way of doing this for the cpu library is to simply call the power off hook again and have the hook toggle the bit. If more complex sequences are needed in the future, their direction can be decided based on the value of this bit.

Third, do the actual "resume". Most of the logic is already there for the retention suspend, so that only needs a small touch-up to apply to the powerdown case as well. The missing bit is the powerdown-specific state management. Luckily, the warm boot entrypoint does exactly that already too, so steal that and we're done.

All of this is hidden behind a FEAT_PABANDON flag, since it has a large memory and runtime cost that we don't want to burden non-pabandon cores with.

Finally, do some function renaming to better reflect their purpose and make names a little more consistent.

Change-Id: I2405b59300c2e24ce02e266f91b7c51474c1145f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>

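A sketch of the bounded-retry behaviour described for CPU_OFF, assuming a hypothetical shape for `psci_power_down_wfi()` (the real one lives in TF-A's PSCI library); the retry count of 32 comes straight from the commit message.

```c
#include <stdint.h>

extern void panic(void) __attribute__((__noreturn__));

/* The choice of 32 retries is arbitrary, per the commit message. */
#define PWRDOWN_WFI_RETRIES	32U

void psci_power_down_wfi(void)
{
	for (unsigned int i = 0U; i < PWRDOWN_WFI_RETRIES; i++) {
		/* Drain prior memory/system management, then attempt the
		 * powerdown; a pending wakeup makes wfi fall straight
		 * through, so loop in the hope the wakeup is transient. */
		__asm__ volatile("dsb sy; wfi" ::: "memory");
	}

	/* Still running: the wakeups are not transient, and CPU_OFF must
	 * not return. The system is likely in a bad state (e.g. interrupts
	 * were not routed away), so panic. */
	panic();
}
```
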
eee0ec48 | 26-Mar-2024 | Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Merge changes from topic "mte_fixes" into integration

* changes:
  build(changelog): move mte to mte2
  refactor(mte): remove mte, mte_perm

c282384d | 07-Mar-2024 | Govindraj Raja <govindraj.raja@arm.com>
refactor(mte): remove mte, mte_perm

Currently neither FEAT_MTE nor FEAT_MTE_PERM is used to enable any feature bits in EL3, so remove the handling for both.

All MTE registers that are currently context saved/restored are needed only when FEAT_MTE2 is enabled, so switch to using FEAT_MTE2 and remove FEAT_MTE usage.

BREAKING CHANGE: Any platform or downstream code trying to use SCR_EL3.ATA bit(26) will see failures, as this is now moved to be used only with FEAT_MTE2 with commit@ef0d0e5478a3f19cbe70a378b9b184036db38fe2

Change-Id: Id01e154156571f7792135639e17dc5c8d0e17cf8
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>

2bc0aaad | 08-Mar-2024 | Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Merge "docs: add documentation for `entry_point_info`" into integration

2839a3c4 | 30-Jan-2024 | Harrison Mutai <harrison.mutai@arm.com>
docs: add documentation for `entry_point_info`

Change-Id: I20b5f2cf70bfff09126f3c0645f40d3e410a4c70
Signed-off-by: Harrison Mutai <harrison.mutai@arm.com>

eb889865 | 12-Feb-2024 | Olivier Deprez <olivier.deprez@arm.com>
Merge "feat(mte): add mte2 feat" into integration

8e397889 | 26-Jan-2024 | Govindraj Raja <govindraj.raja@arm.com>
feat(mte): add mte2 feat

Add support for FEAT_MTE2. tfsr_el2 is available only with MTE2; however, its context save/restore is currently gated on MTE rather than MTE2, so introduce 'is_feat_mte2_supported' to check for MTE2.

Change-Id: I108d9989a8f5b4d1d2f3b9865a914056fa566cf2
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>

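A sketch of the guarded context save this commit describes. The context struct and save function below are stand-ins for TF-A's context management types; `is_feat_mte2_supported` is the helper the commit introduces, and `read_tfsr_el2` follows TF-A's accessor naming convention.

```c
#include <stdbool.h>
#include <stdint.h>

extern bool is_feat_mte2_supported(void);
extern uint64_t read_tfsr_el2(void);

/* Stand-in for the EL2 sysreg context block. */
struct el2_sysregs_context {
	uint64_t tfsr_el2;
};

void el2_sysregs_context_save_mte2(struct el2_sysregs_context *ctx)
{
	/* TFSR_EL2 only exists from FEAT_MTE2 onwards, so gate the
	 * save on MTE2 rather than on base FEAT_MTE. */
	if (is_feat_mte2_supported()) {
		ctx->tfsr_el2 = read_tfsr_el2();
	}
}
```
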
9198ad5b | 07-Feb-2024 | Sandrine Bailleux <sandrine.bailleux@arm.com>
Merge "docs: fix link to TBBR specification" into integration

4290d343 | 02-Feb-2024 | Sandrine Bailleux <sandrine.bailleux@arm.com>
docs: fix link to TBBR specification

The former link pointed to a page which displayed the following warning message:

  "We could not find that page in the latest version, so we have taken you to the first page instead."

Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Change-Id: Icf9277770e38bc5e602b75052c2386301984238d

61dfdfd4 | 24-Jan-2024 | Manish V Badarkhe <manish.badarkhe@arm.com>
Merge "refactor(mte): deprecate CTX_INCLUDE_MTE_REGS" into integration

0a33adc0 | 21-Dec-2023 | Govindraj Raja <govindraj.raja@arm.com>
refactor(mte): deprecate CTX_INCLUDE_MTE_REGS

Currently CTX_INCLUDE_MTE_REGS serves a dual purpose: it enables the allocation tags registers and context saves/restores them, and it is also used to check whether the MTE feature is available.

To make this more meaningful, remove CTX_INCLUDE_MTE_REGS and introduce FEAT_MTE. This enables the allocation tags registers when FEAT_MTE is enabled and supported by the platform.

Arch features can also be conditionally enabled or disabled based on the architecture version via `make_helpers/arch_features.mk`.

Change-Id: Ibdd2d43874634ad7ddff93c7edad6044ae1631ed
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>

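A sketch of a runtime probe consistent with this refactor: deriving MTE availability from the ID register rather than from a context-size build flag. The accessor name follows TF-A's read_<reg>() convention and is a stand-in; the field position (ID_AA64PFR1_EL1.MTE, bits [11:8]) is per the Arm architecture reference.

```c
#include <stdbool.h>
#include <stdint.h>

#define ID_AA64PFR1_EL1_MTE_SHIFT	8U
#define ID_AA64PFR1_EL1_MTE_MASK	UINT64_C(0xF)

extern uint64_t read_id_aa64pfr1_el1(void);

bool is_feat_mte_present(void)
{
	uint64_t mte = (read_id_aa64pfr1_el1() >> ID_AA64PFR1_EL1_MTE_SHIFT) &
		       ID_AA64PFR1_EL1_MTE_MASK;

	/* 0 means MTE is not implemented; non-zero values indicate some
	 * level of MTE (1 = EL0-only, 2 = FEAT_MTE2, 3 = FEAT_MTE3). */
	return mte != 0ULL;
}
```
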