| 7455cd17 | 29-Jan-2025 |
Govindraj Raja <govindraj.raja@arm.com> |
fix(cpus): workaround for accessing ICH_VMCR_EL2
When ICH_VMCR_EL2.VBPR1 is written in Secure state (SCR_EL3.NS==0) and subsequently read in Non-secure state (SCR_EL3.NS==1), a wrong value might be returned. The same issue exists in the opposite direction.
Add a workaround in the EL3 software that performs context save/restore on a change of Security state: when accessing ICH_VMCR_EL2, use a value of SCR_EL3.NS that reflects the Security state that owns the data being saved or restored. For example, EL3 software should set SCR_EL3.NS to 1 when saving or restoring the value of ICH_VMCR_EL2 for the Non-secure (or Realm) state, and clear SCR_EL3.NS to 0 when saving or restoring the value of ICH_VMCR_EL2 for the Secure state.
SDEN documentation: https://developer.arm.com/documentation/SDEN-1775101/latest/
Change-Id: I9f0403601c6346276e925f02eab55908b009d957 Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
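A minimal C sketch of the described access pattern, assuming TF-A-style system register accessors (read_scr_el3/write_scr_el3, read_ich_vmcr_el2) and SCR_NS_BIT; the real context-management code differs:
```c
#include <stdint.h>
#include <arch.h>		/* SCR_NS_BIT */
#include <arch_helpers.h>	/* sysreg accessors; read_ich_vmcr_el2() assumed */

/* Sketch: read the Non-secure (or Realm) copy of ICH_VMCR_EL2 with
 * SCR_EL3.NS reflecting the owning Security state, then restore SCR_EL3. */
static uint64_t read_ns_ich_vmcr_el2(void)
{
	uint64_t scr = read_scr_el3();
	uint64_t vmcr;

	write_scr_el3(scr | SCR_NS_BIT);	/* data owner is Non-secure */
	isb();
	vmcr = read_ich_vmcr_el2();
	write_scr_el3(scr);			/* restore original SCR_EL3 */
	isb();

	return vmcr;
}
```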
|
| 58d98ba8 | 21-Jan-2025 |
Govindraj Raja <govindraj.raja@arm.com> |
chore(cpus): fix incorrect header macro
- errata.h uses the incorrect header guard macro ERRATA_REPORT_H; fix this.
- Group the errata function utilities.
Change-Id: I6a4a8ec6546adb41e24d8885cb445fa8be830148 Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
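For illustration, the usual header-guard pattern the fix presumably moves to (the actual guard name is an assumption):
```c
/* errata.h - the guard name below is assumed for illustration; the point
 * is that the guard macro must match the header it protects. */
#ifndef ERRATA_H
#define ERRATA_H

/* ... errata reporting and errata function utilities, grouped together ... */

#endif /* ERRATA_H */
```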
|
| db5fe4f4 | 08-Oct-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
chore(docs): drop the "wfi" from `pwr_domain_pwr_down_wfi`
To allow for generic handling of a wakeup, this hook is no longer expected to call wfi itself. Update the name everywhere to reflect this expectation so that future platform implementers don't get misled.
Change-Id: Ic33f0b6da74592ad6778fd802c2f0b85223af614 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| da305ec7 | 26-Sep-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(arm): convert arm platforms to expect a wakeup
Newer cores in upcoming platforms may refuse to power down. The PSCI library is already prepared for this, so convert the platform code to allow it as well. This is simple - drop the `wfi` + panic and let common code deal with the fallout. The end result will be the same (sans the message), except that the platform will have fewer responsibilities. The only exception is for cores being signalled to power off gracefully ahead of a system reset. That path must also be terminal, so replace its ending with the same psci_pwrdown_cpu_end() to behave the same as the generic implementation. It will handle wakeups and panic, hoping that the system gets reset from under it. The `dmb` is upgraded to a `dsb`, so there is no functional change.
Change-Id: I381f96bec8532bda6ccdac65de57971aac42e7e8 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
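A hedged sketch of the "after" shape of such a platform hook: it performs only platform-specific preparation and returns, leaving the `wfi` (and any wakeup handling) to common PSCI code. The function name and body are illustrative, not the actual Arm platform code:
```c
#include <lib/psci/psci.h>

/* Illustrative only: real Arm platform hooks also program the power
 * controller. The point is that there is no trailing wfi/panic here;
 * common PSCI code issues the wfi and handles a possible wakeup. */
void plat_foo_pwr_domain_pwr_down(const psci_power_state_t *target_state)
{
	(void)target_state;

	/* platform-specific power-down preparation goes here */
}
```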
|
| 45c7328c | 20-Sep-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
fix(cpus): avoid SME related loss of context on powerdown
Travis' and Gelas' TRMs tell us to disable SME (set PSTATE.{ZA, SM} to 0) when we're attempting to power down. What they don't tell us is that if this isn't done, the powerdown request will be rejected. On the CPU_OFF path that's not a problem - we can force SVCR to 0 and be certain the core will power off.
On the suspend-to-powerdown path, however, we cannot do this. The TRM also tells us that the sequence could be aborted by, e.g., GIC interrupts. If this were to happen after we have overwritten SVCR with 0, the caller would experience a loss of context on return. We know that at least Linux may call into PSCI with SVCR != 0. One option is to save the entire SME context, which would be quite expensive just to work around this. Another option is to downgrade the request to a normal suspend when SME was left on. The latter is better, as this is expected to happen rarely enough that the wasted power can be ignored, and we don't want to burden the generic (correct) path with needless context management.
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com> Change-Id: I698fa8490ebf51461f6aa8bba84f9827c5c46ad4
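A hedged sketch of the downgrade decision described above; the SVCR accessor name is an assumption (TF-A's arch helpers would provide something equivalent), while the SM/ZA bit positions are architectural:
```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed accessor: provided by TF-A's arch helpers under some name. */
uint64_t read_svcr(void);

/* Architecturally, SVCR.SM is bit 0 and SVCR.ZA is bit 1. */
#define SVCR_SM_BIT	(1ULL << 0)
#define SVCR_ZA_BIT	(1ULL << 1)

/* Suspend-to-powerdown path sketch: if SME state is live, downgrade to a
 * normal suspend instead of forcing SVCR to 0, so an abandoned powerdown
 * cannot cost the caller its streaming/ZA context. */
bool should_downgrade_to_normal_suspend(void)
{
	return (read_svcr() & (SVCR_SM_BIT | SVCR_ZA_BIT)) != 0ULL;
}
```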
|
| 2b5e00d4 | 19-Dec-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(psci): allow cores to wake up from powerdown
The simplistic view of a core's powerdown sequence is that power is atomically cut upon calling `wfi`. However, it turns out that it has lots to do - it has to talk to the interconnect to exit coherency, clean caches, check for RAS errors, etc. These take significant amounts of time and are certainly not atomic. As such there is a significant window of opportunity for external events to happen. Many of these steps are not destructive to context, so theoretically, the core can just "give up" half way (or roll certain actions back) and carry on running. The point in this sequence after which roll back is not possible is called the point of no return.
One of these actions is the checking for RAS errors. It is possible for one to happen during this lengthy sequence, or at least remain undiscovered until that point. If the core were to continue powerdown when that happens, there would be no (easy) way to inform anyone about it. Rejecting the powerdown and letting software handle the error is the best way to implement this.
Arm cores since at least the a510 have included this exact feature. So far it hasn't been deemed necessary to account for it in firmware due to the low likelihood of this happening. However, events like GIC wakeup requests are much more probable. Older cores will powerdown and immediately power back up when this happens. Travis and Gelas include a feature similar to the RAS case above, called powerdown abandon. The idea is that this will improve the latency to service the interrupt by saving on work which the core and software need to do.
So far firmware has relied on the `wfi` being the point of no return and if it doesn't explicitly detect a pending interrupt quite early on, it will embark onto a sequence that it expects to end with shutdown. To accommodate for it not being a point of no return, we must undo all of the system management we did, just like in the warm boot entrypoint.
To achieve that, the pwr_domain_pwr_down_wfi hook must not be terminal. Most recent platforms do some platform management and finish on the standard `wfi`, followed by a panic or an endless loop as this is expected to not return. To make this generic, any platform that wishes to support wakeups must instead let common code call `psci_power_down_wfi()` right after. Besides wakeups, this lets common code handle powerdown errata better as well.
Then, the CPU_OFF case is simple - PSCI does not allow it to return. So the best that can be done is to attempt the `wfi` a few times (the choice of 32 is arbitrary) in the hope that the wakeup is transient. If it isn't, the only choice is to panic, as the system is likely to be in a bad state, eg. interrupts weren't routed away. The same applies for SYSTEM_OFF, SYSTEM_RESET, and SYSTEM_RESET2. There the panic won't matter as the system is going offline one way or another. The RAS case will be considered in a separate patch.
Now, the CPU_SUSPEND case is more involved. First, to power down, the core must wipe its context, as it is not written on warm boot. But the context cannot be overwritten in case of a wakeup. To avoid the catch-22, save a copy that will only be used if the powerdown fails. That is about 500 bytes on the stack, so it hopefully doesn't tip anyone over any limits. In the future that can be avoided by having a core manage its own context.
Second, when the core wakes up, it must undo anything it did to prepare for poweroff, which, for the cores we care about, is writing CPUPWRCTLR_EL1.CORE_PWRDN_EN. The least intrusive way of doing this, from the CPU library's point of view, is to simply call the power-off hook again and have the hook toggle the bit. If more complex sequences are needed in the future, their direction can be decided based on the value of this bit.
Third, do the actual "resume". Most of the logic is already there for the retention suspend, so that only needs a small touch up to apply to the powerdown case as well. The missing bit is the powerdown specific state management. Luckily, the warmboot entrypoint does exactly that already too, so steal that and we're done.
All of this is hidden behind a FEAT_PABANDON flag, since it has a large memory and runtime cost that we don't want to burden non-pabandon cores with.
Finally, do some function renaming to better reflect their purpose and make names a little bit more consistent.
Change-Id: I2405b59300c2e24ce02e266f91b7c51474c1145f Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
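A minimal sketch of the CPU_OFF behaviour described above (retry the `wfi` a bounded number of times in case the wakeup is transient, then panic); this is illustrative, not the exact TF-A implementation:
```c
#include <arch_helpers.h>	/* wfi() */
#include <common/debug.h>	/* ERROR(), panic() */

/* Illustrative only: CPU_OFF must not return, so keep retrying the wfi
 * and give up with a panic if the core keeps waking up. */
void off_retry_then_panic(void)
{
	for (unsigned int i = 0U; i < 32U; i++) {
		wfi();
	}

	/* Still running: the wakeup was not transient (e.g. interrupts
	 * were not routed away), so the system is likely in a bad state. */
	ERROR("CPU failed to power down\n");
	panic();
}
```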
|
| 2bd3b397 | 21-Oct-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor: panic after calling psci_power_down_wfi()
This function doesn't return, and those of its callers that don't return either rely on this. Drop the dead attribute and add a panic() after it to make this expectation explicit. Calling `wfi` in the powerdown sequence is terminal, so even if the function were made to return, there would be no functional change.
This is useful for a following patch that makes psci_power_down_wfi() return.
Change-Id: I62ca1ee058b1eaeb046966c795081e01bf45a2eb Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| cc94e71b | 26-Sep-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor(cpus): undo errata mitigations
The workarounds introduced in the three patches starting at 888eafa00b99aa06b4ff688407336811a7ff439a assumed that any powerdown request would be (forced to be) terminal. This assumption no longer holds for new CPUs, so these older cores need to be revisited. Since we may wake up, we now need to respect the recommendation that the workaround be reverted on wakeup. So do exactly that.
Introduce a new helper to toggle bits in assembly. This allows us to call the workaround twice, with the first call setting the workaround and the second undoing it. This is also used for Gelas' and Travis' powerdown routines, so that the same function can be called again.
Also fix the condition in the CPU helper macro, as it was subtly wrong.
Change-Id: Iff9e5251dc9d8670d085d88c070f78991955e7c3 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| bb801857 | 21-Jan-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(cpus): add sysreg_bit_toggle
Introduce a new helper to toggle bits in assembly. This allows us to call the workaround twice, with the first call setting the workaround and the second undoing it. This allows the (errata) workaround functions to be used both to apply and to undo the mitigation.
This is applied to functions where the undo part will be required in follow-up patches.
Change-Id: I058bad58f5949b2d5fe058101410e33b6be1b8ba Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
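The helper itself is an assembly macro; in C terms the toggle idea is just an XOR, so the same routine called twice first applies and then reverts a bit-level workaround (illustrative only):
```c
#include <stdint.h>

/* Illustrative C equivalent of the toggle: the first call sets the bits
 * in mask, and a second call on the new value clears them again, letting
 * one routine both apply and undo a mitigation. */
static inline uint64_t sysreg_bits_toggled(uint64_t reg_val, uint64_t mask)
{
	return reg_val ^ mask;
}
```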
|
| 8ae6b1ad | 28-Jan-2025 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
fix(security): apply SMCCC_ARCH_WORKAROUND_4 to affected cpus
This patch implements SMCCC_ARCH_WORKAROUND_4 and allows discovery through SMCCC_ARCH_FEATURES. This mechanism is enabled if CVE_2024_7881 [1] is enabled by the platform. If the CVE_2024_7881 mitigation is implemented, the discovery call returns 0; if not, it returns -1 (SMC_ARCH_CALL_NOT_SUPPORTED).
For more information about SMCCC_ARCH_WORKAROUND_4 [2], please refer to the SMCCC Specification reference provided below.
[1]: https://developer.arm.com/Arm%20Security%20Center/Arm%20CPU%20Vulnerability%20CVE-2024-7881 [2]: https://developer.arm.com/documentation/den0028/latest
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com> Change-Id: I1b1ffaa1f806f07472fd79d5525f81764d99bc79
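A hedged sketch of the discovery logic described above. CVE_2024_7881 and SMC_ARCH_CALL_NOT_SUPPORTED come from the commit message; check_wa_cve_2024_7881() and the ERRATA_NOT_APPLIES value are assumed names modelled on how earlier workarounds are reported:
```c
#include <stdint.h>

int check_wa_cve_2024_7881(void);	/* assumed per-CPU check hook */
#define ERRATA_NOT_APPLIES	0	/* placeholder for the TF-A value */

/* Sketch of the SMCCC_ARCH_FEATURES result for SMCCC_ARCH_WORKAROUND_4. */
int32_t wa4_arch_features(void)
{
#if CVE_2024_7881
	if (check_wa_cve_2024_7881() != ERRATA_NOT_APPLIES) {
		return 0;	/* mitigation implemented and discoverable */
	}
#endif
	return -1;		/* SMC_ARCH_CALL_NOT_SUPPORTED */
}
```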
|
| 4caef42a | 16-Sep-2024 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
fix(security): add support in cpu_ops for CVE-2024-7881
This patch adds a new cpu_ops function, extra4, and a new macro for CVE-2024-7881 [1]. The new macro, declare_cpu_ops_wa_4, adds support for the new CVE check function.
[1]: https://developer.arm.com/Arm%20Security%20Center/Arm%20CPU%20Vulnerability%20CVE-2024-7881
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com> Change-Id: I417389f040c6ead7f96f9b720d29061833f43d37
|
| 037a15f5 | 06-Sep-2024 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
fix(security): add CVE-2024-7881 mitigation to Neoverse-V3
This patch mitigates CVE-2024-7881 [1] by setting CPUACTLR6_EL1[41] to 1 for Neoverse-V3 CPU.
[1]: https://developer.arm.com/Arm%20Security%20Center/Arm%20CPU%20Vulnerability%20CVE-2024-7881
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com> Change-Id: Ib5c644895b8c76d3c7e8b5e6e98d7b9afef7f1ec
|
| 56bb1d17 | 06-Sep-2024 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
fix(security): add CVE-2024-7881 mitigation to Neoverse-V2
This patch mitigates CVE-2024-7881 [1] by setting CPUACTLR6_EL1[41] to 1 for Neoverse-V2 CPU.
[1]: https://developer.arm.com/Arm%20Security%20Center/Arm%20CPU%20Vulnerability%20CVE-2024-7881
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com> Change-Id: I129814eb3494b287fd76a3f7dbc50f76553b2565
|
| 520c2207 | 06-Sep-2024 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
fix(security): add CVE-2024-7881 mitigation to Cortex-X925
This patch mitigates CVE-2024-7881 [1] by setting CPUACTLR6_EL1[41] to 1 for Cortex-X925 CPU.
[1]: https://developer.arm.com/Arm%20Security%20Center/Arm%20CPU%20Vulnerability%20CVE-2024-7881
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com> Change-Id: I53e72e4dbc8937cea3c344a5ba04664c50a0792a
|
| 6ce6acac | 06-Sep-2024 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
fix(security): add CVE-2024-7881 mitigation to Cortex-X4
This patch mitigates CVE-2024-7881 [1] by setting CPUACTLR6_EL1[41] to 1 for Cortex-X4 CPU.
[1]: https://developer.arm.com/Arm%20Security%20Center/Arm%20CPU%20Vulnerability%20CVE-2024-7881
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com> Change-Id: I0bec96d4f71a08a89c6612e272ecfb173f80da20
|
| 4c23d627 | 28-Jan-2025 |
Olivier Deprez <olivier.deprez@arm.com> |
Merge "fix(spmd): fix build failure due to redefinition" into integration |
| 7b41acaf | 05-Jul-2024 |
Jagdish Gediya <jagdish.gediya@arm.com> |
fix(tc): enable Last-level cache (LLC) for tc4
The EXTLLC bit in CPUECTLR_EL1 (for non-Gelas CPUs) and in CPUECTLR2_EL1 (for the Gelas CPU) enables the external last-level cache in the system.
The external LLC is present on TC4 systems in the MCN, but it is not enabled in the CPU registers, so enable it.
On TC4, Gelas and non-Gelas CPUs use different bits to enable EXTLLC, so take care of that as well.
Change-Id: Ic6a74b4af110a3c34d19131676e51901ea2bf6e3 Signed-off-by: Jagdish Gediya <jagdish.gediya@arm.com> Signed-off-by: Icen.Zeyada <Icen.Zeyada2@arm.com>
|
| c89438bc | 16-Sep-2024 |
Jerry Wang <Jerry.Wang4@arm.com> |
feat(gic): add support for local chip addressing
This patch adds support for Local Chip Addressing (LCA). In a multi-chip system, enabling LCA allows each GIC Distributor to maintain its own version of the routing table. This feature is activated when the GICD_CFGID.LCA bit is set to 1.
The existing `gic600_multichip_data` data structure did not account for the LCA feature. To support LCA:
- `rt_owner_base` is replaced by `base_addrs[]`. This is required because each GICD in the system needs to be configured independently, and their base addresses must be passed to the driver.
- `chip_addrs` is changed from a 1D to a 2D array to store the routing table for each chip's GICD. The entries in `chip_addrs` are configuration dependent, as the GIC specification does not enforce this.
On a multi-chip platform with chip count N where LCA is enabled by default, the `gic600_multichip_data` structure should contain all copies of the routing table (N*N entries). On platforms where LCA is not supported, only the first sub-array with N entries is required. The function signature of `gic600_multichip_init` remains unchanged, but if the LCA feature is enabled, the driver will expect the routing table configuration in the described format.
Change-Id: I8830c2cf90db6a0cae78e99914cd32c637284a2b Signed-off-by: Jerry Wang <Jerry.Wang4@arm.com>
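A hedged sketch of what the reworked layout implies. The `base_addrs`/`chip_addrs` field names come from the message; the count macro and the remaining fields are assumptions:
```c
#include <stdint.h>

#define GIC600_MAX_MULTICHIP	16U	/* assumed limit for this sketch */

/* Sketch: one GICD base per chip, plus a per-chip copy of the routing
 * table (N*N entries when LCA is enabled; only the first row of N
 * entries is needed when LCA is not supported). */
struct gic600_multichip_data_sketch {
	uintptr_t base_addrs[GIC600_MAX_MULTICHIP];
	uint64_t chip_addrs[GIC600_MAX_MULTICHIP][GIC600_MAX_MULTICHIP];
	unsigned int rt_owner;
	unsigned int chip_count;
	/* SPI ranges etc. omitted */
};
```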
|
| bd691136 | 10-Jan-2025 |
Ghennadi Procopciuc <ghennadi.procopciuc@nxp.com> |
feat(nxp-clk): add a basic get_rate implementation
Replace the dummy implementation of clk_ops.get_rate with a basic version that only handles the oscillator objects. Subsequent commits will add more objects to this list.
Change-Id: I8c1bbbfa6b116fdcf5a1f1353bdb52b474bac831 Signed-off-by: Ghennadi Procopciuc <ghennadi.procopciuc@nxp.com>
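A hedged sketch of a get_rate that only understands oscillator objects; all type and field names here are hypothetical, not the NXP driver's actual ones:
```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical object model for illustration only. */
enum clk_obj_type { CLK_OBJ_OSC, CLK_OBJ_PLL, CLK_OBJ_MUX };

struct clk_obj {
	enum clk_obj_type type;
	unsigned long osc_freq_hz;	/* valid for CLK_OBJ_OSC only */
};

static unsigned long sketch_get_rate(const struct clk_obj *obj)
{
	/* Only oscillators are handled for now; later commits would add
	 * PLLs, dividers, muxes, and so on. */
	if ((obj == NULL) || (obj->type != CLK_OBJ_OSC)) {
		return 0UL;
	}
	return obj->osc_freq_hz;
}
```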
|
| f532cd30 | 15-Jan-2025 |
Govindraj Raja <govindraj.raja@arm.com> |
Merge changes I137f69be,Ia2e7168f,I0e569d12,I614272ec,Ib68293f2 into integration
* changes:
  perf(psci): pass my_core_pos around instead of calling it repeatedly
  refactor(psci): move timestamp collection to psci_pwrdown_cpu
  refactor(psci): factor common code out of the standby finisher
  refactor(psci): don't use PSCI_INVALID_PWR_LVL to signal OFF state
  docs(psci): drop outdated cache maintenance comment
|
| 6b8df7b9 | 09-Jan-2025 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
feat(mops): enable FEAT_MOPS in EL3 when INIT_UNUSED_NS_EL2=1
FEAT_MOPS, mandatory from Armv8.8, is typically managed in EL2. However, in configurations where NS_EL2 is not enabled, EL3 must set the HCRX_EL2.MSCEn bit to 1 to enable the feature.
This patch ensures FEAT_MOPS is enabled by setting HCRX_EL2.MSCEn to 1.
Change-Id: Ic4960e0cc14a44279156b79ded50de475b3b21c5 Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
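A hedged sketch of the EL3-side enablement, assuming the HCRX_EL2 accessor, feature-detection helper, and bit macro exist under TF-A-style names:
```c
#include <arch.h>		/* HCRX_EL2_MSCEn_BIT (assumed name) */
#include <arch_features.h>	/* is_feat_mops_supported() (assumed name) */
#include <arch_helpers.h>	/* read/write_hcrx_el2() */

#if INIT_UNUSED_NS_EL2
/* When NS-EL2 is unused, EL3 owns HCRX_EL2 and can enable FEAT_MOPS. */
void enable_mops_for_unused_ns_el2(void)
{
	if (is_feat_mops_supported()) {
		write_hcrx_el2(read_hcrx_el2() | HCRX_EL2_MSCEn_BIT);
	}
}
#endif
```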
|
| 61b5ef21 | 27-Nov-2024 |
Ghennadi Procopciuc <ghennadi.procopciuc@nxp.com> |
feat(s32g274a): split early clock initialization
Initializing all early clocks before the MMU is enabled can impact boot time. Therefore, splitting the setup into A53 clocks and peripheral clocks can be beneficial, with the peripheral clocks configured after fully initializing the MMU.
Change-Id: I19644227b66effab8e2c43e64e057ea0c8625ebc Signed-off-by: Ghennadi Procopciuc <ghennadi.procopciuc@nxp.com>
|
| 514c7380 | 26-Nov-2024 |
Ghennadi Procopciuc <ghennadi.procopciuc@nxp.com> |
feat(nxp-clk): dynamic map of the clock modules
Add all clock modules as MMU entries using dynamic regions.
Change-Id: I56f724ced4bd024554c7b38afd14ea420de80cc6 Signed-off-by: Ghennadi Procopciuc <ghennadi.procopciuc@nxp.com>
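Dynamic regions in TF-A are added with `mmap_add_dynamic_region()` (requires the dynamic xlat tables option); a small sketch with placeholder base/size values, since the real module addresses are platform-specific:
```c
#include <lib/xlat_tables/xlat_tables_v2.h>

/* Placeholder values for illustration only. */
#define CLK_MODULE_BASE		0x40038000UL
#define CLK_MODULE_SIZE		0x1000UL

int map_clk_module(void)
{
	/* Device memory, read/write, secure; can later be unmapped with
	 * mmap_remove_dynamic_region() when no longer needed. */
	return mmap_add_dynamic_region(CLK_MODULE_BASE, CLK_MODULE_BASE,
				       CLK_MODULE_SIZE,
				       MT_DEVICE | MT_RW | MT_SECURE);
}
```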
|
| 3b802105 | 06-Nov-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
perf(psci): pass my_core_pos around instead of calling it repeatedly
On some platforms, plat_my_core_pos is a nontrivial function that takes a bit of time and that the compiler really doesn't like to inline. In the PSCI library, at least, we have no need to keep calling it repeatedly; we can instead pass the result around as an argument. This saves a lot of redundant calls, speeding the library up a bit.
Change-Id: I137f69bea80d7cac90d7a20ffe98e1ba8d77246f Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
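The pattern is simply computing the index once at the entry point and threading it through; the callee here is illustrative:
```c
#include <plat/common/platform.h>	/* plat_my_core_pos() */

/* Illustrative callee: takes the core index as a parameter instead of
 * calling plat_my_core_pos() again itself. */
static void do_suspend(unsigned int power_state, unsigned int cpu_idx)
{
	(void)power_state;
	(void)cpu_idx;
}

void psci_entry_sketch(unsigned int power_state)
{
	unsigned int cpu_idx = plat_my_core_pos();	/* computed once */

	do_suspend(power_state, cpu_idx);
}
```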
|
| 0c836554 | 30-Sep-2024 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor(psci): don't use PSCI_INVALID_PWR_LVL to signal OFF state
The target_pwrlvl field in the psci cpu data struct only stores the highest power domain that a CPU_SUSPEND call affected, and is used to resume those same domains on warm reset. If the cpu is otherwise OFF (never turned on or CPU_OFF), then this needs to be the highest power level because we don't know the highest level that will be off.
So skip the invalidation and always keep the field to the maximum value. During suspend the field will be lowered to the appropriate value and then put back after wakeup.
Also do this in the suspend-to-standby path, as the field will have been written before the sleep and might end up incorrect.
Change-Id: I614272ec387e1d83023c94700780a0f538a9a6b6 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
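A hedged sketch of the lifecycle described above; the helper name is assumed to follow TF-A's internal PSCI helpers and PLAT_MAX_PWR_LVL is platform-defined:
```c
/* Sketch only: psci_set_suspend_pwrlvl() is a TF-A-internal helper
 * (declaration assumed here); PLAT_MAX_PWR_LVL is platform-defined. */
void psci_set_suspend_pwrlvl(unsigned int target_lvl);

#ifndef PLAT_MAX_PWR_LVL
#define PLAT_MAX_PWR_LVL	2U	/* placeholder for illustration */
#endif

void suspend_pwrlvl_lifecycle(unsigned int end_pwrlvl)
{
	/* Lower the field to the level this CPU_SUSPEND call affects. */
	psci_set_suspend_pwrlvl(end_pwrlvl);

	/* ... the suspend happens here ... */

	/* Put it back to the maximum after wakeup, so a cpu that is OFF
	 * is always treated as affecting the highest power level. */
	psci_set_suspend_pwrlvl(PLAT_MAX_PWR_LVL);
}
```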
|