History log of /rk3399_ARM-atf/lib/psci/psci_suspend.c (Results 1 – 25 of 74)
Revision Date Author Comments
# 7138e659 26-Aug-2025 Govindraj Raja <govindraj.raja@arm.com>

Merge "fix(psci): add missing curly braces" into integration


# bac32cc4 24-Apr-2025 Saivardhan Thatikonda <saivardhan.thatikonda@amd.com>

fix(psci): add missing curly braces

This corrects the MISRA violation C2012-15.6:
The body of an iteration-statement or a selection-statement shall
be a compound-statement.
Enclosed the statement body within curly braces.

Change-Id: Ida2460b7fe6f27b23382a1259a5ac93fe36bd48d
Signed-off-by: Saivardhan Thatikonda <saivardhan.thatikonda@amd.com>
Signed-off-by: Suraj Kakade <suraj.hanumantkakade@amd.com>
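
As an illustration only (hypothetical code, not the actual hunk from this change), a Rule 15.6 fix of this shape simply turns a bare statement body into a compound statement:

    /* Hypothetical example of a MISRA C:2012 Rule 15.6 fix; not the real diff. */
    static unsigned int count_if_below(unsigned int idx, unsigned int limit,
                                       unsigned int cnt)
    {
        /* Before the fix the body was a bare statement: if (idx < limit) cnt++; */
        /* After the fix the body is enclosed in braces: */
        if (idx < limit) {
            cnt++;
        }

        return cnt;
    }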


# 35b2bbf4 28-Jul-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "bk/pabandon_cleanup" into integration

* changes:
feat(cpus): add pabandon support to the Alto cpu
feat(psci): optimise clock init on a pabandon
feat(psci): check that

Merge changes from topic "bk/pabandon_cleanup" into integration

* changes:
feat(cpus): add pabandon support to the Alto cpu
feat(psci): optimise clock init on a pabandon
feat(psci): check that CPUs handled a pabandon
feat(psci): make pabandon support generic
refactor(psci): unify coherency exit between AArch64 and AArch32
refactor(psci): absorb psci_power_down_wfi() into common code
refactor(platforms): remove usage of psci_power_down_wfi
fix(cm): disable SPE/TRBE correctly


# fd914fc8 30-Jun-2025 Boyan Karatotev <boyan.karatotev@arm.com>

feat(psci): optimise clock init on a pabandon

When a powerdown abandon happens, all state will be preserved. As such,
there is no need to re-initialise the timer counter when unwinding.

Change-Id: I64185792a118fd04ca036abbb9be8f670d0d8328
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
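
A minimal sketch of the idea, with hypothetical names (abandoned, timer_counter_init() and saved_cntfrq are illustrative, not the actual TF-A code):

    /* Hypothetical sketch: skip re-programming the counter when unwinding an
     * abandoned powerdown, since all state was preserved. */
    #include <stdbool.h>
    #include <stdint.h>

    void timer_counter_init(uint64_t cntfrq);   /* illustrative platform helper */

    void suspend_unwind(bool abandoned, uint64_t saved_cntfrq)
    {
        if (!abandoned) {
            /* Normal warm-boot path: counter state was lost, re-program it. */
            timer_counter_init(saved_cntfrq);
        }
        /* On an abandon nothing was lost, so there is nothing to re-initialise. */
    }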


# 04c39e46 24-Mar-2025 Boyan Karatotev <boyan.karatotev@arm.com>

feat(psci): make pabandon support generic

Support for aborted powerdowns does not require much dedicated code.
Rather, it is largely a matter of orchestrating things to happen in the
right order.

The only exception to this are older secure world dispatchers, which
assume that a CPU_SUSPEND call will be terminal and therefore can
clobber context. This was patched over in common code and hidden behind
a flag. This patch moves this to the dispatchers themselves.

Dispatchers that don't register svc_suspend{_finish} are unaffected.
Those that do must save the NS context before clobbering it and
restore it only in the case of a pabandon. Due to this operation being
non-trivial, this patch makes the assumption that these dispatchers will
only be present on hardware that does not support pabandon and therefore
does not add any contexting for them. In case this assumption ever
changes, asserts are added that should alert us of this change.

Change-Id: I94a907515b782b4d2136c0d274246cfe1d567c0e
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
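
A rough sketch of the ordering described above, using purely illustrative names (cpu_ctx, ns_ctx_read/ns_ctx_write, spd_enter_secure_suspend and powerdown_was_abandoned are hypothetical, not dispatcher code from the tree):

    /* Hypothetical sketch: a dispatcher that registers svc_suspend saves the NS
     * context before clobbering it and restores it only on a pabandon. */
    #include <stdbool.h>

    struct cpu_ctx { unsigned char regs[512]; };    /* stand-in for the real context */

    void ns_ctx_read(struct cpu_ctx *dst);          /* illustrative helpers */
    void ns_ctx_write(const struct cpu_ctx *src);
    void spd_enter_secure_suspend(void);            /* may clobber the NS context */
    bool powerdown_was_abandoned(void);

    void spd_svc_suspend(void)
    {
        struct cpu_ctx saved_ns_ctx;

        ns_ctx_read(&saved_ns_ctx);        /* keep a copy before it is clobbered */
        spd_enter_secure_suspend();

        if (powerdown_was_abandoned()) {
            ns_ctx_write(&saved_ns_ctx);   /* put the NS context back and resume */
        }
    }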


# 139a5d05 18-Apr-2025 Madhukar Pappireddy <madhukar.pappireddy@arm.com>

Merge changes I86959e67,I0b0d1d36,I5b5267f4,I056c8710,I3474aa97 into integration

* changes:
chore: fix preprocessor checks
refactor: convert arm platforms to use the generic GIC driver
refactor(gic): promote most of the GIC driver to common code
refactor: make arm_gicv2.c and arm_gicv3.c common
refactor(fvp): use more arm generic code for gicv3



# 5d893410 07-Jan-2025 Boyan Karatotev <boyan.karatotev@arm.com>

refactor(gic): promote most of the GIC driver to common code

More often than not, Arm based systems include some revision of a GIC.
There are two ways of adding support for them in platform code - calling
the top-level helpers from plat/arm/common/arm_gicvX.c or by using the
driver directly. Both of these methods allow for a high degree of
customisation - most functions are defined to be weak and there are no
calls to any of them in generic code.

As it turns out, requirements around those GICs are largely the same.
Platforms that use arm_gicvX.c use the helpers identically among each
other. Platforms that use the driver directly tend to end up with calls
that look a lot like the arm_gicvX.c helpers, and the weakness of the
functions is never exercised.

All of this results in a lot of code duplication to do what is
essentially the same thing. Even though it's not a lot of code, when
multiplied among many platforms it becomes significant and makes
refactoring it quite difficult. It's also bug prone since the steps are
a little convoluted and things are likely to work even with subtle
errors (see 50009f61177421118f42d6a000611ba0e613d54b).

So promote as much of the GIC handling as possible into common code. Do the
setup in bl31_main() and have every PSCI method do the state management
directly instead of delegating it to the platform hooks. We can base
this implementation on arm_gicvX.c since they already offer logical
names and have worked quite well so far with minimal changes.

The main benefit of doing this is reduced code duplication. If we assume
that, outside of some platform setup, GIC management is identical, then
a platform can add support by telling the build system, regardless of
GIC revision. The other benefit is performance - BL31 and PSCI already
know the core_pos and they can pass it as an argument instead of having
to call plat_my_core_pos(). Now, the only platform specific GIC actions
necessary are the saving and restoring of context on entering and
exiting a power domain. The PSCI library does not keep track of this, so
it is unable to perform it itself. The routines themselves are also
provided.

For compatibility all of this is hidden behind a build flag. Platforms
are encouraged to adopt this driver, but it would not be practical to
convert and validate every GIC based platform.

This patch renames the functions in question to follow the
gic_<function>() convention. This allows the names to be version
agnostic.

Finally, drop the weak definitions - they are unused, likely to remain
so, and can be added back if the need arises.

Change-Id: I5b5267f4b72f633fb1096400ec8e4b208694135f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
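
An illustrative before/after of the version-agnostic call pattern the message describes (the exact function names in the common driver are assumptions here, and the surrounding PSCI code is simplified):

    /* Illustrative only: a version-agnostic gic_<function>() call that takes the
     * core index PSCI already knows, instead of a GIC-revision-specific helper
     * that re-derives it. */
    void gic_cpuif_disable(unsigned int core_pos);   /* assumed common-driver API */

    void psci_prepare_cpu_off(unsigned int core_pos)
    {
        /* before: gicv3_cpuif_disable(plat_my_core_pos()); */
        gic_cpuif_disable(core_pos);
    }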


# ee656609 16-Apr-2025 André Przywara <andre.przywara@arm.com>

Merge changes Id942c20c,Idd286bea,I8917a26e,Iec8c3477,If3c25dcd, ... into integration

* changes:
feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED
perf(cpufeat): centralise PAuth key saving
refactor(cpufeat): convert FEAT_PAuth setup to C
refactor(cpufeat): prepare FEAT_PAuth for FEATURE_DETECTION
chore(cpufeat): remove PAuth presence checks
feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED



# 51997e3d 02-Apr-2025 Boyan Karatotev <boyan.karatotev@arm.com>

perf(cpufeat): centralise PAuth key saving

prepare_el3_entry() is meant to be the one-stop shop for all the context
we must fiddle with to enter EL3 proper. However, PAuth is the one
exception, happening right after. Absorb it into prepare_el3_entry(),
handling the BL1/BL31 difference.

This is a good time to also move the key saving into the enable
function, to centralise it. With this it becomes apparent that saving
keys just before CPU_SUSPEND is redundant as they will be reinitialised
when the core wakes up.

Note that the key loading, now in save_gp_pmcr_pauth_regs, does not end
in an isb. The effects of the key change are not needed until the isb
in the caller, so this isb is not needed.

Change-Id: Idd286bea91140c106ab4c933c5c44b0bc2050ca2
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>


# 31ddca40 14-Apr-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge "feat(psci): remove cpu context init by index" into integration


# ef738d19 21-Jun-2024 Manish Pandey <manish.pandey2@arm.com>

feat(psci): remove cpu context init by index

Currently, the calling core (meaning the core which received the call to
CPU_ON or the powerdown path of CPU_SUSPEND on the same core) is in
charge of initialising the context for the waking core (the warmboot
entrypoint for both). This is convenient because the calling core can
write the context while in coherency and the waking core will only need
the context after it has entered coherency. This avoids any cache
maintenance and makes communication simple.

However, this has 3 main problems:
a) asymmetric feature support is problematic - the calling core has no
way of knowing the feature set of the waking core. If the two
diverge, the architectural feature discovery via ID registers breaks
down. We've thus far "fixed" this on a case by case basis which
doesn't scale and introduces redundancy.

b) powerdown abandon (pabandon) introduces a contradiction - the calling
core has to initialise the context for when the core wakes up, but
should the core not powerdown it needs its old context intact. The only
way to work around this is by keeping two copies of context which
incurs a runtime and memory overhead.

c) cm_prepare_el3_exit[_ns]() doesn't have access to the entrypoint but needs
it to make initialisation decisions. We can infer some of this from
registers that have already been written but this is awkwardly
limiting for what we can do. This also necessitates the split from
the context initialisation.

We can solve all three by making a core be in full ownership of its
own context. The calling core then only writes entrypoint information
and nothing else. The waking core then initialises its own context as it
sees fit with full knowledge of the whole picture.

The only tricky bit is cache coherency - the waking core has to be able
to coherently observe its new entrypoint. Calling cores will write to
the shared region with coherent caches on. If we make sure to read the
context only after the waking core has entered coherency, then we can
avoid cache operations and let hardware handle everything.

We can skip the spsr check for FEAT_TCR2 as it doesn't make a
difference. We can also skip enabling it twice from generic code.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I86e7fe8b698191fc3b469e5ced1fd010f8754b0e
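
A highly simplified sketch of the split of responsibilities described above; every name here (entry_point, shared_ep, publish_entrypoint, init_own_context, CORE_COUNT) is illustrative rather than the actual TF-A interface:

    /* Illustrative split: the calling core publishes only the entrypoint; the
     * waking core builds its own context after it has entered coherency. */
    #include <stdint.h>

    #define CORE_COUNT 8U                    /* hypothetical platform core count */

    struct entry_point { uintptr_t pc; uint64_t spsr; };

    struct entry_point shared_ep[CORE_COUNT];

    /* Runs on the calling core (CPU_ON / CPU_SUSPEND caller) with caches on. */
    void publish_entrypoint(unsigned int target_pos, const struct entry_point *ep)
    {
        shared_ep[target_pos] = *ep;
    }

    /* Runs on the waking core, only after it is back in coherency. */
    void init_own_context(unsigned int my_pos)
    {
        struct entry_point ep = shared_ep[my_pos];

        /* Full context setup happens here, with complete knowledge of this
         * core's own feature set (ID registers read locally). */
        (void)ep;
    }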


# 10639cc9 03-Apr-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "xlnx_fix_gen_uniq_var" into integration

* changes:
fix(psci): avoid altering function parameters
fix(services): avoid altering function parameters
fix(common): ignore

Merge changes from topic "xlnx_fix_gen_uniq_var" into integration

* changes:
fix(psci): avoid altering function parameters
fix(services): avoid altering function parameters
fix(common): ignore the unused function return value
fix(psci): modify variable conflicting with external function
fix(delay-timer): create unique variable name



# 0839cfc9 19-Apr-2024 Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>

fix(psci): modify variable conflicting with external function

This corrects the MISRA violation C2012-5.8:
Identifiers that define objects or functions with
external linkage shall be unique.
Modify the variable name to prevent a conflict with
the external function declaration.

Change-Id: I2f109242b6dd3b3c5e9289881e3dd5466c74fcb5
Signed-off-by: Nithin G <nithing@amd.com>
Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
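
For illustration (hypothetical identifiers, not the actual hunk), a Rule 5.8 fix of this kind is just a rename so the object no longer shares its name with an externally linked function:

    /* Hypothetical MISRA C:2012 Rule 5.8 fix: an object used to share its name
     * with a function that has external linkage, so the object is renamed. */

    void timer_setup(void);          /* function with external linkage, declared elsewhere */

    /* before: int timer_setup;      -- object with external linkage clashes    */
    int timer_setup_done;            /* after: unique identifier, no conflict   */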


# 2e0354f5 25-Feb-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes I3d950e72,Id315a8fe,Ib62e6e9b,I1d0475b2 into integration

* changes:
perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
perf(psci): get PMF timestamps with no cache flushes if possible
perf(amu): greatly simplify AMU context management
perf(mpmm): greatly simplify MPMM enablement



# 83ec7e45 06-Nov-2024 Boyan Karatotev <boyan.karatotev@arm.com>

perf(amu): greatly simplify AMU context management

The current code is incredibly resilient to updates to the spec and
has worked quite well so far. However, recent implementations expose a
weakness in that this is rather slow. A large part of it is written in
assembly, making it opaque to the compiler for optimisations. The
future-proofing requires reading registers that are effectively
`volatile`, making it even harder for the compiler, as well as adding
lots of implicit barriers, making it hard for the microarchitecture to
optimise as well.

We can make a few assumptions, checked by a few well placed asserts, and
remove a lot of this burden. For a start, at the moment there are 4
group 0 counters with static assignments. Contexting them is a trivial
affair that doesn't need a loop. Similarly, there can only be up to 16
group 1 counters. Contexting them is a bit harder, but we can do it with
a single branch and a fall-through switch. If/when both of these
change, we have a pair of asserts and the feature detection mechanism to
guard us against pretending that we support something we don't.

We can drop contexting of the offset registers. They are fully
accessible by EL2 and as such are its responsibility to preserve on
powerdown.

Another small thing we can do is pass the core_pos into the hook.
The caller already knows which core we're running on, we don't need to
call this non-trivial function again.

Finally, knowing this, we don't really need the auxiliary AMUs to be
described by the device tree. Linux doesn't care at the moment, and any
information we need for EL3 can be neatly placed in a simple array.

All of this, combined with lifting the actual saving out of assembly,
reduces the instructions to save the context from 180 to 40, including a
lot fewer branches. The code is also much shorter and easier to read.

Also propagate to aarch32 so that the two don't diverge too much.

Change-Id: Ib62e6e9ba5be7fb9fb8965c8eee148d5598a5361
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
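
A simplified sketch of the "single branch with a fall-through switch" idea for the group 1 counters (read_group1_cntr() is an illustrative stand-in for reading the AMEVCNTR1<n>_EL0 registers; shown for 4 counters rather than 16 for brevity):

    /* Simplified sketch: save a variable number of group 1 counters with one
     * branch and a fall-through switch instead of a loop. */
    #include <stdint.h>

    uint64_t read_group1_cntr(unsigned int idx);   /* illustrative accessor */

    void save_group1_counters(uint64_t *ctx, unsigned int nr_counters)
    {
        switch (nr_counters) {
        case 4U: ctx[3] = read_group1_cntr(3U); /* fallthrough */
        case 3U: ctx[2] = read_group1_cntr(2U); /* fallthrough */
        case 2U: ctx[1] = read_group1_cntr(1U); /* fallthrough */
        case 1U: ctx[0] = read_group1_cntr(0U); /* fallthrough */
        case 0U:
        default:
            break;
        }
    }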


# fcb80d7d 11-Feb-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes I765a7fa0,Ic33f0b6d,I8d1a88c7,I381f96be,I698fa849, ... into integration

* changes:
fix(cpus): clear CPUPWRCTLR_EL1.CORE_PWRDN_EN_BIT on reset
chore(docs): drop the "wfi" from `pwr_domain_pwr_down_wfi`
chore(psci): drop skip_wfi variable
feat(arm): convert arm platforms to expect a wakeup
fix(cpus): avoid SME related loss of context on powerdown
feat(psci): allow cores to wake up from powerdown
refactor: panic after calling psci_power_down_wfi()
refactor(cpus): undo errata mitigations
feat(cpus): add sysreg_bit_toggle



# db5fe4f4 08-Oct-2024 Boyan Karatotev <boyan.karatotev@arm.com>

chore(docs): drop the "wfi" from `pwr_domain_pwr_down_wfi`

To allow for generic handling of a wakeup, this hook is no longer
expected to call wfi itself. Update the name everywhere to reflect this
expectation so that future platform implementers don't get misled.

Change-Id: Ic33f0b6da74592ad6778fd802c2f0b85223af614
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>


# dc0bf486 08-Oct-2024 Boyan Karatotev <boyan.karatotev@arm.com>

chore(psci): drop skip_wfi variable

There is now a convenient place at the end of the function to jump to when
we shouldn't power down. This eliminates the need for skip_wfi.

Change-Id: I8d1a88c77463e8ee6765b185adbe3d23d9fce101
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
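
An illustrative shape of the change (the function body and helper names are hypothetical): early-outs jump to a label at the end of the function instead of setting a skip_wfi flag.

    /* Illustrative only: a common exit point replaces the skip_wfi flag. */
    #include <stdbool.h>

    bool wakeup_pending(void);       /* illustrative helpers */
    void enter_powerdown_wfi(void);
    void release_pwr_domain_locks(void);

    void suspend_to_powerdown(void)
    {
        if (wakeup_pending()) {
            goto exit;               /* previously: skip_wfi = true; */
        }

        enter_powerdown_wfi();

    exit:
        release_pwr_domain_locks();
    }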


# 2b5e00d4 19-Dec-2024 Boyan Karatotev <boyan.karatotev@arm.com>

feat(psci): allow cores to wake up from powerdown

The simplistic view of a core's powerdown sequence is that power is
atomically cut upon calling `wfi`. However, it turns out that it has
lots to do - it has to talk to the interconnect to exit coherency, clean
caches, check for RAS errors, etc. These take significant amounts of
time and are certainly not atomic. As such there is a significant window
of opportunity for external events to happen. Many of these steps are
not destructive to context, so theoretically, the core can just "give
up" half way (or roll certain actions back) and carry on running. The
point in this sequence after which roll back is not possible is called
the point of no return.

One of these actions is the checking for RAS errors. It is possible for
one to happen during this lengthy sequence, or at least remain
undiscovered until that point. If the core were to continue powerdown
when that happens, there would be no (easy) way to inform anyone about
it. Rejecting the powerdown and letting software handle the error is the
best way to implement this.

Arm cores since at least the a510 have included this exact feature. So
far it hasn't been deemed necessary to account for it in firmware due to
the low likelihood of this happening. However, events like GIC wakeup
requests are much more probable. Older cores will powerdown and
immediately power back up when this happens. Travis and Gelas include a
feature similar to the RAS case above, called powerdown abandon. The
idea is that this will improve the latency to service the interrupt by
saving on work which the core and software need to do.

So far firmware has relied on the `wfi` being the point of no return and
if it doesn't explicitly detect a pending interrupt quite early on, it
will embark onto a sequence that it expects to end with shutdown. To
accommodate for it not being a point of no return, we must undo all of
the system management we did, just like in the warm boot entrypoint.

To achieve that, the pwr_domain_pwr_down_wfi hook must not be terminal.
Most recent platforms do some platform management and finish on the
standard `wfi`, followed by a panic or an endless loop as this is
expected to not return. To make this generic, any platform that wishes
to support wakeups must instead let common code call
`psci_power_down_wfi()` right after. Besides wakeups, this lets common
code handle powerdown errata better as well.

Then, the CPU_OFF case is simple - PSCI does not allow it to return. So
the best that can be done is to attempt the `wfi` a few times (the
choice of 32 is arbitrary) in the hope that the wakeup is transient. If
it isn't, the only choice is to panic, as the system is likely to be in
a bad state, eg. interrupts weren't routed away. The same applies for
SYSTEM_OFF, SYSTEM_RESET, and SYSTEM_RESET2. There the panic won't
matter as the system is going offline one way or another. The RAS case
will be considered in a separate patch.

Now, the CPU_SUSPEND case is more involved. First, to powerdown it must
wipe its context as it is not written on warm boot. But it cannot be
overwritten in case of a wakeup. To avoid the catch 22, save a copy that
will only be used if powerdown fails. That is about 500 bytes on the
stack so it hopefully doesn't tip anyone over any limits. In future that
can be avoided by having a core manage its own context.

Second, when the core wakes up, it must undo anything it did to prepare
for poweroff, which, for the cores we care about, is writing
CPUPWRCTLR_EL1.CORE_PWRDN_EN. The least intrusive for the cpu library
way of doing this is to simply call the power off hook again and have
the hook toggle the bit. If in the future there need to be more complex
sequences, their direction can be advised on the value of this bit.

Third, do the actual "resume". Most of the logic is already there for
the retention suspend, so that only needs a small touch up to apply to
the powerdown case as well. The missing bit is the powerdown specific
state management. Luckily, the warmboot entrypoint does exactly that
already too, so steal that and we're done.

All of this is hidden behind a FEAT_PABANDON flag since it has a large
memory and runtime cost that we don't want to burden non pabandon cores
with.

Finally, do some function renaming to better reflect their purpose and
make names a little bit more consistent.

Change-Id: I2405b59300c2e24ce02e266f91b7c51474c1145f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
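
A simplified sketch of the CPU_OFF behaviour described above, i.e. retrying the terminal wfi a bounded number of times before panicking (the helper names are illustrative, not the exact TF-A code):

    /* Simplified sketch: CPU_OFF must not return, so retry the terminal wfi in
     * case the wakeup is transient, and panic if it keeps waking up. */
    void wfi(void);                            /* illustrative wrapper around wfi */
    void panic(void) __attribute__((noreturn));

    void cpu_off_final(void)
    {
        for (unsigned int i = 0U; i < 32U; i++) {
            wfi();    /* a transient wakeup brings execution back here */
        }

        panic();      /* persistent wakeups mean the system is in a bad state */
    }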


# f532cd30 15-Jan-2025 Govindraj Raja <govindraj.raja@arm.com>

Merge changes I137f69be,Ia2e7168f,I0e569d12,I614272ec,Ib68293f2 into integration

* changes:
perf(psci): pass my_core_pos around instead of calling it repeatedly
refactor(psci): move timestamp collection to psci_pwrdown_cpu
refactor(psci): factor common code out of the standby finisher
refactor(psci): don't use PSCI_INVALID_PWR_LVL to signal OFF state
docs(psci): drop outdated cache maintenance comment



# 3b802105 06-Nov-2024 Boyan Karatotev <boyan.karatotev@arm.com>

perf(psci): pass my_core_pos around instead of calling it repeatedly

On some platforms plat_my_core_pos is a nontrivial function that takes a
bit of time and the compiler really doesn't like to inline. In the PSCI
library, at least, we have no need to keep repeatedly calling it and we
can instead pass it around as an argument. This saves on a lot of
redundant calls, speeding the library up a bit.

Change-Id: I137f69bea80d7cac90d7a20ffe98e1ba8d77246f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
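
The pattern is simply threading the already-computed core index through as a parameter; a small illustration (the callee name and signature are hypothetical, plat_my_core_pos() is the real platform hook):

    /* Illustrative only: compute the core index once and pass it down, instead
     * of calling plat_my_core_pos() again at every level. */
    unsigned int plat_my_core_pos(void);            /* real platform hook */
    void psci_suspend_step(unsigned int core_pos);  /* hypothetical callee */

    void psci_suspend_entry(void)
    {
        unsigned int core_pos = plat_my_core_pos(); /* computed once */

        psci_suspend_step(core_pos);                /* reused, not recomputed */
    }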


# 9b1e800e 10-Oct-2024 Boyan Karatotev <boyan.karatotev@arm.com>

refactor(psci): move timestamp collection to psci_pwrdown_cpu

psci_pwrdown_cpu has two callers, both of which save timestamps meant to
measure how much time the cache maintenance operations take. Move the
timestamp collection inside to save on a bit of code duplication.

Change-Id: Ia2e7168faf7773d99b696cbdb6c98db7b58e31cf
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>


# 44ee7714 30-Sep-2024 Boyan Karatotev <boyan.karatotev@arm.com>

refactor(psci): factor common code out of the standby finisher

psci_suspend_to_standby_finisher and psci_cpu_suspend_finish do mostly
the same stuff, besides the system management associated with their
respective wakeup paths. So bring the wake from standby path in line
with the wake from reset path - caller acquires locks and manages
context. This way both behave in vaguely the same way. We can also bring
their names in line so it's more apparent how they are different.

This is in preparation for cores waking from sleep, coming in another
patch. No functional change is expected.

Change-Id: I0e569d12f65d231606080faa0149d22efddc386d
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>


# 0c836554 30-Sep-2024 Boyan Karatotev <boyan.karatotev@arm.com>

refactor(psci): don't use PSCI_INVALID_PWR_LVL to signal OFF state

The target_pwrlvl field in the psci cpu data struct only stores the
highest power domain that a CPU_SUSPEND call affected, and is used to
resume those same domains on warm reset. If the cpu is otherwise OFF
(never turned on or CPU_OFF), then this needs to be the highest power
level because we don't know the highest level that will be off.

So skip the invalidation and always keep the field to the maximum value.
During suspend the field will be lowered to the appropriate value and
then put back after wakeup.

Also, do that in the suspend-to-standby path as well, since the field will
have been written before the sleep and might otherwise end up incorrect.

Change-Id: I614272ec387e1d83023c94700780a0f538a9a6b6
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>


# e2ce7d34 24-Jul-2023 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "bk/context_refactor" into integration

* changes:
refactor(psci): extract cm_prepare_el3_exit_ns() to a common location
refactor(cm): set MDCR_EL3/CPTR_EL3 bits in respe

Merge changes from topic "bk/context_refactor" into integration

* changes:
refactor(psci): extract cm_prepare_el3_exit_ns() to a common location
refactor(cm): set MDCR_EL3/CPTR_EL3 bits in respective feat_init_el3() only
fix(cm): set MDCR_EL3.{NSPBE, STE} explicitly
refactor(cm): factor out EL2 register setting when EL2 is unused
