| f396aec8 | 09-Sep-2025 |
Arvind Ram Prakash <arvind.ramprakash@arm.com> |
feat(cpufeat): add support for FEAT_IDTE3
This patch adds support for FEAT_IDTE3, which introduces trapping of Group 3 and Group 5 (only GMID_EL1) register accesses to EL3 (unless trapped to EL2). IDTE3 allows EL3 to modify the view of ID registers for lower ELs, and this capability is used to disable fields of ID registers tied to disabled features.
The ID registers are initially read as-is and stored in context. Then, based on the feature enablement status for each world, if a particular feature is disabled, its corresponding field in the cached ID register is set to RES0. When lower ELs attempt to read an ID register, the cached ID register value is returned. This allows EL3 to prevent lower ELs from accessing feature-specific system registers that are disabled in EL3, even though the hardware implements them.
The emulated ID register values are stored primarily in per-world context, except for certain debug-related ID registers such as ID_AA64DFR0_EL1 and ID_AA64DFR1_EL1, which are stored in the cpu_data and are unique to each PE. This is done to support feature asymmetry that is commonly seen in debug features.
FEAT_IDTE3 traps all Group 3 ID registers in the range op0 == 3, op1 == 0, CRn == 0, CRm == {2–7}, op2 == {0–7} and the Group 5 GMID_EL1 register. However, only a handful of ID registers contain fields used to detect features enabled in EL3. Hence, we only cache those ID registers, while the rest are transparently returned as is to the lower EL.
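To illustrate the emulation path, here is a minimal C sketch of a trap handler serving cached values; the array layout, the idreg_cache_index() lookup and the read_idreg_raw() accessor are illustrative assumptions, not names from the patch:

    #include <stdint.h>

    #define NUM_WORLDS         3U  /* e.g. Non-secure, Secure, Realm */
    #define NUM_CACHED_IDREGS  8U  /* only the handful with gated fields */

    /* Per-world cache holding the (possibly masked) ID register views. */
    static uint64_t cached_idregs[NUM_WORLDS][NUM_CACHED_IDREGS];

    extern int idreg_cache_index(uint32_t encoding);   /* -1 if not cached */
    extern uint64_t read_idreg_raw(uint32_t encoding); /* raw hardware value */

    uint64_t handle_idreg_trap(unsigned int world, uint32_t encoding)
    {
        int idx = idreg_cache_index(encoding);

        /* Cached registers return the per-world, feature-masked view;
         * all other trapped ID registers pass through transparently. */
        return (idx >= 0) ? cached_idregs[world][idx]
                          : read_idreg_raw(encoding);
    }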
This patch updates the CREATE_FEATURE_FUNCS macro to generate update_feat_xyz_idreg_field() functions that disable ID register fields on a per-feature basis. The enabled_worlds scope is used to disable ID register fields for security states where the feature is not enabled.
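A hedged sketch of the kind of helper such a macro could generate follows; the parameter list and the instantiation are illustrative assumptions rather than the macro's actual TF-A definition:

    #include <stdbool.h>
    #include <stdint.h>

    #define CREATE_FEATURE_FUNCS(feat, shift, mask)                        \
    static inline void update_##feat##_idreg_field(uint64_t *cached_reg,   \
                                                   bool enabled)           \
    {                                                                      \
        if (!enabled) {                                                    \
            /* force the feature's ID field to RES0 in the cache */        \
            *cached_reg &= ~((uint64_t)(mask) << (shift));                 \
        }                                                                  \
    }

    /* Hypothetical instantiation: a 4-bit field at bit 4 generates
     * update_feat_xyz_idreg_field(). */
    CREATE_FEATURE_FUNCS(feat_xyz, 4U, 0xfULL)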
This EXPERIMENTAL feature is controlled by the ENABLE_FEAT_IDTE3 build flag and is currently disabled by default.
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com> Change-Id: I5f998eeab81bb48c7595addc5595313a9ebb96d5
|
| 6d2d846f | 04-Jul-2025 |
Sammit Joshi <sammit.joshi@arm.com> |
feat(per-cpu): migrate psci_ns_context to per-cpu framework
Migrate the psci_ns_context object to the NUMA-aware per-cpu framework to optimize memory access and use memory more efficiently.
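As a rough illustration of the per-cpu pattern (the context layout and accessor below are hypothetical stand-ins, not the framework's real API; only plat_my_core_pos() is an existing TF-A function):

    #include <stdint.h>

    #define PLATFORM_CORE_COUNT 8U /* normally platform-defined */

    typedef struct psci_ns_context {
        uint64_t preserved_state; /* illustrative member */
    } psci_ns_context_t;

    extern unsigned int plat_my_core_pos(void);

    /* One instance per core; a NUMA-aware framework can place each
     * core's copy in memory local to that core's node instead of in a
     * single flat array like this one. */
    static psci_ns_context_t psci_ns_ctx[PLATFORM_CORE_COUNT];

    static inline psci_ns_context_t *this_cpu_ns_context(void)
    {
        return &psci_ns_ctx[plat_my_core_pos()];
    }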
Signed-off-by: Sammit Joshi <sammit.joshi@arm.com> Change-Id: Ie8b9f4eea8c61d4de9996d9370634cbd08ff1d8d
|
| a9eb44d4 | 18-Apr-2024 |
Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com> |
fix(psci): initialise variable to default zero
This corrects the MISRA violation C2012-9.1: The value of an object with automatic storage duration shall not be read before it has been set. The variable is now initialized to a default value of zero.
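The pattern, shown on a hypothetical variable (illustrative only, not the patched code):

    int get_state(int use_cached)
    {
        int rc = 0; /* previously "int rc;" with no initialiser */

        if (use_cached) {
            rc = 1;
        }

        /* Without the initialiser, this read is undefined when
         * use_cached == 0, which is what Rule 9.1 forbids. */
        return rc;
    }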
Change-Id: I225ae4487b05fc47728222765029d6e1fe292ac1 Signed-off-by: Nithin G <nithing@amd.com> Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
|
| fd914fc8 | 30-Jun-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(psci): optimise clock init on a pabandon
When a powerdown abandon happens, all state will be preserved. As such, there is no need to re-initialise the timer counter when unwinding.
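A minimal sketch of the resulting guard, assuming a hypothetical abandon query and platform timer hook (neither name is from the patch):

    extern int powerdown_was_abandoned(void); /* hypothetical query */
    extern void initialise_counter(void);     /* hypothetical timer hook */

    void psci_warmboot_timer_setup(void)
    {
        /* An abandoned powerdown preserves all state, including the
         * counter, so re-programming it is only needed on a genuine
         * power-on path. */
        if (!powerdown_was_abandoned()) {
            initialise_counter();
        }
    }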
Change-Id: I64185792a118fd04ca036abbb9be8f670d0d8328 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 461b62b5 | 25-Mar-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(psci): check that CPUs handled a pabandon
Up to now, PSCI assumed that if a pabandon happened, the CPU driver would have handled it. This patch adds a simple protocol to make sure that this is indeed the case. The chosen method is a return value that is highly unlikely on cores unaware of pabandon: x0 is primed with 1 and, if used, would be overwritten with the value of CPUPWRCTLR_EL1, which has its last bit set to power off and its top bits RES0; the ACK value is chosen to be the exact opposite. An alternative would have been to add a field in cpu_ops; however, that would have required more major refactoring across many CPUs and taken up more memory on older platforms, so it was not chosen.
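A hedged C model of the handshake, with x0 represented as an argument/return value and the constant and function names invented for illustration:

    #include <assert.h>
    #include <stdint.h>

    #define PABANDON_PRIME 1ULL              /* value the caller primes x0 with */
    #define PABANDON_ACK   (~PABANDON_PRIME) /* "the exact opposite" */

    /* A pabandon-aware CPU driver overwrites the primed value with an
     * explicit acknowledgement; an unaware one would leave the prime,
     * or a CPUPWRCTLR_EL1 image, in place. */
    static uint64_t cpu_pwr_down_handler(uint64_t x0)
    {
        (void)x0;
        return PABANDON_ACK;
    }

    void psci_verify_pabandon_handled(void)
    {
        uint64_t ret = cpu_pwr_down_handler(PABANDON_PRIME);

        /* If we are still running, the powerdown was abandoned; check
         * that the driver really handled it. */
        assert(ret == PABANDON_ACK);
    }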
Change-Id: I5826c0e4802e104d295c4ecbd80b5f676d2cd871 Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| 04c39e46 | 24-Mar-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
feat(psci): make pabandon support generic
Support for aborted powerdowns does not require much dedicated code. Rather, it is largely a matter of orchestrating things to happen in the right order.
The only exception to this are older secure world dispatchers, which assume that a CPU_SUSPEND call will be terminal and therefore can clobber context. This was patched over in common code and hidden behind a flag. This patch moves this to the dispatchers themselves.
Dispatchers that don't register svc_suspend{_finish} are unaffected. Those that do must save the NS context before clobbering it and restore it only in the case of a pabandon. Because this operation is non-trivial, this patch assumes that these dispatchers will only be present on hardware that does not support pabandon, and therefore does not add any context management for them. In case this assumption ever changes, asserts are added that should alert us to the change.
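The save/restore shape described above, sketched with illustrative type, accessor and abandon-hook names (only svc_suspend comes from the text):

    #include <stdint.h>

    typedef struct cpu_context {
        uint64_t gpregs[32]; /* illustrative layout */
    } cpu_context_t;

    extern cpu_context_t *get_ns_context(void); /* hypothetical accessor */

    static cpu_context_t saved_ns_ctx;

    void svc_suspend(void)
    {
        /* Save the NS context before dispatcher work clobbers it. */
        saved_ns_ctx = *get_ns_context();
        /* ... dispatcher-specific suspend handling ... */
    }

    void svc_suspend_abandon(void)
    {
        /* The NS world only resumes on a pabandon, so only then is the
         * saved context restored. */
        *get_ns_context() = saved_ns_ctx;
    }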
Change-Id: I94a907515b782b4d2136c0d274246cfe1d567c0e Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| aadb4b56 | 12-Mar-2025 |
Boyan Karatotev <boyan.karatotev@arm.com> |
refactor(psci): unify coherency exit between AArch64 and AArch32
The procedure is fairly simple: if we have hardware assisted coherency, call into the cpu driver and let it do its thing. If we don't, then we must turn data caches off, handle the confusion that causes with the stack, and call into the cpu driver which will flush the caches that need flushing.
On AArch32 the above happens in common code. On AArch64, however, the turning off of the caches happens in the cpu driver. Since we're dealing with the stack, we must exercise control over it and implement this in assembly. But as the two implementations differ slightly (in the ordering of operations), the part that is in assembly is quite large, since jumping back to C to handle the difference might itself involve the stack.
Presumably, the difference between the two AArch flows was introduced to cater for a possible implementation where turning off the caches requires an IMP DEF sequence. Well, Arm no longer makes cores without hardware-assisted coherency, so this eventuality is not possible.
So take this part out of the cpu driver and put it into common code, just like on AArch32. With this, there is no longer a need to call prepare_cpu_pwr_dwn() in a different order either - we can delay it a bit to happen after the stack management. The two AArch flows thus become identical. We can convert prepare_cpu_pwr_dwn() to C and leave psci_do_pwrdown_cache_maintenance() only to exercise control over the stack.
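Sketched in C, with only prepare_cpu_pwr_dwn() and psci_do_pwrdown_cache_maintenance() taken from the text; the coherency query and both signatures are assumptions:

    extern int cpu_has_hw_assisted_coherency(void);      /* hypothetical */
    extern void psci_do_pwrdown_cache_maintenance(void); /* asm, stack control */
    extern void prepare_cpu_pwr_dwn(unsigned int power_level);

    void psci_pwrdown_cpu(unsigned int power_level)
    {
        if (!cpu_has_hw_assisted_coherency()) {
            /* Turn data caches off; the stack juggling this requires
             * stays behind this assembly helper. */
            psci_do_pwrdown_cache_maintenance();
        }

        /* Runs after stack management in both flows, so the AArch32
         * and AArch64 orderings now match; the driver flushes whatever
         * still needs flushing. */
        prepare_cpu_pwr_dwn(power_level);
    }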
Change-Id: Ie4759ebe20bb74b60533c6a47dbc2b101875900f Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
|
| e8c3fddb | 24-Apr-2025 |
Prasad Kummari <prasad.kummari@amd.com> |
fix(psci): initialise variable to default zero
This corrects the MISRA violation C2012-9.1:
The value of an object with automatic storage duration shall not be read before it has been set. The variable is now initialized to a default value of zero.
Change-Id: Ib3a2a853b82ce3c3ba1f518705d7ac2e01130e37 Signed-off-by: Prasad Kummari <prasad.kummari@amd.com> Signed-off-by: Saivardhan Thatikonda <saivardhan.thatikonda@amd.com>
|