History log of /rk3399_ARM-atf/docs/components/context-management-library.rst (Results 1 – 16 of 16)
Revision Date Author Comments
# ef397720 10-Nov-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "ar/idte3" into integration

* changes:
feat(cpufeat): add support for FEAT_IDTE3
feat(cpufeat): include enabled security state scope
feat(cpufeat): add ID register defines and read helpers


# f396aec8 09-Sep-2025 Arvind Ram Prakash <arvind.ramprakash@arm.com>

feat(cpufeat): add support for FEAT_IDTE3

This patch adds support for FEAT_IDTE3, which allows accesses to
Group 3 and Group 5 (only GMID_EL1) registers to be trapped to EL3
(unless trapped to EL2). IDTE3 allows EL3 to modify the view of ID
registers for lower ELs, and this capability is used to disable
fields of ID registers tied to disabled features.

The ID registers are initially read as-is and stored in context.
Then, based on the feature enablement status for each world, if a
particular feature is disabled, its corresponding field in the
cached ID register is set to Res0. When lower ELs attempt to read
an ID register, the cached ID register value is returned. This
allows EL3 to prevent lower ELs from accessing feature-specific
system registers that are disabled in EL3, even though the hardware
implements them.
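
For illustration only (the helper below is a hypothetical sketch, not the
actual TF-A implementation), clearing a disabled feature's field in the
cached ID register copy amounts to masking it to zero so that lower ELs
read it as RES0:

    #include <stdint.h>

    /*
     * Hedged sketch: clear a feature's field in the cached (emulated)
     * ID register value so lower ELs see the feature as absent (RES0).
     */
    static inline void clear_idreg_field(uint64_t *cached_reg,
                                         unsigned int shift,
                                         unsigned int width)
    {
            uint64_t mask = ((1ULL << width) - 1ULL) << shift;

            *cached_reg &= ~mask;   /* field now reads as all zeros */
    }

On a trapped read, EL3 would then return this cached value instead of the
hardware register.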

The emulated ID register values are stored primarily in per-world
context, except for certain debug-related ID registers such as
ID_AA64DFR0_EL1 and ID_AA64DFR1_EL1, which are stored in the
cpu_data and are unique to each PE. This is done to support feature
asymmetry that is commonly seen in debug features.

FEAT_IDTE3 traps all Group 3 ID registers in the range
op0 == 3, op1 == 0, CRn == 0, CRm == {2–7}, op2 == {0–7} and the
Group 5 GMID_EL1 register. However, only a handful of ID registers
contain fields used to detect features enabled in EL3. Hence, we
only cache those ID registers, while the rest are transparently
returned as is to the lower EL.

This patch updates the CREATE_FEATURE_FUNCS macro to generate
update_feat_xyz_idreg_field() functions that disable ID register
fields on a per-feature basis. The enabled_worlds scope is used to
disable ID register fields for security states where the feature is
not enabled.
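
As an illustrative guess only (the real CREATE_FEATURE_FUNCS macro and its
arguments differ), the generated per-feature helpers might look roughly
like this:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical shape of a generated update_feat_xyz_idreg_field(). */
    #define CREATE_IDREG_UPDATE_FUNC(feat, idreg_idx, shift, width)        \
            static inline void update_##feat##_idreg_field(                \
                    uint64_t *idregs, bool enabled)                        \
            {                                                              \
                    if (!enabled) {                                        \
                            uint64_t mask =                                \
                                    ((1ULL << (width)) - 1ULL) << (shift); \
                            idregs[(idreg_idx)] &= ~mask;                  \
                    }                                                      \
            }

Each feature's enabled security state scope would then decide the enabled
argument for a given world's cached ID registers.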

This EXPERIMENTAL feature is controlled by the ENABLE_FEAT_IDTE3
build flag and is currently disabled by default.
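
For reference, the flag is the kind of option passed on the TF-A make
command line; for example (the platform name below is only an example):

    make PLAT=fvp ENABLE_FEAT_IDTE3=1 all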

Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I5f998eeab81bb48c7595addc5595313a9ebb96d5


# 7303319b 08-Nov-2025 Chris Kay <chris.kay@arm.com>

Merge changes from topic "NUMA_AWARE_PER_CPU" into integration

* changes:
docs(maintainers): add per-cpu framework into maintainers.rst
feat(per-cpu): add documentation for per-cpu framework
feat(rdv3): enable numa aware per-cpu for RD-V3-Cfg2
feat(per-cpu): migrate amu_ctx to per-cpu framework
feat(per-cpu): migrate spm_core_context to per-cpu framework
feat(per-cpu): migrate psci_ns_context to per-cpu framework
feat(per-cpu): migrate psci_cpu_pd_nodes to per-cpu framework
feat(per-cpu): migrate rmm_context to per-cpu framework
feat(per-cpu): integrate per-cpu framework into BL31/BL32
feat(per-cpu): introduce framework accessors/definers
feat(per-cpu): introduce linker changes for NUMA aware per-cpu framework
docs(changelog): add scope for per-cpu framework


# f5dca2a9 29-Jan-2025 Rohit Mathew <rohit.mathew@arm.com>

feat(per-cpu): migrate spm_core_context to per-cpu framework

Migrate spm_core_context objects to the NUMA-aware per-cpu framework to
optimize memory access and to utilize memory efficiently.

Signed-off-by: Sammit Joshi <sammit.joshi@arm.com>
Signed-off-by: Rohit Mathew <rohit.mathew@arm.com>
Change-Id: Ie600ae755cfb738adde51cfc4af3cddbbccbbaef


# 6d2d846f 04-Jul-2025 Sammit Joshi <sammit.joshi@arm.com>

feat(per-cpu): migrate psci_ns_context to per-cpu framework

Migrate the psci_ns_context object to the NUMA-aware per-cpu
framework to optimize memory access and to utilize memory
efficiently.

Signed-off-by: Sammit Joshi <sammit.joshi@arm.com>
Change-Id: Ie8b9f4eea8c61d4de9996d9370634cbd08ff1d8d


# f708e9dd 29-Jan-2025 Rohit Mathew <rohit.mathew@arm.com>

feat(per-cpu): migrate rmm_context to per-cpu framework

Migrate rmm_context objects to the NUMA-aware per-cpu framework to
optimize memory access and to utilize memory efficiently.

Signed-off-by: Sammit Joshi <sammit.joshi@arm.com>
Signed-off-by: Rohit Mathew <rohit.mathew@arm.com>
Change-Id: I72d49c3d860dac10bd3930ce400b0199bedd887b


# 31ddca40 14-Apr-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge "feat(psci): remove cpu context init by index" into integration


# ef738d19 21-Jun-2024 Manish Pandey <manish.pandey2@arm.com>

feat(psci): remove cpu context init by index

Currently, the calling core (meaning the core which received the call to
CPU_ON or the powerdown path of CPU_SUSPEND on the same core) is in
charge of initialising the context for the waking core (the warmboot
entrypoint for both). This is convenient because the calling core can
write the context while in coherency and the waking core will only need
the context after it has entered coherency. This avoids any cache
maintenance and makes communication simple.

However, this has 3 main problems:
a) asymmetric feature support is problematic - the calling core has no
way of knowing the feature set of the waking core. If the two
diverge, the architectural feature discovery via ID registers breaks
down. We've thus far "fixed" this on a case-by-case basis, which
doesn't scale and introduces redundancy.

b) powerdown abandon (pabandon) introduces a contradiction - the calling
core has to initialise the context for when the core wakes up, but
should the core not power down, it needs its old context intact. The only
way to work around this is by keeping two copies of context which
incurs a runtime and memory overhead.

c) cm_prepare_el3_exit[_ns]() doesn't have access to the entrypoint but needs
it to make initialisation decisions. We can infer some of this from
registers that have already been written but this is awkwardly
limiting for what we can do. This also necessitates the split from
the context initialisation.

We can solve all three by making a core fully own its own context.
The calling core then writes only entrypoint information and nothing
else. The waking core then initialises its own context as it sees
fit, with full knowledge of the whole picture.

The only tricky bit is cache coherency - the waking core has to be able
to coherently observe its new entrypoint. Calling cores will write to
the shared region with coherent caches on. If we make sure to read the
context only after the waking core has entered coherency, then we can
avoid cache operations and let hardware handle everything.
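
As a hedged pseudo-code sketch of this handoff (all names below are
invented for illustration, not the real PSCI/context-management API):

    #include <stdint.h>

    /* Invented, simplified types and names for illustration only. */
    typedef struct {
            uintptr_t pc;
            uint64_t spsr;
    } ep_info_t;

    #define MAX_CPUS 8U
    static ep_info_t shared_ep_info[MAX_CPUS]; /* shared, coherent region */

    /* Calling core (CPU_ON/CPU_SUSPEND caller): publish only the entrypoint. */
    static void publish_entrypoint(unsigned int target_cpu, const ep_info_t *ep)
    {
            shared_ep_info[target_cpu] = *ep;  /* written with caches on */
    }

    /* Waking core, in its warmboot path, after entering coherency. */
    static void init_own_context(unsigned int my_cpu)
    {
            ep_info_t ep = shared_ep_info[my_cpu]; /* coherent read, no cache ops */

            /* Initialise this core's context from ep using this core's own
             * feature/ID register view, so asymmetry is handled locally. */
            (void)ep;
    }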

We can skip the spsr check for FEAT_TCR2 as it doesn't make a
difference. We can also skip enabling it twice from generic code.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I86e7fe8b698191fc3b469e5ced1fd010f8754b0e


# 2e0354f5 25-Feb-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes I3d950e72,Id315a8fe,Ib62e6e9b,I1d0475b2 into integration

* changes:
perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
perf(psci): get PMF timestamps with no cache flushes if possible
perf(amu): greatly simplify AMU context management
perf(mpmm): greatly simplify MPMM enablement


# 0a580b51 15-Nov-2024 Boyan Karatotev <boyan.karatotev@arm.com>

perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context

SVE and SME aren't enabled symmetrically for all worlds, but EL3 needs
to context switch them nonetheless. Previously, this had to happen by
writing the enable bits just before reading/writing the relevant
context. But since the introduction of root context, this need not be
the case. We can have these enables always be present for EL3 and save
on some work (and ISBs!) on every context switch.

We can also hoist ZCR_EL3 to a never-changing register, as we set its
value to be identical for every world, which happens to be the one we
want for EL3 too.

Change-Id: I3d950e72049a298008205ba32f230d5a5c02f8b0
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>


# 91213031 15-Nov-2024 Govindraj Raja <govindraj.raja@arm.com>

Merge "docs(context-mgmt): add Root-Context documentation" into integration


# 0f3cd515 10-Nov-2024 Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>

docs(context-mgmt): add Root-Context documentation

* This patch adds some details on the EL3/Root-Context
and its related interfaces.

* Additionally, it updates the existing details on the interfaces
related to the various CPU context entries, which have been
improved recently.

Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
Change-Id: I81a992fe09feca4dc3d579a48e54a4763425e052


# 553b70c3 19-Aug-2024 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "ar/asymmetricSupport" into integration

* changes:
feat(tc): enable trbe errata flags for Cortex-A520 and X4
feat(cm): asymmetric feature support for trbe
refactor(errata-abi): move EXTRACT_PARTNUM to arch.h
feat(cpus): workaround for Cortex-A520(2938996) and Cortex-X4(2726228)
feat(tc): make SPE feature asymmetric
feat(cm): handle asymmetry for SPE feature
feat(cm): support for asymmetric feature among cores
feat(cpufeat): add new feature state for asymmetric features


# 43d1d951 18-Jul-2024 Manish Pandey <manish.pandey2@arm.com>

feat(cpufeat): add new feature state for asymmetric features

Introduce a new feature state, CHECK_ASYMMETRIC, to cater for features
which are asymmetric across cores. This state is useful for platforms
which have architecturally asymmetric cores (a feature is present in
only one type of core, e.g. big).
This state is similar to FEAT_STATE_CHECK (dynamic detection), except
that the feature state is also checked on each core during the warmboot
path, overriding the context (just for asymmetric features) which was
set up by the core executing the CPU_ON call.

Only the Non-secure context is re-checked, as the Secure and Realm
contexts are created on the same core.
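
A minimal sketch of this warmboot-path recheck (the helper names are
hypothetical, not the actual TF-A functions):

    #include <stdbool.h>

    /* Hypothetical helpers standing in for the real ID register read and
     * Non-secure context update done by the context management library. */
    extern bool this_core_supports_feat_xyz(void);
    extern void ns_context_enable_feat_xyz(bool enable);

    /* Runs on the woken core; overrides only the Non-secure context entry
     * for the asymmetric feature, set up earlier by the CPU_ON caller. */
    static void recheck_asymmetric_feature(void)
    {
            ns_context_enable_feat_xyz(this_core_supports_feat_xyz());
    }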

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ic78a0b6ca996e0d7881c43da1a6a0c422f528ef3


# 23fc05a3 10-May-2024 Manish Pandey <manish.pandey2@arm.com>

Merge "docs(context-mgmt): add documentation for context management library" into integration


# 4efd2193 30-Oct-2023 Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>

docs(context-mgmt): add documentation for context management library

This patch adds some documentation for the context management library.
It mainly covers the design at a higher level, with more focus on
the cold boot and warm boot entries as well as the operations
involved during a context switch. Further, it also includes a
section on feature enablement for individual world contexts.

Change-Id: I77005730f4df7f183f56a2c6dd04f6362e813c07
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
