History log of /rk3399_ARM-atf/include/lib/el3_runtime/cpu_data.h (Results 1 – 25 of 56)
Revision Date Author Comments
# ef397720 10-Nov-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "ar/idte3" into integration

* changes:
feat(cpufeat): add support for FEAT_IDTE3
feat(cpufeat): include enabled security state scope
feat(cpufeat): add ID register defines and read helpers



# f396aec8 09-Sep-2025 Arvind Ram Prakash <arvind.ramprakash@arm.com>

feat(cpufeat): add support for FEAT_IDTE3

This patch adds support for FEAT_IDTE3, which introduces trapping of
Group 3 and Group 5 (GMID_EL1 only) register accesses to EL3 (unless
they are trapped to EL2). IDTE3 allows EL3 to
modify the view of ID registers for lower ELs, and this capability
is used to disable fields of ID registers tied to disabled features.

The ID registers are initially read as-is and stored in context.
Then, based on the feature enablement status for each world, if a
particular feature is disabled, its corresponding field in the
cached ID register is set to Res0. When lower ELs attempt to read
an ID register, the cached ID register value is returned. This
allows EL3 to prevent lower ELs from accessing feature-specific
system registers that are disabled in EL3, even though the hardware
implements them.

The emulated ID register values are stored primarily in per-world
context, except for certain debug-related ID registers such as
ID_AA64DFR0_EL1 and ID_AA64DFR1_EL1, which are stored in the
cpu_data and are unique to each PE. This is done to support feature
asymmetry that is commonly seen in debug features.

FEAT_IDTE3 traps all Group 3 ID registers in the range
op0 == 3, op1 == 0, CRn == 0, CRm == {2–7}, op2 == {0–7} and the
Group 5 GMID_EL1 register. However, only a handful of ID registers
contain fields used to detect features enabled in EL3. Hence, we
only cache those ID registers, while the rest are transparently
returned as is to the lower EL.

This patch updates the CREATE_FEATURE_FUNCS macro to generate
update_feat_xyz_idreg_field() functions that disable ID register
fields on a per-feature basis. The enabled_worlds scope is used to
disable ID register fields for security states where the feature is
not enabled.

This EXPERIMENTAL feature is controlled by the ENABLE_FEAT_IDTE3
build flag and is currently disabled by default.

Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I5f998eeab81bb48c7595addc5595313a9ebb96d5
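
To illustrate the mechanism described above, here is a minimal sketch of
how a CREATE_FEATURE_FUNCS-style macro could emit a per-feature
update_*_idreg_field() helper that RES0s a field in a cached ID register
when the feature is disabled. The macro shape, the idreg_cache_t type
and the field position are illustrative assumptions, not the actual TF-A
implementation.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
	uint64_t id_aa64pfr0_el1;	/* cached value returned to lower ELs */
} idreg_cache_t;

/* Emit an update_<feat>_idreg_field() helper for one ID register field. */
#define DEFINE_IDREG_FIELD_UPDATER(_feat, _reg, _shift, _width)		\
static inline void update_##_feat##_idreg_field(idreg_cache_t *cache,	\
						bool enabled)		\
{									\
	if (!enabled) {							\
		/* RES0 the field: lower ELs then see the feature as absent. */ \
		cache->_reg &= ~(((1ULL << (_width)) - 1ULL) << (_shift)); \
	}								\
}

/* Example instantiation for a hypothetical 4-bit field at bit 32. */
DEFINE_IDREG_FIELD_UPDATER(feat_xyz, id_aa64pfr0_el1, 32, 4)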



# 7303319b 08-Nov-2025 Chris Kay <chris.kay@arm.com>

Merge changes from topic "NUMA_AWARE_PER_CPU" into integration

* changes:
docs(maintainers): add per-cpu framework into maintainers.rst
feat(per-cpu): add documentation for per-cpu framework
feat(rdv3): enable numa aware per-cpu for RD-V3-Cfg2
feat(per-cpu): migrate amu_ctx to per-cpu framework
feat(per-cpu): migrate spm_core_context to per-cpu framework
feat(per-cpu): migrate psci_ns_context to per-cpu framework
feat(per-cpu): migrate psci_cpu_pd_nodes to per-cpu framework
feat(per-cpu): migrate rmm_context to per-cpu framework
feat(per-cpu): integrate per-cpu framework into BL31/BL32
feat(per-cpu): introduce framework accessors/definers
feat(per-cpu): introduce linker changes for NUMA aware per-cpu framework
docs(changelog): add scope for per-cpu framework



# 98859b99 29-Jan-2025 Sammit Joshi <sammit.joshi@arm.com>

feat(per-cpu): integrate per-cpu framework into BL31/BL32

Integrate per-cpu support into BL31/BL32 by extending the following
areas:

Zero-initialization: Treats per-cpu sections like .bss and clears them
during early C runtime initialization. For platforms that enable
NUMA_AWARE_PER_CPU, invokes a platform hook to zero-initialize
node-specific per-cpu regions.

Cache maintenance: Extends the BL31 exit path to clean dcache lines
covering the per-cpu region, ensuring data written by the primary core
is visible to secondary cores.

tpidr_el3 setup: Initializes tpidr_el3 with the base address of the
current CPU’s per-cpu section. This allows the per-cpu framework to
resolve local CPU accesses efficiently.

The percpu_data object is currently stored in tpidr_el3. Since the
per-cpu framework will use tpidr_el3 for this-cpu access, percpu_data
must be migrated to avoid conflict. This commit moves percpu_data to
the per-cpu framework.

Signed-off-by: Sammit Joshi <sammit.joshi@arm.com>
Signed-off-by: Rohit Mathew <rohit.mathew@arm.com>
Change-Id: Iff0c2e1f8c0ebd25c4bb0b09bfe15dd4fbe20561
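
A minimal sketch of the tpidr_el3 setup and this-cpu access described
above, assuming a single contiguous per-cpu region; __PERCPU_START__,
PERCPU_SECTION_SIZE and the function names are hypothetical, while
write_tpidr_el3(), read_tpidr_el3() and plat_my_core_pos() follow the
usual TF-A helper pattern:

#include <stddef.h>
#include <stdint.h>

#include <arch_helpers.h>
#include <plat/common/platform.h>

/* Hypothetical linker symbol and per-core stride, for illustration only. */
extern char __PERCPU_START__[];
#define PERCPU_SECTION_SIZE	0x200UL

static void percpu_set_this_cpu_base(void)
{
	uintptr_t base = (uintptr_t)__PERCPU_START__ +
			 (plat_my_core_pos() * PERCPU_SECTION_SIZE);

	/* Point tpidr_el3 at this core's per-cpu section. */
	write_tpidr_el3(base);
}

/* This-cpu access then becomes a register read plus a constant offset. */
static inline void *this_cpu_ptr(size_t offset)
{
	return (void *)(read_tpidr_el3() + offset);
}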



# c3e5f6b9 22-Oct-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "bk/simpler_panic" into integration

* changes:
fix(aarch64): do not print EL1 registers on EL3 panic
refactor(el3-runtime): streamline cpu_data assembly offsets using the cpu_ops template



# 4779becd 06-Aug-2025 Boyan Karatotev <boyan.karatotev@arm.com>

refactor(el3-runtime): streamline cpu_data assembly offsets using the cpu_ops template

The cpu_data structure, just like cpu_ops, is a collection of disparate
data that must be accessible from both C and assembly. Achieving this is
tricky as there is no way to export structure offsets from C directly, so
they must be manually recreated with `#define`s and asserts. However,
the cpu_data structure is quite old and the assembly offsets are a
patchwork of additions that is extremely difficult to reason about and
modify. In fact, certain currently unused builds with
ENABLE_RUNTIME_INSTRUMENTATION=1 fail to build.

To untangle this, convert the assembly offsets to the pattern used for
the cpu_ops structure. That is, first define the sizes of every member,
as generically as possible, and then chain their offsets one after the
other. To make sure this is always correct, add a CASSERT for the offset
of every member. This makes it easy to modify the structure and fixes
the build failures.

Change-Id: I61aeb55e9c494896663a3c719c10e3c072f56349
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
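
The pattern reads roughly as below on a cut-down, hypothetical struct
(CASSERT is the existing TF-A compile-time assert; the member names,
sizes and offsets here are examples, not the real cpu_data layout):

#include <stddef.h>
#include <stdint.h>

#include <lib/cassert.h>

typedef struct example_cpu_data {
	void *cpu_ops_ptr;
	uint64_t crash_buf[8];
} example_cpu_data_t;

/* Sizes of every member, defined as generically as possible... */
#define EXAMPLE_CPU_OPS_SIZE		8
#define EXAMPLE_CRASH_BUF_SIZE		(8 * 8)

/* ...then offsets chained one after the other... */
#define EXAMPLE_CPU_OPS_OFFSET		0
#define EXAMPLE_CRASH_BUF_OFFSET	(EXAMPLE_CPU_OPS_OFFSET + EXAMPLE_CPU_OPS_SIZE)

/* ...with a CASSERT per member so the C and assembly views cannot drift. */
CASSERT(EXAMPLE_CPU_OPS_OFFSET == offsetof(example_cpu_data_t, cpu_ops_ptr),
	assert_example_cpu_ops_offset_mismatch);
CASSERT(EXAMPLE_CRASH_BUF_OFFSET == offsetof(example_cpu_data_t, crash_buf),
	assert_example_crash_buf_offset_mismatch);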



# 833e3c40 02-Oct-2025 Mark Dykes <mark.dykes@arm.com>

Merge "fix: remove unused cpu_data related macros" into integration


# 2c730eea 12-Sep-2025 Boyan Karatotev <boyan.karatotev@arm.com>

fix: remove unused cpu_data related macros

There are no uses for CPU_DATA_PSCI_LOCK_OFFSET so it is removed.

PLAT_PCPU_DATA_SIZE is also unused in ST platforms and causes offsets
to mismatch when the linker garbage collects it. It is also removed.

CPU_DATA_PLAT_PCPU_OFFSET is also removed as its only use is in
rcar_lock_get() and related macros which are never called since all
calls of these macros lack an argument.

Change-Id: I883ab58c56b4082e0e8b19a8d8f6186945bcc58e
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>



# 46aff6fc 26-Sep-2025 Mark Dykes <mark.dykes@arm.com>

Merge "refactor(el3-runtime): move context security states to context.h" into integration


# 40c2cfdd 25-Sep-2025 Mark Dykes <mark.dykes@arm.com>

Merge "refactor(el3-runtime): extract cpu_data limitations to top-level constraints" into integration


# 34a22a02 05-Aug-2025 Boyan Karatotev <boyan.karatotev@arm.com>

refactor(el3-runtime): move context security states to context.h

The three security states (S, NS, RL) are architecturally quite
consistent - anything that uses them has the same numerical assignments
(0, 1, 2) and they are quite convenient for indexing. However, we're not
as consistent in tf-a and this is defined in a few places. Since
cpu_data has a dependency on the context management library, use its
security state convention in a few more places and take away this
responsibility from cpu_data.

Change-Id: Iec73b2be2eef91975554767557de72424d0031f1
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>



# 01b3d394 05-Aug-2025 Boyan Karatotev <boyan.karatotev@arm.com>

refactor(el3-runtime): extract cpu_data limitations to top-level constraints

CRASH_REPORTING is checked via an `#error` statement in the header,
while EL3_EXCEPTION_HANDLING is carefully carved out when not supported.
However, both are only used on AArch64 builds and never on AArch32. We
can promote both to proper make constraints and keep the cpu_data
implementation a little bit simpler.

Change-Id: Ia164e046f953a552dc6e6cf624961a90669eaeeb
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>



# aabab09e 01-Sep-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes Id38d6f1b,I5fcfe8dd,I7f3b50e5 into integration

* changes:
fix(cpus): inform the compiler that struct cpu_ops is aligned
refactor(el3-runtime): move the initialisation of the cpu_ops_ptr to C
fix(aarch32): make get_cpu_ops_ptr() PCS compliant



# 022fcb48 14-Aug-2025 Boyan Karatotev <boyan.karatotev@arm.com>

refactor(el3-runtime): move the initialisation of the cpu_ops_ptr to C

The difference between AArch32 and AArch64 is insignificant and the
usage is identical. The only thing that required the use of assembly was
that the get_cpu_ops_ptr() function was not PCS compliant and needed an
assembly wrapper around it. That has now been fixed, so move the
initialisation to C, where it is more readable and easier for the
compiler to optimise.

Change-Id: I5fcfe8ddb122dd35d58adc6d44a7484c5c595815
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
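
In rough C form the move amounts to something like the sketch below;
get_cpu_data() is the accessor macro from this header and
get_cpu_ops_ptr() is the (now PCS-compliant) lookup routine, but the
wrapper name and the local prototype are assumptions for illustration:

#include <lib/el3_runtime/cpu_data.h>

struct cpu_ops *get_cpu_ops_ptr(void);

/* Sketch only: cache this core's cpu_ops pointer from C instead of asm. */
static void init_this_cpu_ops(void)
{
	get_cpu_data(cpu_ops_ptr) = get_cpu_ops_ptr();
}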



# ee37db50 09-Jul-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "xlnx_fix_gen_op_datatype" into integration

* changes:
fix(el3-runtime): typecast operands to match data type
fix(arm): typecast operands to match data type


# f05b4894 24-Apr-2024 Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>

fix(el3-runtime): typecast operands to match data type

This corrects the MISRA violation C2012-10.3:
The value of an expression shall not be assigned to an object with a
narrower essential type or of a different essential type category.
Replaced usage of 'unsigned int' with 'size_t' to ensure type
consistency and prevent assignment to a narrower or different
essential type.

Change-Id: I79501e216a04753ebd005d64375357b9332440d9
Signed-off-by: Nithin G <nithing@amd.com>
Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
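
For illustration, this is the kind of change Rule 10.3 asks for, shown on
a made-up helper rather than code from this file: keep the destination
object and the assigned expression in the same essential type by using
size_t operands throughout instead of unsigned int.

#include <stddef.h>

/* Hypothetical example: operands and destination are all size_t, so
 * nothing is assigned to a narrower or different essential type. */
static size_t percpu_offset(size_t core, size_t stride)
{
	size_t offset = core * stride;	/* before: unsigned int operands */

	return offset;
}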



# e493b522 19-Jun-2025 Manish Pandey <manish.pandey2@arm.com>

Merge "perf(bl31): convert cpu_data fetching to C" into integration


# d43b2ea6 18-Mar-2025 Boyan Karatotev <boyan.karatotev@arm.com>

perf(bl31): convert cpu_data fetching to C

The assembly routines are opaque to the compiler and it can't inline
them. There is also no requirement for them to be called without a
stack - each of their calls has a stack available. So convert them to C
so that the compiler can do its inlining magic.

On AArch32 we need to be able to call _cpu_data from the entrypoint so
it has to stay as a slight exception.

We can also straighten out the type of the cpu_ops_ptr member so we
don't have to cast it everywhere.

Change-Id: I9c2939a955b396edf26b99ef36318eebeaab13e6
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
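
A sketch of what the C version of the fetch can look like under the
scheme described here, where tpidr_el3 holds the current core's cpu_data
pointer (the exact definition in the tree may differ; struct cpu_data is
only forward-declared to keep the example self-contained):

#include <arch_helpers.h>

struct cpu_data;	/* full definition lives in cpu_data.h */

/* A static inline the compiler can fold into every caller, replacing
 * the opaque assembly fetch routine and its branch. */
static inline struct cpu_data *_cpu_data(void)
{
	return (struct cpu_data *)read_tpidr_el3();
}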



# 31ddca40 14-Apr-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge "feat(psci): remove cpu context init by index" into integration


# ef738d19 21-Jun-2024 Manish Pandey <manish.pandey2@arm.com>

feat(psci): remove cpu context init by index

Currently, the calling core (meaning the core which received the call to
CPU_ON or the powerdown path of CPU_SUSPEND on the same core) is in
charge of initialising the context for the waking core (the warmboot
entrypoint for both). This is convenient because the calling core can
write the context while in coherency and the waking core will only need
the context after it has entered coherency. This avoids any cache
maintenance and makes communication simple.

However, this has 3 main problems:
a) asymmetric feature support is problematic - the calling core has no
way of knowing the feature set of the waking core. If the two
diverge, the architectural feature discovery via ID registers breaks
down. We've thus far "fixed" this on a case by case basis which
doesn't scale and introduces redundancy.

b) powerdown abandon (pabandon) introduces a contradiction - the calling
core has to initialise the context for when the core wakes up, but
should the core not powerdown it needs its old context intact. The only
way to work around this is by keeping two copies of context which
incurs a runtime and memory overhead.

c) cm_prepare_el3_exit[_ns]() doesn't have access to the entrypoint but needs
it to make initialisation decisions. We can infer some of this from
registers that have already been written but this is awkwardly
limiting for what we can do. This also necessitates the split from
the context initialisation.

We can solve all three by making a core take full ownership of its
own context. The calling core then only writes entrypoint information
and nothing else. The waking core then initialises its own context as it
sees fit with full knowledge of the whole picture.

The only tricky bit is cache coherency - the waking core has to be able
to coherently observe its new entrypoint. Calling cores will write to
the shared region with coherent caches on. If we make sure to read the
context only after the waking core has entered coherency, then we can
avoid cache operations and let hardware handle everything.

We can skip the spsr check for FEAT_TCR2 as it doesn't make a
difference. We can also skip enabling it twice from generic code.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I86e7fe8b698191fc3b469e5ced1fd010f8754b0e



# a8a5d39d 24-Feb-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "bk/errata_speed" into integration

* changes:
refactor(cpus): declare runtime errata correctly
perf(cpus): make reset errata do fewer branches
perf(cpus): inline the init_cpu_data_ptr function
perf(cpus): inline the reset function
perf(cpus): inline the cpu_get_rev_var call
perf(cpus): inline cpu_rev_var checks
refactor(cpus): register DSU errata with the errata framework's wrappers
refactor(cpus): convert checker functions to standard helpers
refactor(cpus): convert the Cortex-A65 to use the errata framework
fix(cpus): declare reset errata correctly



# b07c317f 19-Nov-2024 Boyan Karatotev <boyan.karatotev@arm.com>

perf(cpus): inline the init_cpu_data_ptr function

Similar to the reset function inline, inline this one too so that the
costly branch is avoided at no extra cost.

Change-Id: I54cc399e570e9d0f373ae13c7224d32dbdfae1e5
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>



# fa5fc59f 23-Oct-2024 Olivier Deprez <olivier.deprez@arm.com>

Merge "fix(el3-runtime): correct CASSERT for cpu data size" into integration


# 483dc2e4 11-Jan-2024 Omkar Anand Kulkarni <omkar.kulkarni@arm.com>

fix(el3-runtime): correct CASSERT for cpu data size

Build breaks when EL3_EXCEPTION_HANDLING option is enabled. The
CPU_DATA_SIZE macro does not consider the size required to save the
ehf_data field of the cpu_data structure.

include/lib/el3_runtime/cpu_data.h:163:17: error: size of array
'assert_cpu_data_size_mismatch' is negative
assert_cpu_data_size_mismatch);

This patch makes the CPU_DATA_SIZE macro account for the ehf_data
field. It also adds the relevant checks and asserts in case the
ehf_data field is not considered.

Signed-off-by: Omkar Anand Kulkarni <omkar.kulkarni@arm.com>
Change-Id: I3c11b2982f4a612ce28e46848b5c5035a8f8efc2



# 1d651211 06-Oct-2021 Soby Mathew <soby.mathew@arm.com>

Merge changes from topic "za/feat_rme" into integration

* changes:
refactor(gpt): productize and refactor GPT library
feat(rme): disable Watchdog for Arm platforms if FEAT_RME enabled
docs(rme): add build and run instructions for FEAT_RME
fix(plat/fvp): bump BL2 stack size
fix(plat/fvp): allow changing the kernel DTB load address
refactor(plat/arm): rename ARM_DTB_DRAM_NS region macros
refactor(plat/fvp): update FVP platform DTS for FEAT_RME
feat(plat/arm): add GPT initialization code for Arm platforms
feat(plat/fvp): add memory map for FVP platform for FEAT_RME
refactor(plat/arm): modify memory region attributes to account for FEAT_RME
feat(plat/fvp): add RMM image support for FVP platform
feat(rme): add GPT Library
feat(rme): add ENABLE_RME build option and support for RMM image
refactor(makefile): remove BL prefixes in build macros
feat(rme): add context management changes for FEAT_RME
feat(rme): add Test Realm Payload (TRP)
feat(rme): add RMM dispatcher (RMMD)
feat(rme): run BL2 in root world when FEAT_RME is enabled
feat(rme): add xlat table library changes for FEAT_RME
feat(rme): add Realm security state definition
feat(rme): add register definitions and helper functions for FEAT_RME


