History log of /rk3399_ARM-atf/include/common/feat_detect.h (Results 1 – 10 of 10)
Revision Date Author Comments
# 4ca4b3e2 29-Jul-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes I3e086865,I47f05a9f,Iee495571 into integration

* changes:
fix(cpufeat): update FEAT_PAUTH's feat detect line to tri-state
fix(cpufeat): do feature detection before feature enablement
feat(cpufeat): do feature detection on secondary cores too



# d335bbb1 03-Jul-2025 Boyan Karatotev <boyan.karatotev@arm.com>

feat(cpufeat): do feature detection on secondary cores too

Feature detection currently only happens on the boot core; however, it
is possible to have asymmetry between cores. TF-A supports limited such
configurations so it should check secondary cores too.

Change-Id: Iee4955714685be9ae6a017af4a6c284e835ff299
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>



# 31ddca40 14-Apr-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge "feat(psci): remove cpu context init by index" into integration


# ef738d19 21-Jun-2024 Manish Pandey <manish.pandey2@arm.com>

feat(psci): remove cpu context init by index

Currently, the calling core (meaning the core which received the call to
CPU_ON or the powerdown path of CPU_SUSPEND on the same core) is in
charge of initialising the context for the waking core (the warmboot
entrypoint for both). This is convenient because the calling core can
write the context while in coherency and the waking core will only need
the context after it has entered coherency. This avoids any cache
maintenance and makes communication simple.

However, this has 3 main problems:
a) asymmetric feature support is problematic - the calling core has no
way of knowing the feature set of the waking core. If the two
diverge, the architectural feature discovery via ID registers breaks
down. We've thus far "fixed" this on a case by case basis which
doesn't scale and introduces redundancy.

b) powerdown abandon (pabandon) introduces a contradiction - the calling
core has to initialise the context for when the core wakes up, but
should the core not power down, it needs its old context intact. The only
way to work around this is by keeping two copies of context which
incurs a runtime and memory overhead.

c) cm_prepare_el3_exit[_ns]() doesn't have access to the entrypoint but needs
it to make initialisation decisions. We can infer some of this from
registers that have already been written but this is awkwardly
limiting for what we can do. This also necessitates the split from
the context initialisation.

We can solve all three by making each core fully own its own context.
The calling core then only writes entrypoint information and nothing
else. The waking core then initialises its own context as it sees fit,
with full knowledge of the whole picture.

The only tricky bit is cache coherency - the waking core has to be able
to coherently observe its new entrypoint. Calling cores will write to
the shared region with coherent caches on. If we make sure to read the
context only after the waking core has entered coherency, then we can
avoid cache operations and let hardware handle everything.

We can skip the spsr check for FEAT_TCR2 as it doesn't make a
difference. We can also skip enabling it twice from generic code.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I86e7fe8b698191fc3b469e5ced1fd010f8754b0e
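
As a rough sketch of the ownership model described above (the flow and
the helper psci_get_warm_entrypoint are hypothetical; cm_init_my_context
and cm_prepare_el3_exit_ns mirror TF-A naming, but the exact signatures
here are assumptions, not code taken from the patch):

    /* Sketch only: each core initialises its own context at warm boot. */

    typedef struct entry_point_info entry_point_info_t;    /* opaque here */

    /* Assumed helpers, modelled on TF-A naming conventions. */
    extern const entry_point_info_t *psci_get_warm_entrypoint(unsigned int core_idx);
    extern void cm_init_my_context(const entry_point_info_t *ep);
    extern void cm_prepare_el3_exit_ns(void);

    void warmboot_entry(unsigned int my_core_idx)
    {
        /*
         * By this point the waking core has entered coherency, so the
         * entrypoint written by the calling core can be read directly,
         * with no explicit cache maintenance.
         */
        const entry_point_info_t *ep = psci_get_warm_entrypoint(my_core_idx);

        /*
         * Initialise this core's own context with full knowledge of its
         * own feature set, rather than relying on the calling core's view.
         */
        cm_init_my_context(ep);

        /* Program lower-EL system registers before exiting EL3. */
        cm_prepare_el3_exit_ns();
    }

The calling core's only remaining job is to publish the entry point;
everything feature-dependent happens on the core that will actually use
the context.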



# 553b70c3 19-Aug-2024 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "ar/asymmetricSupport" into integration

* changes:
feat(tc): enable trbe errata flags for Cortex-A520 and X4
feat(cm): asymmetric feature support for trbe
refactor(err

Merge changes from topic "ar/asymmetricSupport" into integration

* changes:
feat(tc): enable trbe errata flags for Cortex-A520 and X4
feat(cm): asymmetric feature support for trbe
refactor(errata-abi): move EXTRACT_PARTNUM to arch.h
feat(cpus): workaround for Cortex-A520(2938996) and Cortex-X4(2726228)
feat(tc): make SPE feature asymmetric
feat(cm): handle asymmetry for SPE feature
feat(cm): support for asymmetric feature among cores
feat(cpufeat): add new feature state for asymmetric features



# 43d1d951 18-Jul-2024 Manish Pandey <manish.pandey2@arm.com>

feat(cpufeat): add new feature state for asymmetric features

Introduce a new feature state CHECK_ASYMMETRIC to cater for features
which are asymmetric across cores. This state is useful for platforms
which have architecturally asymmetric cores (a feature is only present
in one type of core, e.g. big).
This state is similar to FEAT_STATE_CHECK (dynamic detection) except
that the feature state is also checked on each core during the warmboot
path, overriding the context (just for asymmetric features) which was
set up by the core executing the CPU_ON call.

Only the Non-secure context will be re-checked, as the Secure and Realm
contexts are created on the same core.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ic78a0b6ca996e0d7881c43da1a6a0c422f528ef3
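
For reference, the feature states extended by this change can be
pictured with the sketch below; the exact macro names and values in
feat_detect.h may differ from this illustration.

    /* Sketch of the feature detection states after this change. */
    #define FEAT_STATE_DISABLED          0  /* build flag off: feature never used      */
    #define FEAT_STATE_ALWAYS            1  /* build flag on: feature assumed present  */
    #define FEAT_STATE_CHECK             2  /* detect dynamically on the boot core     */
    #define FEAT_STATE_CHECK_ASYMMETRIC  3  /* also re-check per core at warm boot     */

A feature built as CHECK_ASYMMETRIC behaves like CHECK, with the extra
per-core re-check described above applied to the Non-secure context
only.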



# 344e5e81 19-Jan-2023 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "feat_state_rework" into integration

* changes:
feat(fvp): enable FEAT_HCX by default
refactor(context-mgmt): move FEAT_HCX save/restore into C
refactor(cpufeat): conv

Merge changes from topic "feat_state_rework" into integration

* changes:
feat(fvp): enable FEAT_HCX by default
refactor(context-mgmt): move FEAT_HCX save/restore into C
refactor(cpufeat): convert FEAT_HCX to new scheme
feat(fvp): enable FEAT_FGT by default
refactor(context-mgmt): move FEAT_FGT save/restore code into C
refactor(amu): convert FEAT_AMUv1 to new scheme
refactor(cpufeat): decouple FGT feature detection and build flags
refactor(cpufeat): check FEAT_FGT in a new way
refactor(cpufeat): move helpers into .c file, rename FEAT_STATE_
feat(aarch64): make ID system register reads non-volatile



# 69c17f52 14-Nov-2022 Andre Przywara <andre.przywara@arm.com>

refactor(cpufeat): move helpers into .c file, rename FEAT_STATE_

The FEATURE_DETECTION functionality had some definitions in a header
file, although they were only used internally in the .c file.
Move them over there, since they are of no interest to other users.

Also use the opportunity to rename the less telling FEAT_STATE_[12]
names, and let the "0" case join the game. We use DISABLED, ALWAYS, and
CHECK now, so that the casual reader has some idea what those numbers
are supposed to mean.

feature_panic() becomes "static inline", since disabling all features
makes it unused, so the compiler complains otherwise.

Finally add a new category "cpufeat" to cover CPU feature related
changes.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Change-Id: If0c8ba91ad22440260ccff383c33bdd055eefbdc
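
A sketch of the helper mentioned above; the body shown here is an
assumption modelled on the commit message rather than a copy of the
patch.

    #include <common/debug.h>   /* TF-A header providing ERROR() and panic() */

    /*
     * static inline: if every feature is disabled at build time the helper
     * is unreferenced, and a plain static function would trigger an
     * unused-function warning.
     */
    static inline void feature_panic(char const *feat_name)
    {
        ERROR("FEAT_%s not supported by the PE\n", feat_name);
        panic();
    }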



# f6ca81dd 07-Apr-2022 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "jc/detect_feat" into integration

* changes:
docs(build): update the feature enablement flags
refactor(el3-runtime): replace ARM_ARCH_AT_LEAST macro with FEAT flags
re

Merge changes from topic "jc/detect_feat" into integration

* changes:
docs(build): update the feature enablement flags
refactor(el3-runtime): replace ARM_ARCH_AT_LEAST macro with FEAT flags
refactor(el3-runtime): add arch-features detection mechanism



# 6a0da736 17-Jan-2022 Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>

refactor(el3-runtime): add arch-features detection mechanism

This patch adds an architectural feature detection procedure to ensure
that enabled features are present in the given hardware implementation.

It verifies whether the architecture build flags passed during
compilation match the respective features by reading their ID
registers. It reads through all the enabled feature specific ID
registers at once and panics in case of a mismatch (feature enabled
but not implemented in the PE).

Feature flags are used in sections (context_management, register save
and restore routines) during context switch. If an enabled feature flag
is not supported by the PE, it causes an exception while saving or
restoring the registers guarded by it.

With this mechanism, the build flags are validated at an early
phase prior to their usage, thereby preventing any undefined action
under their control.

This implementation is based on a tri-state approach for each feature;
currently FEAT_STATE=0 and FEAT_STATE=1 are covered as part of this
patch. FEAT_STATE=2 is planned for the phase-2 implementation and will
be taken care of separately.

The patch has been explicitly tested by adding a new test_config with
a build configuration enabling the majority of the features; all of
them were detected on an FVP launched with parameters enabling v8.7
features.

Note: This is an experimental procedure and the mechanism itself is
guarded by a macro "FEATURE_DETECTION", which is currently disabled by
default.

The "FEATURE_DETECTION" macro is documented and the platforms are
encouraged to make use of this diagnostic tool by enabling this
"FEATURE_DETECTION" flag explicitly and get used to its behaviour
during booting before the procedure gets mandated.

Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
Change-Id: Ia23d95430fe82d417a938b672bfb5edc401b0f43
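
The overall shape of the mechanism can be pictured with the standalone
sketch below; the helper names (check_feature, read_feat_fgt_id_field),
the 'tainted' flag and the build-flag handling are assumptions for
illustration, not the patch's exact code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins for TF-A's ERROR()/panic(); the real code uses common/debug.h. */
    #define ERROR(...)  fprintf(stderr, __VA_ARGS__)
    extern void panic(void) __attribute__((noreturn));

    /* Assumed helper returning the feature's ID register field (e.g. from ID_AA64MMFR0_EL1). */
    extern unsigned int read_feat_fgt_id_field(void);

    /* Normally supplied by the build system, e.g. -DENABLE_FEAT_FGT=1. */
    #ifndef ENABLE_FEAT_FGT
    #define ENABLE_FEAT_FGT 1
    #endif

    static bool tainted;

    /*
     * Cross-check one feature: it was enabled at build time, so the
     * corresponding ID register field must report it as implemented.
     */
    static void check_feature(unsigned int build_flag, unsigned int id_field,
                              const char *feat_name)
    {
        if ((build_flag != 0U) && (id_field == 0U)) {
            ERROR("FEAT_%s enabled at build time but not implemented by the PE\n",
                  feat_name);
            tainted = true;
        }
    }

    /* Read every guarded ID register once, then panic on any mismatch. */
    void detect_arch_features(void)
    {
        check_feature(ENABLE_FEAT_FGT, read_feat_fgt_id_field(), "FGT");
        /* ... one call per feature guarded by a build flag ... */

        if (tainted) {
            panic();
        }
    }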
