History log of /rk3399_ARM-atf/include/arch/aarch64/asm_macros.S (Results 1 – 25 of 48)
Revision Date Author Comments
# 7b49b2ec 18-Dec-2025 Govindraj Raja <govindraj.raja@arm.com>

Merge changes from topic "xl/c1pro-errata" into integration

* changes:
fix(cpus): workaround for C1-Pro erratum 3686597
fix(cpus): workaround for C1-Pro erratum 3300099
fix(cpus): workaround for C1-Pro erratum 3338470
fix(cpus): workaround for C1-Pro erratum 3362007
fix(cpus): workaround for C1-Pro erratum 3684268
fix(cpus): workaround for C1-Pro erratum 3694158
fix(cpus): workaround for C1-Pro erratum 3706576


# 429f4f6e 10-Dec-2025 Xialin Liu <xialin.liu@arm.com>

fix(cpus): workaround for C1-Pro erratum 3686597

C1-Pro erratum 3686597 is a Cat B erratum that applies
to revisions r0p0, r1p0 and is fixed in r1p1.

This erratum can be avoided by setting IMP_CPUECTLR_EL1[57]
to 1.

SDEN documentation:
https://developer.arm.com/documentation/SDEN-3273080/1300/?lang=en

Change-Id: I59a5d9316bf66793eae5dac08102231d0e2640fb
Signed-off-by: Xialin Liu <xialin.liu@arm.com>
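
As a rough illustration of the mechanism described above (a sketch,
not the actual patch), such a reset-time workaround boils down to a
read-modify-write that sets bit 57 of the implementation-defined
CPUECTLR register. The system-register encoding used here is a
placeholder; the real one must come from the C1-Pro TRM:

	/* Placeholder encoding for IMP_CPUECTLR_EL1; see the C1-Pro TRM. */
	#define C1_PRO_IMP_CPUECTLR_EL1	S3_0_C15_C1_4

	mrs	x0, C1_PRO_IMP_CPUECTLR_EL1
	orr	x0, x0, #(1 << 57)	/* erratum 3686597 workaround bit */
	msr	C1_PRO_IMP_CPUECTLR_EL1, x0
	isb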


# ab471aeb 29-Oct-2025 Lauren Wehrmeister <lauren.wehrmeister@arm.com>

Merge "fix(security): add clrbhb support" into integration


# d6affea1 02-Oct-2025 Govindraj Raja <govindraj.raja@arm.com>

fix(security): add clrbhb support

TF-A mitigates the Spectre-BHB (CVE-2022-23960) issue with a loop
workaround based on https://developer.arm.com/documentation/110280/latest/

On platforms that support the `clrbhb` instruction, it is recommended
to use `clrbhb` instead of the loop workaround.

Ref: https://developer.arm.com/documentation/102898/0108/

Change-Id: Ie6e56e96378503456a1617d5e5d51bc64c2e0f0b
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
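
A minimal sketch of the idea, not the patch itself: the build option
name below is hypothetical, and CLRBHB is emitted through its
hint-space encoding so that older assemblers also accept it.

	.macro	apply_bhb_mitigation
#if WORKAROUND_CLRBHB			/* hypothetical build option */
	hint	#22			/* CLRBHB: clear branch history */
	isb
#else
	/* fall back to the loop-based Spectre-BHB workaround here */
#endif
	.endm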


# dfdb73f7 16-Sep-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "bk/no_blx_setup" into integration

* changes:
fix: replace stray BL2_AT_EL3 with RESET_TO_BL2
refactor(aarch64): move BL31 specific setup out of the PSCI entrypoint
refactor: unify blx_setup() and blx_main()
fix(bl2): unify the BL2 EL3 and RME entrypoints


# 04cf04c7 13-Aug-2025 Boyan Karatotev <boyan.karatotev@arm.com>

fix(bl2): unify the BL2 EL3 and RME entrypoints

BL2 has 3(!) entrypoints:
1) the regular EL1 entrypoint (once per AArch)
2) an EL3 entrypoint
3) an EL3 entrypoint with RME

The EL1 and EL3 entrypoints are quite distinct, so it's useful to keep
them separate. But the EL3 and RME entrypoints are conceptually
identical, just configured differently and with slightly different
assumptions (e.g. whether we can rely on BL1). So put them together with
only the configuration as a difference. This has a few benefits:
* makes the naming consistent - BL2 always runs at EL1, BL2_EL3 always
runs at EL3. This is most important for the linker script.
* paves the way for ENABLE_RME and RESET_TO_BL2 to coexist.
* allows for more general refactors

Currently, ENABLE_RME and RESET_TO_BL2 are mutually exclusive (from a
makefile constraint) so the checks are simplified to one or the other as
there is no danger of their simultaneous use.

Change-Id: Iecffab2ff3a0bd7823f8277d9f66e22e4f42cc8c
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>


# 9e0c318d 28-Apr-2025 Govindraj Raja <govindraj.raja@arm.com>

Merge "feat(cpufeat): add support for FEAT_PAUTH_LR" into integration


# b1e1f42e 25-Apr-2025 Govindraj Raja <govindraj.raja@arm.com>

Merge changes I005586ef,I0d4d74bc into integration

* changes:
fix(cpufeat): replace "bti" mnemonic with hint instructions
fix(cpufeat): improve xpaci wrapper


# bdac600b 15-Apr-2025 Andre Przywara <andre.przywara@arm.com>

fix(cpufeat): replace "bti" mnemonic with hint instructions

Older GNU binutils versions require specifying at least "armv8.5-a" for
the Arm architecture revision to accept "bti" instructions in the
assembly code. Binutils v2.35 relaxed this: since "bti" is in the
hint space, it is ignored on older cores and does NOT require a
BTI-enabled core to execute.

To not exclude those older binutils versions (as shipped with Ubuntu
20.04), use the "hint" encoding for the "bti" instructions, which are
accepted regardless of the minimum architecture revision. Hide this
encoding in a macro, to make the "bti" usage more readable in the
source code.

Change-Id: I005586efd8974a3f2c7202896c881bb5fed07eea
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
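
The hint-space trick looks roughly like this (a sketch of the
approach, not necessarily the exact macros the patch adds); the hint
numbers are the architected aliases of the BTI variants:

	.macro	bti_c
	hint	#34		/* "bti c" */
	.endm

	.macro	bti_j
	hint	#36		/* "bti j" */
	.endm

	.macro	bti_jc
	hint	#38		/* "bti jc" */
	.endm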


# 025b1b81 11-Mar-2025 John Powell <john.powell@arm.com>

feat(cpufeat): add support for FEAT_PAUTH_LR

This patch enables FEAT_PAUTH_LR at EL3 on systems that support it when
the new ENABLE_FEAT_PAUTH_LR flag is set.

Currently, PAUTH_LR is only supported by the Arm Clang compiler, not by GCC.

Change-Id: I7db1e34b661ed95cad75850b62878ac5d98466ea
Signed-off-by: John Powell <john.powell@arm.com>


# ee656609 16-Apr-2025 André Przywara <andre.przywara@arm.com>

Merge changes Id942c20c,Idd286bea,I8917a26e,Iec8c3477,If3c25dcd, ... into integration

* changes:
feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED
perf(cpufeat): centralise PAuth key saving
refactor(cpufeat): convert FEAT_PAuth setup to C
refactor(cpufeat): prepare FEAT_PAuth for FEATURE_DETECTION
chore(cpufeat): remove PAuth presence checks
feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED


# 8d9f5f25 02-Apr-2025 Boyan Karatotev <boyan.karatotev@arm.com>

feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED

FEAT_PAuth is the second to last feature to be a boolean choice - it's
either unconditionally compiled in and must be present in hardware or
it's not compiled in. FEAT_PAuth is architected to be backwards
compatible - a subset of the branch guarding instructions (pacia/autia)
execute as NOPs when PAuth is not present. That subset is used with
`-mbranch-protection=standard` and -march pre-8.3. This patch adds the
necessary logic to also check accesses of the non-backward compatible
registers and allow a fully checked implementation.

Note that checked support requires -march to be pre-8.3, as otherwise
the compiler will emit branch protection instructions that are not
NOPs without PAuth (e.g. retaa) and cannot be checked.

Change-Id: Id942c20cae9d15d25b3d72b8161333642574ddaa
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
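
For the checked case, presence can be discovered from the ID registers
before any non-backward-compatible register is touched. A minimal
sketch, with illustrative register and label choices (APA3/GPA3 in
ID_AA64ISAR2_EL1 are ignored here):

	mrs	x0, id_aa64isar1_el1
	tst	x0, #0xff0		/* APA [7:4] | API [11:8] nonzero? */
	b.ne	pauth_present		/* hypothetical label */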


# a8a5d39d 24-Feb-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "bk/errata_speed" into integration

* changes:
refactor(cpus): declare runtime errata correctly
perf(cpus): make reset errata do fewer branches
perf(cpus): inline the init_cpu_data_ptr function
perf(cpus): inline the reset function
perf(cpus): inline the cpu_get_rev_var call
perf(cpus): inline cpu_rev_var checks
refactor(cpus): register DSU errata with the errata framework's wrappers
refactor(cpus): convert checker functions to standard helpers
refactor(cpus): convert the Cortex-A65 to use the errata framework
fix(cpus): declare reset errata correctly


# 0d020822 19-Nov-2024 Boyan Karatotev <boyan.karatotev@arm.com>

perf(cpus): inline the reset function

Similar to the cpu_rev_var and cpu_get_rev_var functions, inline the
call_reset_handler handler. This way we skip the costly branch at no
extra cost, as this is the only place where it is called.

While we're at it, drop the options for CPU_NO_RESET_FUNC. The only cpus
that need that are virtual cpus which can spare the tiny bit of
performance lost. The rest are real cores which can save on the check
for zero.

Now is a good time to put the assert for a missing cpu in the
get_cpu_ops_ptr function so that it's a bit better encapsulated.

Change-Id: Ia7c3dcd13b75e5d7c8bafad4698994ea65f42406
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>


# 0d22145f 10-Feb-2025 Manish Pandey <manish.pandey2@arm.com>

Merge "fix: add support for 128-bit sysregs to EL3 crash handler" into integration


# 58fadd62 15-Nov-2024 Igor Podgainõi <igor.podgainoi@arm.com>

fix: add support for 128-bit sysregs to EL3 crash handler

The following changes have been made:
* Add new sysreg definitions and ASM macro is_feat_sysreg128_present_asm
* Add registers TTBR0_EL2 and VTTBR_EL2 to EL3 crash handler output
* Use MRRS instead of MRS for registers TTBR0_EL1, TTBR0_EL2, TTBR1_EL1,
VTTBR_EL2 and PAR_EL1

Change-Id: I0e20b2c35251f3afba2df794c1f8bc0c46c197ff
Signed-off-by: Igor Podgainõi <igor.podgainoi@arm.com>
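
Illustratively (a sketch, not the crash-handler code; it assumes the
feature check has already branched away and needs an assembler that
understands the d128 extension), the 64-bit and 128-bit reads differ
only in the move instruction and the destination pair:

	/* ...branch to 1f here when FEAT_SYSREG128 is not implemented... */
	mrrs	x0, x1, ttbr0_el1	/* 128-bit read into the x0/x1 pair */
	b	2f
1:	mrs	x0, ttbr0_el1		/* 64-bit read on older cores */
2: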


# b41b9997 19-Dec-2024 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "bk/smccc_feature" into integration

* changes:
fix(trbe): add a tsb before context switching
fix(spe): add a psb before updating context and remove context saving


# 73d98e37 02-Dec-2024 Boyan Karatotev <boyan.karatotev@arm.com>

fix(trbe): add a tsb before context switching

Just like for SPE, we need to synchronize TRBE samples before we change
the context to ensure everything goes where it was intended to. If that
is not done, the in-flight entries might use any piece of now incorrect
context as there are no implicit ordering requirements.

Prior to root context, the buffer drain hooks would have done that. But
now that must happen much earlier. So add a tsb to prepare_el3_entry as
well.

Annoyingly, the barrier can be reordered relative to other instructions
by default (rule RCKVWP). So add an isb after the psb/tsb to ensure that
they are ordered, at least as far as context is concerned.

Then, drop the buffer draining hooks. Everything they need to do is
already done by now. There's a notable difference in that there are no
dsb-s now. Since EL3 does not access the buffers or the feature
specific context, we don't need to wait for them to finish.

Finally, drop a stray isb in the context saving macro. It is now
absorbed into root context, but was missed.

Change-Id: I30797a40ac7f91d0bb71ad271a1597e85092ccd5
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
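
In sketch form (not the literal prepare_el3_entry code), the sequence
at EL3 entry becomes:

	psb	csync	/* drain in-flight SPE samples */
	tsb	csync	/* drain in-flight TRBE trace */
	isb		/* keep the barriers ordered w.r.t. later context writes */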


# f8088733 21-Nov-2024 Boyan Karatotev <boyan.karatotev@arm.com>

fix(spe): add a psb before updating context and remove context saving

In the chapter about FEAT_SPE (D16.4 specifically) it is stated that
"Sampling is always disabled at EL3". That means that disabling sampling
(writing PMBLIMITR_EL1.E to 0) is redundant and can be removed. The only
reason we save/restore SPE context is because of that disable, so those
can be removed too.

There's the issue of draining the profiling buffer though. No new
samples will have been generated since entering EL3. However, old
samples might still be in-flight. Unless synchronised by a psb csync,
those might be affected by our extensive context mutation. Adding a psb
in prepare_el3_entry should cater for that. Note that prior to the
introduction of root context this was not a problem as context remained
unchanged and the hooks took care of the rest.

Then, the only time we care about the buffer actually making it to
memory is when we exit coherency. On HW_ASSISTED_COHERENCY systems we
don't have to do anything, it should be handled for us. Systems without
it need a dsb to wait for them to complete. There should be one already
in each cpu's powerdown hook which should work.

While on the topic of barriers, the esb barrier is no longer used.
Remove it.

Change-Id: I9736fc7d109702c63e7d403dc9e2a4272828afb2
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>


# 2c746960 03-May-2024 Manish Pandey <manish.pandey2@arm.com>

Merge changes I9eba2e34,Iab2a2a2f into integration

* changes:
refactor(cpus): replace adr with adr_l
refactor(build): introduce adr_l macro


# 31857d4c 22-Feb-2024 Hsin-Hsiung Wang <hsin-hsiung.wang@mediatek.com>

refactor(build): introduce adr_l macro

Introduce the macro "adr_l", which can handle symbols or labels that
exceed the ±1MB access range of the "adr" instruction.

Change-Id: Iab2a2a2f8a11a5e21e386f1001ba27a8de621132
Signed-off-by: Hsin-Hsiung Wang <hsin-hsiung.wang@mediatek.com>
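
The usual shape of such a macro (the committed version may differ in
detail) pairs ADRP, which reaches +/-4GB at 4KB-page granularity, with
an ADD that patches in the low 12 bits:

	.macro	adr_l, dst, sym
	adrp	\dst, \sym
	add	\dst, \dst, :lo12:\sym
	.endm

	/* usage: adr_l x0, some_far_away_symbol */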


# 6f802c44 02-Nov-2023 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "mp/exceptions" into integration

* changes:
docs(ras): update RAS documentation
docs(el3-runtime): update BL31 exception vector handling
fix(el3-runtime): restrict lower el EA handlers in FFH mode
fix(ras): remove RAS_FFH_SUPPORT and introduce FFH_SUPPORT
fix(ras): restrict ENABLE_FEAT_RAS to have only two states
feat(ras): use FEAT_IESB for error synchronization
feat(el3-runtime): modify vector entry paths


# 970a4a8d 10-Oct-2023 Manish Pandey <manish.pandey2@arm.com>

fix(ras): restrict ENABLE_FEAT_RAS to have only two states

As part of migrating the RAS extension to the feature detection
mechanism, the macro ENABLE_FEAT_RAS was allowed to use dynamic
detection (FEAT_STATE 2). Since this feature impacts EL3 execution, its
presence needs to be known at compile time, so do not use the dynamic
detection part of the feature detection mechanism.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I23858f641f81fbd81b6b17504eb4a2cc65c1a752


# 6597fcf1 26-Jun-2023 Manish Pandey <manish.pandey2@arm.com>

feat(ras): use FEAT_IESB for error synchronization

For synchronization of errors at exception boundaries, TF-A uses the
"esb" instruction with FEAT_RAS, or "dsb" and "isb" otherwise. The
problem with the esb instruction is that, along with synchronizing
errors, it might also consume the error, which is not ideal in all
scenarios. On the other hand, we can't always use dsb as it is in the
hot path.

The best way to solve the above problem is to use the FEAT_IESB
feature, which provides controls to insert an implicit Error
synchronization event at exception entry and exception return.

The assumption in TF-A is that if the RAS Extension is present then
FEAT_IESB will also be present and enabled.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ie5861eec5da4028a116406bb4d1fea7dac232456
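
The enablement itself amounts to one control bit. A minimal sketch
(not the patch), assuming FEAT_IESB has already been confirmed
present:

	mrs	x0, sctlr_el3
	orr	x0, x0, #(1 << 21)	/* SCTLR_EL3.IESB */
	msr	sctlr_el3, x0
	isb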


# d04c04a4 25-May-2023 Manish Pandey <manish.pandey2@arm.com>

feat(el3-runtime): modify vector entry paths

Vector entries into EL3 from lower ELs first check for any pending
async EAs from the lower EL before handling the original exception.
This happens when there is an error (EA) in the system which has not
yet been signaled to the PE while executing at a lower EL. During entry
into EL3 the errors (EA) are synchronized, causing the async EA to pend
at EL3.

On detecting the pending EA (via ISR_EL1.A), EL3 either reflects it
back to the lower EL (KFH) or handles it in EL3 (FFH), based on the EA
routing model.

In Firmware First handling mode (FFH), EL3 handles the pended EA first
before returning to handle the original exception.

In Kernel First handling mode (KFH), EL3 returns to the lower EL
without handling the original exception. On returning to the lower EL,
the EA will be pended. In KFH mode there is a risk of bouncing back and
forth between EL3 and the lower EL if the EA is masked at the lower EL
or the priority of the EA is lower than that of the original exception.
This is a limitation of the current architecture but can be solved in
future if EL3 gets the capability to inject a virtual SError.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I3a2a31de7cf454d9d690b1ef769432a5b24f6c11
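
The pending-EA probe at the top of the lower-EL vector entries is, in
essence (register and label choices are illustrative):

	mrs	x0, isr_el1
	tbnz	x0, #8, handle_pending_ea	/* ISR_EL1.A: SError pending */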
