History log of /rk3399_ARM-atf/docs/getting_started/build-options.rst (Results 51 – 75 of 386)
Revision Date Author Comments
# f69f5512 30-Apr-2025 Nandan J <Nandan.J@arm.com>

feat(smcc): introduce a new vendor_el3 service for ACS SMC handler

In preparation to add support for the Architecture Compliance Suite
SMC services, reserve an SMC ID and introduce a handler function.
Currently, an empty placeholder function is added; handler support
will be introduced in future patches.

For more information on the System ACS, please refer to the link below:
https://developer.arm.com/Architectures/Architectural%20Compliance%20Suite

Signed-off-by: Nandan J <Nandan.J@arm.com>
Change-Id: Ib13ccae9d3829e3dcd1cd33c4a7f27efe1436d03
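
As a rough illustration, such a placeholder typically follows TF-A's generic
runtime-service handler shape; the sketch below uses an assumed function ID,
handler name and return codes, not the ones reserved by this patch:

    #include <stdint.h>

    typedef unsigned long u_register_t;    /* stand-in for TF-A's type */

    /* Assumed function ID; the real reserved ID is defined by the patch. */
    #define ACS_SMC_FID_EXAMPLE    0x87000050U

    #define SMC_OK     0
    #define SMC_UNK    (-1)

    /*
     * Empty placeholder: acknowledge the reserved ACS function ID and reject
     * everything else. Real handling is added by follow-up patches.
     */
    uintptr_t acs_smc_handler(uint32_t smc_fid, u_register_t x1, u_register_t x2,
                              u_register_t x3, u_register_t x4, void *cookie,
                              void *handle, u_register_t flags)
    {
        (void)x1; (void)x2; (void)x3; (void)x4;
        (void)cookie; (void)handle; (void)flags;

        if (smc_fid == ACS_SMC_FID_EXAMPLE)
            return SMC_OK;      /* placeholder: no-op for now */

        return (uintptr_t)SMC_UNK;
    }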



# 9e0c318d 28-Apr-2025 Govindraj Raja <govindraj.raja@arm.com>

Merge "feat(cpufeat): add support for FEAT_PAUTH_LR" into integration


# 025b1b81 11-Mar-2025 John Powell <john.powell@arm.com>

feat(cpufeat): add support for FEAT_PAUTH_LR

This patch enables FEAT_PAUTH_LR at EL3 on systems that support it when
the new ENABLE_FEAT_PAUTH_LR flag is set.

Currently, FEAT_PAUTH_LR is only supported by the Arm Clang compiler and not by GCC.

Change-Id: I7db1e34b661ed95cad75850b62878ac5d98466ea
Signed-off-by: John Powell <john.powell@arm.com>



# 139a5d05 18-Apr-2025 Madhukar Pappireddy <madhukar.pappireddy@arm.com>

Merge changes I86959e67,I0b0d1d36,I5b5267f4,I056c8710,I3474aa97 into integration

* changes:
chore: fix preprocessor checks
refactor: convert arm platforms to use the generic GIC driver
refactor(gic): promote most of the GIC driver to common code
refactor: make arm_gicv2.c and arm_gicv3.c common
refactor(fvp): use more arm generic code for gicv3



# 5d893410 07-Jan-2025 Boyan Karatotev <boyan.karatotev@arm.com>

refactor(gic): promote most of the GIC driver to common code

More often than not, Arm based systems include some revision of a GIC.
There are two ways of adding support for them in platform code - calling
the top-level helpers from plat/arm/common/arm_gicvX.c or by using the
driver directly. Both of these methods allow for a high degree of
customisation - most functions are defined to be weak and there are no
calls to any of them in generic code.

As it turns out, requirements around those GICs are largely the same.
Platforms that use arm_gicvX.c use the helpers identically among each
other. Platforms that use the driver directly tend to end up with calls
that look a lot like the arm_gicvX.c helpers, and the weakness of the
functions is never exercised.

All of this results in a lot of code duplication to do what is
essentially the same thing. Even though it's not a lot of code, when
multiplied among many platforms it becomes significant and makes
refactoring it quite difficult. It's also bug prone since the steps are
a little convoluted and things are likely to work even with subtle
errors (see 50009f61177421118f42d6a000611ba0e613d54b).

So promote as much of the GIC driver as possible to be called from common code. Do the
setup in bl31_main() and have every PSCI method do the state management
directly instead of delegating it to the platform hooks. We can base
this implementation on arm_gicvX.c since they already offer logical
names and have worked quite well so far with minimal changes.

The main benefit of doing this is reduced code duplication. If we assume
that, outside of some platform setup, GIC management is identical, then
a platform can add support by telling the build system, regardless of
GIC revision. The other benefit is performance - BL31 and PSCI already
know the core_pos and they can pass it as an argument instead of having
to call plat_my_core_pos(). Now, the only platform specific GIC actions
necessary are the saving and restoring of context on entering and
exiting a power domain. The PSCI library does not keep track of this, so
it is unable to perform it itself. The routines themselves are also
provided.

For compatibility all of this is hidden behind a build flag. Platforms
are encouraged to adopt this driver, but it would not be practical to
convert and validate every GIC based platform.

This patch renames the functions in question to follow the
gic_<function>() convention. This allows the names to be version
agnostic.

Finally, drop the weak definitions - they are unused, likely to remain
so, and can be added back if the need arises.

Change-Id: I5b5267f4b72f633fb1096400ec8e4b208694135f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>



# ee656609 16-Apr-2025 André Przywara <andre.przywara@arm.com>

Merge changes Id942c20c,Idd286bea,I8917a26e,Iec8c3477,If3c25dcd, ... into integration

* changes:
feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED
perf(cpufeat): centralise PAuth key saving
refactor(cpufeat): convert FEAT_PAuth setup to C
refactor(cpufeat): prepare FEAT_PAuth for FEATURE_DETECTION
chore(cpufeat): remove PAuth presence checks
feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED



# 8d9f5f25 02-Apr-2025 Boyan Karatotev <boyan.karatotev@arm.com>

feat(cpufeat): enable FEAT_PAuth to FEAT_STATE_CHECKED

FEAT_PAuth is the second to last feature to be a boolean choice - it's
either unconditionally compiled in and must be present in hardware or
it's not compiled in. FEAT_PAuth is architected to be backwards
compatible - a subset of the branch guarding instructions (pacia/autia)
execute as NOPs when PAuth is not present. That subset is used with
`-mbranch-protection=standard` and -march pre-8.3. This patch adds the
necessary logic to also check accesses of the non-backward compatible
registers and allow a fully checked implementation.

Note that checked support requires -march to be pre-8.3, as otherwise
the compiler will include branch protection instructions that are not
NOPs without PAuth (e.g. retaa), which cannot be checked.

Change-Id: Id942c20cae9d15d25b3d72b8161333642574ddaa
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
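
The checked path hinges on a runtime presence test before any non-NOP PAuth
register is touched. A minimal, self-contained sketch of such a check, using
the architectural ID register fields (the read helpers are assumed names, not
TF-A's actual accessors):

    #include <stdbool.h>
    #include <stdint.h>

    /* Architectural address-authentication ID fields (per the Arm ARM). */
    #define ID_AA64ISAR1_APA_SHIFT     4U   /* architected algorithm (QARMA5) */
    #define ID_AA64ISAR1_API_SHIFT     8U   /* implementation-defined algorithm */
    #define ID_AA64ISAR2_APA3_SHIFT    12U  /* QARMA3 algorithm */
    #define ID_REG_FIELD_MASK          0xfULL

    /* Assumed register read helpers wrapping the MRS instructions. */
    extern uint64_t read_id_aa64isar1_el1(void);
    extern uint64_t read_id_aa64isar2_el1(void);

    /* True only if some form of address authentication is implemented. */
    bool is_feat_pauth_present(void)
    {
        uint64_t isar1 = read_id_aa64isar1_el1();
        uint64_t isar2 = read_id_aa64isar2_el1();

        return (((isar1 >> ID_AA64ISAR1_APA_SHIFT) & ID_REG_FIELD_MASK) != 0U) ||
               (((isar1 >> ID_AA64ISAR1_API_SHIFT) & ID_REG_FIELD_MASK) != 0U) ||
               (((isar2 >> ID_AA64ISAR2_APA3_SHIFT) & ID_REG_FIELD_MASK) != 0U);
    }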



# 7e848540 20-Mar-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes from topic "dtpm_poc" into integration

* changes:
feat(docs): update mboot threat model with dTPM
docs(tpm): add design documentation for dTPM
fix(rpi3): expose BL1_RW to BL2 ma

Merge changes from topic "dtpm_poc" into integration

* changes:
feat(docs): update mboot threat model with dTPM
docs(tpm): add design documentation for dTPM
fix(rpi3): expose BL1_RW to BL2 map for mboot
feat(rpi3): add dTPM backed measured boot
feat(tpm): add Infineon SLB9670 GPIO SPI config
feat(tpm): add tpm drivers and framework
feat(io): add generic gpio spi bit-bang driver
feat(rpi3): implement eventlog handoff to BL33
feat(rpi3): implement mboot for rpi3



# a2dd13ca 21-Oct-2024 Abhi Singh <abhi.singh@arm.com>

docs(tpm): add design documentation for dTPM

- documentation for Discrete TPM drivers.
- documentation for a proof of concept on rpi3:
  Measured Boot using a Discrete TPM.

Signed-off-by: Abhi Singh <abhi.singh@arm.com>
Change-Id: If8e7c14a1c0b9776af872104aceeff21a13bd821



# c5ea3fac 12-Mar-2025 Soby Mathew <soby.mathew@arm.com>

Merge "feat(rmmd): add FEAT_MEC support" into integration


# 7e84f3cf 15-Mar-2024 Tushar Khandelwal <tushar.khandelwal@arm.com>

feat(rmmd): add FEAT_MEC support

This patch provides architectural support for further use of
Memory Encryption Contexts (MEC) by declaring the necessary
registers, bits, masks, helpers and values, and by modifying the
relevant registers to enable FEAT_MEC.

Signed-off-by: Tushar Khandelwal <tushar.khandelwal@arm.com>
Signed-off-by: Juan Pablo Conde <juanpablo.conde@arm.com>
Change-Id: I670dbfcef46e131dcbf3a0b927467ebf6f438fa4



# 2e0354f5 25-Feb-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge changes I3d950e72,Id315a8fe,Ib62e6e9b,I1d0475b2 into integration

* changes:
perf(cm): drop ZCR_EL3 saving and some ISBs and replace them with root context
perf(psci): get PMF timestamps with no cache flushes if possible
perf(amu): greatly simplify AMU context management
perf(mpmm): greatly simplify MPMM enablement



# 83ec7e45 06-Nov-2024 Boyan Karatotev <boyan.karatotev@arm.com>

perf(amu): greatly simplify AMU context management

The current code is incredibly resilient to updates to the spec and
has worked quite well so far. However, recent implementations expose a
weakness in that this is rather slow. A large part of it is written in
assembly, making it opaque to the compiler for optimisations. The
future-proofing requires reading registers that are effectively
`volatile`, making it even harder for the compiler, as well as adding
lots of implicit barriers, making it hard for the microarchitecture to
optimise as well.

We can make a few assumptions, checked by a few well placed asserts, and
remove a lot of this burden. For a start, at the moment there are 4
group 0 counters with static assignments. Contexting them is a trivial
affair that doesn't need a loop. Similarly, there can only be up to 16
group 1 counters. Contexting them is a bit harder, but we can do it with a
single branch and a falling-through switch. If/when both of these
change, we have a pair of asserts and the feature detection mechanism to
guard us against pretending that we support something we don't.

We can drop contexting of the offset registers. They are fully
accessible by EL2, so it is EL2's responsibility to preserve them on
powerdown.

Another small thing we can do is pass the core_pos into the hook.
The caller already knows which core we're running on, we don't need to
call this non-trivial function again.

Finally, knowing this, we don't really need the auxiliary AMUs to be
described by the device tree. Linux doesn't care at the moment, and any
information we need for EL3 can be neatly placed in a simple array.

All of this, combined with lifting the actual saving out of assembly,
reduces the instructions to save the context from 180 to 40, including a
lot fewer branches. The code is also much shorter and easier to read.

Also propagate to aarch32 so that the two don't diverge too much.

Change-Id: Ib62e6e9ba5be7fb9fb8965c8eee148d5598a5361
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
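
The falling-through switch can be pictured as below. This is a shortened
sketch for four counters with assumed accessor names; each AMEVCNTR1<n>_EL0
register has its own MRS encoding, which is why a plain indexed loop does not
work, and the real driver extends the same pattern to all sixteen counters:

    #include <stdint.h>

    /* Assumed dedicated accessors, one per group-1 counter register. */
    extern uint64_t read_amevcntr10_el0(void);
    extern uint64_t read_amevcntr11_el0(void);
    extern uint64_t read_amevcntr12_el0(void);
    extern uint64_t read_amevcntr13_el0(void);

    /*
     * Save the implemented group-1 counters with a single branch and a
     * falling-through switch keyed on how many counters exist.
     */
    void amu_group1_save(uint64_t *ctx, unsigned int num_counters)
    {
        switch (num_counters) {
        case 4U:
            ctx[3] = read_amevcntr13_el0();
            /* fallthrough */
        case 3U:
            ctx[2] = read_amevcntr12_el0();
            /* fallthrough */
        case 2U:
            ctx[1] = read_amevcntr11_el0();
            /* fallthrough */
        case 1U:
            ctx[0] = read_amevcntr10_el0();
            break;
        default:
            break;
        }
    }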



# 2590e819 25-Nov-2024 Boyan Karatotev <boyan.karatotev@arm.com>

perf(mpmm): greatly simplify MPMM enablement

MPMM is a core-specific microarchitectural feature. It has been present
in every Arm core since the Cortex-A510 and has been implemented in
exactly the same way. Despite that, it is enabled more like an
architectural feature, with a top-level enable flag, which only works
because the implementation is identical everywhere.

This duality has left MPMM in an awkward place, where its enablement
should be generic, like an architectural feature, but since it is not,
it should also be core-specific in case it ever changes. One way to do
this has been through the device tree.

This has worked just fine so far, however, recent implementations expose
a weakness in that this is rather slow - the device tree has to be read,
there's a long call stack of functions with many branches, and system
registers are read. In the hot path of PSCI CPU powerdown, this has a
significant and measurable impact. It is also a rather large
amount of code that is difficult to understand.

Since MPMM is a microarchitectural feature, its correct placement is in
the reset function. The essence of the current enablement is to write
CPUPPMCR_EL3.MPMM_EN if CPUPPMCR_EL3.MPMMPINCTL == 0. Replacing the C
enablement with an assembly macro in each CPU's reset function achieves
the same effect with just a single close branch and a grand total of 6
instructions (versus the old 2 branches and 32 instructions).

Having done this, the device tree entry becomes redundant. Should a core
that doesn't support MPMM arise, this can cleanly be handled in the
reset function. As such, the whole ENABLE_MPMM_FCONF and platform hooks
mechanisms become obsolete and are removed.

Change-Id: I1d0475b21a1625bb3519f513ba109284f973ffdf
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
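
Rendered in C rather than the assembly macro the patch actually adds, and with
assumed accessor names and bit positions (the authoritative layout is in the
CPU headers), the reset-time enable is essentially:

    #include <stdint.h>

    /* Assumed bit layout of the IMPLEMENTATION DEFINED CPUPPMCR_EL3. */
    #define CPUPPMCR_EL3_MPMMPINCTL_BIT    (1ULL << 0)
    #define CPUPPMCR_EL3_MPMM_EN_BIT       (1ULL << 1)

    /* Assumed accessors for CPUPPMCR_EL3. */
    extern uint64_t read_cpuppmcr_el3(void);
    extern void write_cpuppmcr_el3(uint64_t val);

    /*
     * Set MPMM_EN only when the pin control bit indicates that software
     * owns the setting, as described in the commit message.
     */
    void mpmm_reset_enable(void)
    {
        uint64_t reg = read_cpuppmcr_el3();

        if ((reg & CPUPPMCR_EL3_MPMMPINCTL_BIT) == 0ULL)
            write_cpuppmcr_el3(reg | CPUPPMCR_EL3_MPMM_EN_BIT);
    }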



# fcb80d7d 11-Feb-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes I765a7fa0,Ic33f0b6d,I8d1a88c7,I381f96be,I698fa849, ... into integration

* changes:
fix(cpus): clear CPUPWRCTLR_EL1.CORE_PWRDN_EN_BIT on reset
chore(docs): drop the "wfi" from `pwr_domain_pwr_down_wfi`
chore(psci): drop skip_wfi variable
feat(arm): convert arm platforms to expect a wakeup
fix(cpus): avoid SME related loss of context on powerdown
feat(psci): allow cores to wake up from powerdown
refactor: panic after calling psci_power_down_wfi()
refactor(cpus): undo errata mitigations
feat(cpus): add sysreg_bit_toggle



# b9315f50 06-Feb-2025 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge "feat(cpus): add ENABLE_ERRATA_ALL flag" into integration


# 593ae354 22-Mar-2023 Boyan Karatotev <boyan.karatotev@arm.com>

feat(cpus): add ENABLE_ERRATA_ALL flag

Now that all errata flags are conveniently in a single list we can
make sweeping decisions about their values. The first use-case is to
enable all errata in TF-A. This is useful for CI runs where it is
impractical to list every single one. This should help with the long
standing issue of errata not being built or tested.

Also add missing CPUs with errata to `ENABLE_ERRATA_ALL` to enable all
errata builds in CI.

Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I2b456d304d7bf3215c7c4f4fd70b56ecbcb09979



# 45c7328c 20-Sep-2024 Boyan Karatotev <boyan.karatotev@arm.com>

fix(cpus): avoid SME related loss of context on powerdown

Travis' and Gelas' TRMs tell us to disable SME (set PSTATE.{ZA, SM} to
0) when we're attempting to power down. What they don't tell us is that
if this isn't done, the powerdown request will be rejected. On the
CPU_OFF path that's not a problem - we can force SVCR to 0 and be
certain the core will power off.

On the suspend to powerdown path, however, we cannot do this. The TRM
also tells us that the sequence could also be aborted on eg. GIC
interrupts. If this were to happen when we have overwritten SVCR to 0,
upon a return to the caller they would experience a loss of context. We
know that at least Linux may call into PSCI with SVCR != 0. One option
is to save the entire SME context which would be quite expensive just to
work around. Another option is to downgrade the request to a normal
suspend when SME was left on. This option is better as this is expected
to happen rarely enough to ignore the wasted power and we don't want to
burden the generic (correct) path with needless context management.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I698fa8490ebf51461f6aa8bba84f9827c5c46ad4
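
The downgrade decision reduces to checking SVCR before committing to a
powerdown suspend; a tiny sketch with an assumed accessor name:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed accessor for the streaming vector control register (SVCR). */
    extern uint64_t read_svcr(void);

    /*
     * If the caller left SME live (SVCR.{SM,ZA} != 0), forcing it off could
     * lose context should the powerdown be abandoned, so fall back to a
     * normal (non-powerdown) suspend instead.
     */
    bool powerdown_suspend_allowed(void)
    {
        return read_svcr() == 0ULL;
    }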



# 2b5e00d4 19-Dec-2024 Boyan Karatotev <boyan.karatotev@arm.com>

feat(psci): allow cores to wake up from powerdown

The simplistic view of a core's powerdown sequence is that power is
atomically cut upon calling `wfi`. However, it turns out that it has
lots to do - it has to talk to the interconnect to exit coherency, clean
caches, check for RAS errors, etc. These take significant amounts of
time and are certainly not atomic. As such there is a significant window
of opportunity for external events to happen. Many of these steps are
not destructive to context, so theoretically, the core can just "give
up" half way (or roll certain actions back) and carry on running. The
point in this sequence after which roll back is not possible is called
the point of no return.

One of these actions is the checking for RAS errors. It is possible for
one to happen during this lengthy sequence, or at least remain
undiscovered until that point. If the core were to continue powerdown
when that happens, there would be no (easy) way to inform anyone about
it. Rejecting the powerdown and letting software handle the error is the
best way to implement this.

Arm cores since at least the a510 have included this exact feature. So
far it hasn't been deemed necessary to account for it in firmware due to
the low likelihood of this happening. However, events like GIC wakeup
requests are much more probable. Older cores will powerdown and
immediately power back up when this happens. Travis and Gelas include a
feature similar to the RAS case above, called powerdown abandon. The
idea is that this will improve the latency to service the interrupt by
saving on work which the core and software need to do.

So far firmware has relied on the `wfi` being the point of no return and
if it doesn't explicitly detect a pending interrupt quite early on, it
will embark on a sequence that it expects to end with shutdown. To
accommodate it not being a point of no return, we must undo all of
the system management we did, just like in the warm boot entrypoint.

To achieve that, the pwr_domain_pwr_down_wfi hook must not be terminal.
Most recent platforms do some platform management and finish on the
standard `wfi`, followed by a panic or an endless loop as this is
expected to not return. To make this generic, any platform that wishes
to support wakeups must instead let common code call
`psci_power_down_wfi()` right after. Besides wakeups, this lets common
code handle powerdown errata better as well.

Then, the CPU_OFF case is simple - PSCI does not allow it to return. So
the best that can be done is to attempt the `wfi` a few times (the
choice of 32 is arbitrary) in the hope that the wakeup is transient. If
it isn't, the only choice is to panic, as the system is likely to be in
a bad state, eg. interrupts weren't routed away. The same applies for
SYSTEM_OFF, SYSTEM_RESET, and SYSTEM_RESET2. There the panic won't
matter as the system is going offline one way or another. The RAS case
will be considered in a separate patch.

Now, the CPU_SUSPEND case is more involved. First, to powerdown it must
wipe its context as it is not written on warm boot. But it cannot be
overwritten in case of a wakeup. To avoid the catch 22, save a copy that
will only be used if powerdown fails. That is about 500 bytes on the
stack so it hopefully doesn't tip anyone over any limits. In future that
can be avoided by having a core manage its own context.

Second, when the core wakes up, it must undo anything it did to prepare
for poweroff, which for the cores we care about, is writing
CPUPWRCTLR_EL1.CORE_PWRDN_EN. The least intrusive for the cpu library
way of doing this is to simply call the power off hook again and have
the hook toggle the bit. If more complex sequences are needed in the
future, their direction can be decided based on the value of this bit.

Third, do the actual "resume". Most of the logic is already there for
the retention suspend, so that only needs a small touch up to apply to
the powerdown case as well. The missing bit is the powerdown specific
state management. Luckily, the warmboot entrypoint does exactly that
already too, so steal that and we're done.

All of this is hidden behind a FEAT_PABANDON flag since it has a large
memory and runtime cost that we don't want to burden non-pabandon cores
with.

Finally, do some function renaming to better reflect their purpose and
make names a little bit more consistent.

Change-Id: I2405b59300c2e24ce02e266f91b7c51474c1145f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
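
The CPU_OFF handling described above is essentially a bounded retry loop
around the powerdown `wfi`; a sketch with assumed wrapper names:

    /* Assumed wrappers around the wfi instruction and the panic handler. */
    extern void wfi(void);
    extern void panic(void) __attribute__((noreturn));

    #define PWRDOWN_WFI_ATTEMPTS    32U    /* arbitrary, per the commit message */

    /*
     * CPU_OFF cannot return: retry the powerdown wfi a bounded number of
     * times in case the wakeup was transient, then give up and panic.
     */
    void __attribute__((noreturn)) psci_cpu_off_wfi_retry(void)
    {
        for (unsigned int i = 0U; i < PWRDOWN_WFI_ATTEMPTS; i++)
            wfi();

        panic();
    }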



# efe18729 15-Jan-2025 Manish Pandey <manish.pandey2@arm.com>

Merge "feat(mops): enable FEAT_MOPS in EL3 when INIT_UNUSED_NS_EL2=1" into integration


# 6b8df7b9 09-Jan-2025 Arvind Ram Prakash <arvind.ramprakash@arm.com>

feat(mops): enable FEAT_MOPS in EL3 when INIT_UNUSED_NS_EL2=1

FEAT_MOPS, mandatory from Arm v8.8, is typically managed in EL2.
However, in configurations where NS_EL2 is not enabled,
EL3 must set the HCRX_EL2.MSCEn bit to 1 to enable the feature.

This patch ensures FEAT_MOPS is enabled by setting HCRX_EL2.MSCEn to 1.

Change-Id: Ic4960e0cc14a44279156b79ded50de475b3b21c5
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
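
The enablement itself is a single bit flip in the EL3-managed value of
HCRX_EL2; a sketch with assumed accessor names and an illustrative bit
position (the authoritative definition lives in the architecture headers):

    #include <stdint.h>

    /* Illustrative bit position; see the Arm ARM / arch headers for the real one. */
    #define HCRX_EL2_MSCEn_BIT    (1ULL << 11)

    /* Assumed accessors for HCRX_EL2. */
    extern uint64_t read_hcrx_el2(void);
    extern void write_hcrx_el2(uint64_t val);

    /*
     * When NS-EL2 is unused (INIT_UNUSED_NS_EL2=1), EL3 owns HCRX_EL2 and
     * must set MSCEn so the FEAT_MOPS memory copy/set instructions work.
     */
    void enable_feat_mops_for_unused_el2(void)
    {
        write_hcrx_el2(read_hcrx_el2() | HCRX_EL2_MSCEn_BIT);
    }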



# 6157ef37 09-Jan-2025 Manish Pandey <manish.pandey2@arm.com>

Merge changes from topic "bk/smccc_feature" into integration

* changes:
feat(smccc): implement SMCCC_ARCH_FEATURE_AVAILABILITY
refactor(cm): clean up per-world context
refactor(cm): change own

Merge changes from topic "bk/smccc_feature" into integration

* changes:
feat(smccc): implement SMCCC_ARCH_FEATURE_AVAILABILITY
refactor(cm): clean up per-world context
refactor(cm): change owning security state when a feature is disabled



# 8db17052 25-Oct-2024 Boyan Karatotev <boyan.karatotev@arm.com>

feat(smccc): implement SMCCC_ARCH_FEATURE_AVAILABILITY

SMCCC_ARCH_FEATURE_AVAILABILITY [1] is a call to query firmware about
the features it is aware of and enables. This is useful when a feature
is not enabled at EL3, eg due to an older FW image, but it is present in
hardware. In those cases, the EL1 ID registers do not reflect the usable
feature set and this call should provide the necessary information to
remedy that.

The call itself is very lightweight - effectively a sanitised read of
the relevant system register. Bits that are not relevant to feature
enablement are masked out and active low bits are converted to active
high.

The implementation is also very simple. All relevant, irrelevant, and
inverted bits are combined into bitmasks at build time. Then at runtime the
masks are unconditionally applied to produce the right result. This
assumes that context managers will make sure that disabled features
do not have their bits set and the registers are context switched if
any fields in them make enablement ambiguous.

Features that are not yet supported in TF-A have not been added. On
debug builds, calling this function will fail an assert if any bits that
are not expected are set. In combination with CI this should allow for
this feature to to stay up to date as new architectural features are
added.

If a call for MPAM3_EL3 is made when MPAM is not enabled, the call
will return INVALID_PARAM, while if it is FEAT_STATE_CHECK, it will
return zero. This should be fairly consistent with feature detection.

The bitmask is meant to be interpreted as the logical AND of the
relevant ID registers. It would be permissible for this to return 1
while the ID returns 0. Despite this, this implementation takes steps
not to. In the general case, the two should match exactly.

Finally, it is not entirely clear whether this call should reply to SMC32
requests. However, it will not, as the return values are all 64 bits.

[1]: https://developer.arm.com/documentation/den0028/galp1/?lang=en

Co-developed-by: Charlie Bareham <charlie.bareham@arm.com>
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I1a74e7d0b3459b1396961b8fa27f84e3f0ad6a6f
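
The "sanitised read" amounts to one XOR and one AND with build-time masks; a
sketch with illustrative mask values and an assumed register accessor:

    #include <stdint.h>

    /*
     * Illustrative build-time masks for one control register (e.g. SCR_EL3);
     * the concrete values are assembled from the individual feature bits and
     * are not reproduced here.
     */
    #define REG_RELEVANT_BITS      0x0000ffffULL  /* bits that encode feature enables */
    #define REG_ACTIVE_LOW_BITS    0x00000003ULL  /* enables that are active-low */

    /* Assumed accessor for the control register being reported. */
    extern uint64_t read_control_reg(void);

    /*
     * Sanitised read: invert the active-low enables so every reported bit is
     * active-high, then drop everything unrelated to feature enablement.
     */
    uint64_t smccc_feature_availability_bits(void)
    {
        return (read_control_reg() ^ REG_ACTIVE_LOW_BITS) & REG_RELEVANT_BITS;
    }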



# 5d8c7218 31-Dec-2024 Manish V Badarkhe <manish.badarkhe@arm.com>

Merge "fix(cert-create): add default keysize to Brainpool ECDSA" into integration


# 0da16fe3 18-Sep-2024 Maxime Méré <maxime.mere@foss.st.com>

fix(cert-create): add default keysize to Brainpool ECDSA

By default, the ECDSA Brainpool regular and ECDSA Brainpool twisted
algorithms support 256-bit keys. Not defining this leads to
an error indicating that '256' is not a valid key size for ECDSA
Brainpool. The KEY_SIZES matrix must have a value in its table to avoid
problems when KEY_SIZE is defined.

Signed-off-by: Maxime Méré <maxime.mere@foss.st.com>
Change-Id: I34886659315f59a9582dcee1d92d0e24d4a4138e
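
The shape of the fix can be pictured as ensuring every algorithm row in the
key-size table carries a usable default; the names and values below are
illustrative only, not the tool's actual tables:

    /* Assumed algorithm identifiers for illustration. */
    enum key_alg {
        KEY_ALG_RSA,
        KEY_ALG_ECDSA_NIST,
        KEY_ALG_ECDSA_BRAINPOOL_R,
        KEY_ALG_ECDSA_BRAINPOOL_T,
        KEY_ALG_MAX
    };

    /* Each row: { default size, alternate size (0 = none) }. */
    const unsigned int key_sizes[KEY_ALG_MAX][2] = {
        [KEY_ALG_RSA]              = { 2048U, 4096U },
        [KEY_ALG_ECDSA_NIST]       = { 256U, 384U },
        /* The fix: Brainpool entries need a default (256-bit) size too. */
        [KEY_ALG_ECDSA_BRAINPOOL_R] = { 256U, 0U },
        [KEY_ALG_ECDSA_BRAINPOOL_T] = { 256U, 0U },
    };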


