| 7bb01fb2 | 08-Mar-2017 |
Antonio Nino Diaz <antonio.ninodiaz@arm.com> |
Add version 2 of xlat tables library
The folder lib/xlat_tables_v2 has been created to store a new version of the translation tables library for further modifications in patches to follow. At the moment it only contains a basic implementation that supports static regions.
This library allows different translation tables to be modified by using different 'contexts'. For now, the implementation defaults to the translation tables used by the current image, but it is possible to modify tables other than the ones currently in use.
Added a new API to print debug information for the current state of the translation tables, rather than printing the information while the tables are being created. This allows subsequent debug printing of the xlat tables after they have been changed, which will be useful when dynamic regions are implemented in a patch to follow.
The common definitions stored in the `xlat_tables.h` header have been moved to a new file common to both versions, `xlat_tables_defs.h`.
All headers related to the translation tables library have been moved to the subfolder `xlat_tables`.
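For illustration, a sketch of how such a context-based interface could be used; the type and function names below (xlat_ctx_t, mmap_add_region_ctx, init_xlat_tables_ctx, xlat_tables_print) are assumptions made for this sketch, not the library's confirmed API:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical context describing one set of translation tables. */
    typedef struct xlat_ctx xlat_ctx_t;

    /* Context used by the current image (illustrative name). */
    extern xlat_ctx_t tf_xlat_ctx;

    /* Add a static region to the tables described by a given context. */
    void mmap_add_region_ctx(xlat_ctx_t *ctx, unsigned long long base_pa,
                             uintptr_t base_va, size_t size, unsigned int attr);

    /* Populate the translation tables of a context and, separately,
     * dump their current state for debugging. */
    void init_xlat_tables_ctx(xlat_ctx_t *ctx);
    void xlat_tables_print(xlat_ctx_t *ctx);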
Change-Id: Ia55962c33e0b781831d43a548e505206dffc5ea9 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
|
| 61531a27 | 14-Feb-2017 |
Soby Mathew <soby.mathew@arm.com> |
AArch32: Fix normal memory bakery compilation
This patch fixes a compilation issue with bakery locks when the PSCI library is compiled with the USE_COHERENT_MEM=0 build option.
Change-Id: Ic7f6cf9f2bb37f8a946eafbee9cbc3bf0dc7e900 Signed-off-by: Soby Mathew <soby.mathew@arm.com>
|
| bea7caff | 02-Mar-2017 |
davidcunado-arm <david.cunado@arm.com> |
Merge pull request #853 from vwadekar/tegra-changes-from-downstream-v3
Tegra changes from downstream v3 |
| b0408e87 | 05-Jan-2017 |
Jeenu Viswambharan <jeenu.viswambharan@arm.com> |
PSCI: Optimize call paths if all participants are cache-coherent
The current PSCI implementation can apply certain optimizations upon the assumption that all PSCI participants are cache-coherent.
- Skip performing cache maintenance during power-up.
- Skip performing cache maintenance during power-down:
At present, on the power-down path, the CPU driver disables the caches and MMU, and performs cache maintenance in preparation for powering down the CPU. This means that PSCI must perform additional cache maintenance on the extant stack for correct functioning.
If all participating CPUs are cache-coherent, the CPU driver neither disables the MMU nor performs cache maintenance. The CPU being powered down therefore remains cache-coherent throughout all PSCI call paths. This in turn means that PSCI cache maintenance operations are not required during power down.
- Choose spin locks instead of bakery locks:
The current PSCI implementation must synchronize both cache-coherent and non-cache-coherent participants. Mutual exclusion primitives are not guaranteed to function on non-coherent memory. For this reason, the current PSCI implementation had to resort to bakery locks.
If all participants are cache-coherent, the implementation can enable the MMU and data caches early, and replace bakery locks with spin locks. Spin locks make use of architectural mutual exclusion primitives, and are lighter and faster.
The optimizations are applied when the HW_ASSISTED_COHERENCY build option is enabled, as all PSCI participants are expected to be cache-coherent in such systems.
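As a rough sketch of how the lock choice could be gated on that build option (the psci_lock_t alias and the wrapper names here are illustrative, not the implementation's):

    #if HW_ASSISTED_COHERENCY
    /* All participants coherent: use architectural spin locks. */
    typedef spinlock_t psci_lock_t;
    #define psci_lock_get(node)      spin_lock(&(node)->lock)
    #define psci_lock_release(node)  spin_unlock(&(node)->lock)
    #else
    /* Mixed coherency: bakery locks work without coherent caches. */
    typedef bakery_lock_t psci_lock_t;
    #define psci_lock_get(node)      bakery_lock_get(&(node)->lock)
    #define psci_lock_release(node)  bakery_lock_release(&(node)->lock)
    #endif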
Change-Id: Iac51c3ed318ea7e2120f6b6a46fd2db2eae46ede Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
|
| a10d3632 | 06-Jan-2017 |
Jeenu Viswambharan <jeenu.viswambharan@arm.com> |
PSCI: Introduce cache and barrier wrappers
The PSCI implementation performs cache maintenance operations on its data structures to ensure their visibility to both cache-coherent and non-cache-coherent participants. These cache maintenance operations can be skipped if all PSCI participants are cache-coherent. When HW_ASSISTED_COHERENCY build option is enabled, we assume PSCI participants are cache-coherent.
For usage abstraction, this patch introduces wrappers for PSCI cache maintenance and barrier operations used for state coordination: they are effectively NOPs when HW_ASSISTED_COHERENCY is enabled, but are applied otherwise.
Also refactor local state usage and the associated cache operations to make them clearer.
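A minimal sketch of the wrapper idea, assuming the HW_ASSISTED_COHERENCY flag; the exact macro names are illustrative:

    #if HW_ASSISTED_COHERENCY
    /* Coherent participants: maintenance and barriers collapse to NOPs. */
    #define psci_flush_dcache_range(addr, size)
    #define psci_inv_dcache_range(addr, size)
    #define psci_dsbish()
    #else
    #define psci_flush_dcache_range(addr, size)  flush_dcache_range((addr), (size))
    #define psci_inv_dcache_range(addr, size)    inv_dcache_range((addr), (size))
    #define psci_dsbish()                        dsbish()
    #endif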
Change-Id: I77f17a90cba41085b7188c1345fe5731c99fad87 Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
|
| 3eac92d2 | 06-May-2016 |
Varun Wadekar <vwadekar@nvidia.com> |
cpus: denver: remove barrier from denver_enable_dco()
This patch removes unnecessary `isb` from the enable DCO sequence as there is no need to synchronize this operation.
Change-Id: I0191e684bbc7fdba635c3afbc4e4ecd793b6f06f Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
|
| c1a29754 | 28-Feb-2017 |
danh-arm <dan.handley@arm.com> |
Merge pull request #848 from douglas-raillard-arm/dr/improve_errata_doc
Clarify errata ERRATA_A53_836870 documentation |
| 9f1c5dd1 | 22-Feb-2016 |
Varun Wadekar <vwadekar@nvidia.com> |
cpus: denver: disable DCO operations from platform code
This patch moves the code to disable DCO operations out of the common CPU files. This allows the platform code to call this API as and when required. There are certain CPU power-down states which require the DCO to be kept ON, and platforms can now decide selectively.
Change-Id: Icb946fe2545a7d8c5903c420d1ee169c4921a2d1 Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
|
| 3fbe46d7 | 15-Feb-2017 |
Douglas Raillard <douglas.raillard@arm.com> |
Clarify errata ERRATA_A53_836870 documentation
The errata is enabled by default on r0p4, which is confusing given that we state we do not enable errata by default.
This patch clarifies this sentence by saying it is enabled in hardware by default.
Change-Id: I70a062d93e1da2416d5f6d5776a77a659da737aa Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
|
| e956e228 | 03-Sep-2015 |
Varun Wadekar <vwadekar@nvidia.com> |
cpus: Add support for all Denver variants
This patch adds support for all variants of the Denver CPUs. The variants export their cpu_ops to allow all Denver platforms to run the Trusted Firmware stack.
Change-Id: I1488813ddfd506ffe363d8a32cda1b575e437035 Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
|
| 8da12f61 | 20-Feb-2017 |
danh-arm <dan.handley@arm.com> |
Merge pull request #843 from jeenu-arm/cas-lock
Introduce locking primitives using CAS instruction |
| 108e4df7 | 16-Feb-2017 |
davidcunado-arm <david.cunado@arm.com> |
Merge pull request #834 from douglas-raillard-arm/dr/use_dc_zva_zeroing
Use DC ZVA instruction to zero memory |
| c877b414 | 16-Jan-2017 |
Jeenu Viswambharan <jeenu.viswambharan@arm.com> |
Introduce locking primitives using CAS instruction
The ARMv8.1 architecture extension has introduced support for far atomics, which include compare-and-swap. The Compare and Swap instruction is only available in AArch64.
Introduce build options to choose the architecture version that ARM Trusted Firmware targets:
- ARM_ARCH_MAJOR: selects the major version of target ARM Architecture. Default value is 8.
- ARM_ARCH_MINOR: selects the minor version of target ARM Architecture. Default value is 0.
When:
(ARM_ARCH_MAJOR > 8) || ((ARM_ARCH_MAJOR == 8) && (ARM_ARCH_MINOR >= 1)),
for AArch64, the Compare and Swap instruction is used to implement spin locks. Otherwise, the implementation falls back to load-/store-exclusive instructions.
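A C-level sketch of the two lock flavours (the firmware implements them in assembly; the names and the use of compiler builtins here are illustrative):

    typedef struct spinlock {
        volatile unsigned int lock;
    } spinlock_t;

    #if (ARM_ARCH_MAJOR > 8) || ((ARM_ARCH_MAJOR == 8) && (ARM_ARCH_MINOR >= 1))
    static inline void spin_lock(spinlock_t *l)
    {
        unsigned int expected;

        /* With -march=armv8.1-a this compare-and-swap loop can map to CAS. */
        do {
            expected = 0U;
        } while (!__atomic_compare_exchange_n(&l->lock, &expected, 1U, 0,
                                              __ATOMIC_ACQUIRE, __ATOMIC_RELAXED));
    }
    #else
    static inline void spin_lock(spinlock_t *l)
    {
        /* Fallback: load-/store-exclusive style spin via atomic exchange. */
        while (__atomic_exchange_n(&l->lock, 1U, __ATOMIC_ACQUIRE) != 0U)
            ;
    }
    #endif

    static inline void spin_unlock(spinlock_t *l)
    {
        __atomic_store_n(&l->lock, 0U, __ATOMIC_RELEASE);
    }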
Update user guide, and introduce a section in Firmware Design guide to summarize support for features introduced in ARMv8 Architecture Extensions.
Change-Id: I73096a0039502f7aef9ec6ab3ae36680da033f16 Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
|
| e5bbd16a | 31-Jan-2017 |
dp-arm <dimitris.papastamos@arm.com> |
PSCI: Do stat accounting for retention/standby states
Perform stat accounting for retention/standby states, including when they are requested at multiple power levels.
Change-Id: I2c495ea7cdff8619bde323fb641cd84408eb5762 Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
|
| 04c1db1e | 31-Jan-2017 |
dp-arm <dimitris.papastamos@arm.com> |
PSCI: Decouple PSCI stat residency calculation from PMF
This patch introduces the following three platform interfaces:
* void plat_psci_stat_accounting_start(const psci_power_state_t *state_info)
This is an optional hook that platforms can implement in order to perform accounting before entering a low power state. This typically involves capturing a timestamp.
* void plat_psci_stat_accounting_stop(const psci_power_state_t *state_info)
This is an optional hook that platforms can implement in order to perform accounting after exiting from a low power state. This typically involves capturing a timestamp.
* u_register_t plat_psci_stat_get_residency(unsigned int lvl, const psci_power_state_t *state_info, unsigned int last_cpu_index)
This is an optional hook that platforms can implement in order to calculate the PSCI stat residency.
If any of these interfaces are overridden by the platform, it is recommended that all of them are.
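As an illustration, a sketch of a platform providing its own backend with generic-timer timestamps; the plat_psci_* prototypes are the ones listed above, while the bookkeeping is deliberately simplified and the helper names are assumptions:

    static u_register_t stat_entry_ts;

    void plat_psci_stat_accounting_start(const psci_power_state_t *state_info)
    {
        /* Timestamp taken just before entering the low power state. */
        stat_entry_ts = read_cntpct_el0();
    }

    void plat_psci_stat_accounting_stop(const psci_power_state_t *state_info)
    {
        /* Nothing to do for this simple single-timestamp scheme. */
    }

    u_register_t plat_psci_stat_get_residency(unsigned int lvl,
                                              const psci_power_state_t *state_info,
                                              unsigned int last_cpu_index)
    {
        u_register_t ticks = read_cntpct_el0() - stat_entry_ts;

        /* Convert counter ticks to microseconds. */
        return ticks / (read_cntfrq_el0() / 1000000U);
    }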
By default `ENABLE_PSCI_STAT` is disabled. If `ENABLE_PSCI_STAT` is set but `ENABLE_PMF` is not set then an alternative PSCI stat collection backend must be provided. If both are set, then default weak definitions of these functions are provided, using PMF to calculate the residency.
NOTE: Previously, platforms did not have to explicitly set `ENABLE_PMF` since this was automatically done by the top-level Makefile.
Change-Id: I17b47804dea68c77bc284df15ee1ccd66bc4b79b Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
|
| 32f0d3c6 | 26-Jan-2017 |
Douglas Raillard <douglas.raillard@arm.com> |
Replace some memset call by zeromem
Replace all uses of memset by zeromem when zeroing moderately-sized structures, by applying the following transformation: memset(x, 0, sizeof(x)) => zeromem(x, sizeof(x))
As the Trusted Firmware is compiled with -ffreestanding, the compiler is forbidden from using __builtin_memset and forced to generate calls to the slow memset implementation. Zeromem is a near drop-in replacement for this use case, with a more efficient implementation on both AArch32 and AArch64.
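For example (the variable here is a placeholder):

    entry_point_info_t ep_info;

    /* Before: routed to the slow freestanding memset. */
    memset(&ep_info, 0, sizeof(ep_info));

    /* After: TF's optimised zeroing helper. */
    zeromem(&ep_info, sizeof(ep_info));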
Change-Id: Ia7f3a90e888b96d056881be09f0b4d65b41aa79e Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
|
| 308d359b | 02-Dec-2016 |
Douglas Raillard <douglas.raillard@arm.com> |
Introduce unified API to zero memory
Introduce zeromem_dczva function on AArch64 that can handle unaligned addresses and make use of DC ZVA instruction to zero a whole block at a time. This zeroing takes place directly in the cache to speed it up without doing external memory access.
Remove the zeromem16 function on AArch64 and replace it with an alias to zeromem. This zeromem16 function is now deprecated.
Remove the 16-byte alignment constraint on __BSS_START__ in firmware-design.md as it is no longer mandatory (it used to comply with zeromem16 requirements).
Change the 16-byte alignment constraints in SP min's linker script to an 8-byte alignment constraint, as the AArch32 zeromem implementation is now more efficient on 8-byte aligned addresses.
Introduce zero_normalmem and zeromem helpers in a platform-agnostic header, implemented as follows:
* AArch32:
  * zero_normalmem: zero using usual data access
  * zeromem: alias for zero_normalmem
* AArch64:
  * zero_normalmem: zero normal memory using the DC ZVA instruction (needs the MMU enabled)
  * zeromem: zero using usual data access
Usage guidelines: in most cases, zero_normalmem should be preferred.
There are 2 scenarios where zeromem (or memset) must be used instead:
* Code that must run with MMU disabled (which means all memory is considered device memory for data accesses).
* Code that fills device memory with null bytes.
Optionally, the following rule can be applied if performance is important:
* Code zeroing small areas (few bytes) that are not secrets should use memset to take advantage of compiler optimizations.
Note: Code zeroing security-related critical information should use zero_normalmem/zeromem instead of memset to avoid removal by compilers' optimizations in some cases or misbehaving versions of GCC.
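A short usage sketch following these guidelines (the buffers and sizes are placeholders):

    #include <stddef.h>
    #include <string.h>

    static unsigned char small_buf[16];

    void zeroing_examples(void *bss_start, size_t bss_size,
                          void *mailbox_base, size_t mailbox_size)
    {
        /* Normal memory with the MMU enabled: preferred helper. */
        zero_normalmem(bss_start, bss_size);

        /* MMU disabled, or the target is device memory: use zeromem. */
        zeromem(mailbox_base, mailbox_size);

        /* Small, non-secret scratch area: plain memset is fine and lets
         * the compiler optimise. */
        memset(small_buf, 0, sizeof(small_buf));
    }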
Fixes ARM-software/tf-issues#408
Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82 Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
|
| 10bcd761 | 03-Jan-2017 |
Jeenu Viswambharan <jeenu.viswambharan@arm.com> |
Report errata workaround status to console
The errata reporting policy is as follows:
- If an errata workaround is enabled:
- If it applies (i.e. the CPU is affected by the errata), an INFO message is printed, confirming that the errata workaround has been applied.
- If it does not apply, a VERBOSE message is printed, confirming that the errata workaround has been skipped.
- If an errata workaround is not enabled, but would have applied had it been, a WARN message is printed, alerting that errata workaround is missing.
The CPU errata messages are printed by both BL1 (primary CPU only) and runtime firmware on debug builds, once for each CPU/errata combination.
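A C sketch of that decision tree (the real reporting helpers live in assembly; this function and its parameters are illustrative):

    void errata_report(const char *cpu, const char *id, int enabled, int applies)
    {
        if (enabled) {
            if (applies)
                INFO("%s: errata workaround for %s was applied\n", cpu, id);
            else
                VERBOSE("%s: errata workaround for %s was not applied\n", cpu, id);
        } else if (applies) {
            WARN("%s: errata workaround for %s was missing!\n", cpu, id);
        }
    }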
Relevant output from Juno r1 console when ARM Trusted Firmware is built with PLAT=juno LOG_LEVEL=50 DEBUG=1:
    VERBOSE: BL1: cortex_a57: errata workaround for 806969 was not applied
    VERBOSE: BL1: cortex_a57: errata workaround for 813420 was not applied
    INFO: BL1: cortex_a57: errata workaround for disable_ldnp_overread was applied
    WARNING: BL1: cortex_a57: errata workaround for 826974 was missing!
    WARNING: BL1: cortex_a57: errata workaround for 826977 was missing!
    WARNING: BL1: cortex_a57: errata workaround for 828024 was missing!
    WARNING: BL1: cortex_a57: errata workaround for 829520 was missing!
    WARNING: BL1: cortex_a57: errata workaround for 833471 was missing!
    ...
    VERBOSE: BL31: cortex_a57: errata workaround for 806969 was not applied
    VERBOSE: BL31: cortex_a57: errata workaround for 813420 was not applied
    INFO: BL31: cortex_a57: errata workaround for disable_ldnp_overread was applied
    WARNING: BL31: cortex_a57: errata workaround for 826974 was missing!
    WARNING: BL31: cortex_a57: errata workaround for 826977 was missing!
    WARNING: BL31: cortex_a57: errata workaround for 828024 was missing!
    WARNING: BL31: cortex_a57: errata workaround for 829520 was missing!
    WARNING: BL31: cortex_a57: errata workaround for 833471 was missing!
    ...
    VERBOSE: BL31: cortex_a53: errata workaround for 826319 was not applied
    INFO: BL31: cortex_a53: errata workaround for disable_non_temporal_hint was applied
Also update documentation.
Change-Id: Iccf059d3348adb876ca121cdf5207bdbbacf2aba Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
|
| aa050a7b | 16-Jan-2017 |
Antonio Nino Diaz <antonio.ninodiaz@arm.com> |
stdlib: Import timingsafe_bcmp() from FreeBSD
Some side-channel attacks involve an attacker inferring something from the time taken for a memory compare operation to complete, for example when comparing hashes during image authentication. To mitigate this, timingsafe_bcmp() must be used for such operations instead of the standard memcmp().
This function executes in constant time and so doesn't leak any timing information to the caller.
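The usual constant-time comparison pattern looks like this sketch: every byte is visited regardless of where the first mismatch occurs, so the run time does not depend on the data being compared.

    #include <stddef.h>

    int timingsafe_bcmp(const void *b1, const void *b2, size_t n)
    {
        const unsigned char *p1 = b1;
        const unsigned char *p2 = b2;
        unsigned char diff = 0;

        while (n-- > 0U)
            diff |= *p1++ ^ *p2++;

        /* 0 when the buffers match, non-zero otherwise (bcmp semantics). */
        return (diff != 0);
    }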
Change-Id: I470a723dc3626a0ee6d5e3f7fd48d0a57b8aa5fd Signed-off-by: dp-arm <dimitris.papastamos@arm.com> Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
|
| 34438669 | 24-Jan-2017 |
danh-arm <dan.handley@arm.com> |
Merge pull request #818 from sandrine-bailleux-arm/sb/strnlen
Add strnlen() to local C library |
| d67879d3 | 24-Jan-2017 |
Sandrine Bailleux <sandrine.bailleux@arm.com> |
Add strnlen() to local C library
This code has been imported and slightly adapted from FreeBSD: https://github.com/freebsd/freebsd/blob/6253393ad8df55730481bf2aafd76bdd6182e2f5/lib/libc/string/strnlen.c
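The behaviour is the standard one: like strlen(), but never scanning past maxlen bytes. A sketch:

    #include <stddef.h>

    size_t strnlen(const char *s, size_t maxlen)
    {
        size_t len;

        /* Stop at the terminator or after maxlen bytes, whichever comes first. */
        for (len = 0; len < maxlen && s[len] != '\0'; len++)
            ;

        return len;
    }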
Change-Id: Ie5ef5f92e6e904adb88f8628077fdf1d27470eb3 Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
|
| 4abd2225 | 23-Jan-2017 |
danh-arm <dan.handley@arm.com> |
Merge pull request #800 from masahir0y/ifdef
Correct preprocessor conditionals |
| 3d8256b2 | 25-Dec-2016 |
Masahiro Yamada <yamada.masahiro@socionext.com> |
Use #ifdef for IMAGE_BL* instead of #if
One nasty part of ATF is that some boolean macros are always defined as 1 or 0, while the rest are only defined under certain conditions.
For the former group, "#if FOO" or "#if !FOO" must be used, because "#ifdef FOO" is always true. (Options passed via $(call add_define,) fall into this group.)
For the latter, "#ifdef FOO" or "#ifndef FOO" should be used because checking the value of an undefined macro is strange.
Here, IMAGE_BL* is handled by make_helpers/build_macro.mk as follows:
$(eval IMAGE := IMAGE_BL$(call uppercase,$(3)))
    $(OBJ): $(2)
    	@echo "  CC      $$<"
    	$$(Q)$$(CC) $$(TF_CFLAGS) $$(CFLAGS) -D$(IMAGE) -c $$< -o $$@
This means IMAGE_BL* is defined when building the corresponding image, but *undefined* for the other images.
So, IMAGE_BL* belongs to the latter group where we should use #ifdef or #ifndef.
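In other words (flag names other than IMAGE_BL31 are just examples of the always-defined group):

    /* Always defined (via $(call add_define,)): test the value. */
    #if USE_COHERENT_MEM
    /* ... built when the option is 1 ... */
    #endif

    /* Defined only when building that image: test for definition. */
    #ifdef IMAGE_BL31
    /* ... code compiled into BL31 only ... */
    #endif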
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
| 7b94e4b9 | 23-Jan-2017 |
danh-arm <dan.handley@arm.com> |
Merge pull request #813 from antonio-nino-diaz-arm/an/libfdt
Update libfdt to version 1.4.2 |
| 55c70cb7 | 17-Jan-2017 |
David Cunado <david.cunado@arm.com> |
Correct system include order
NOTE - this patch does not address all occurrences of system includes not being in alphabetical order, just this one case.
Change-Id: I3cd23702d69b1f60a4a9dd7fd4ae27418f15b7a3
|