feat(tsp): cascade boot arguments to platforms

Enable platforms to receive boot arguments passed to the TSP, allowing
them to make use of these parameters.

This is in preparation for supporting Firmware Handoff within the TSP.

BREAKING CHANGE: The prototype for `tsp_early_platform_setup` has been
redefined. Platforms must update their implementations to match the new
function signature.

Change-Id: I4b5c6493bb62846aaa0d9e330d8aa06e6a0525a8
Signed-off-by: Harrison Mutai <harrison.mutai@arm.com>
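A minimal sketch of the platform side of this change; the four-argument
signature below is an assumption inferred from the breaking-change note,
not copied from the patch:

    /* Hypothetical platform hook receiving the cascaded TSP boot
     * arguments; the exact parameter list is an assumption. */
    #include <stdint.h>

    typedef uintptr_t u_register_t;

    void tsp_early_platform_setup(u_register_t arg0, u_register_t arg1,
                                  u_register_t arg2, u_register_t arg3)
    {
            /* arg0 could e.g. carry a transfer-list pointer for Firmware
             * Handoff; a platform would stash these for later use. */
            (void)arg0; (void)arg1; (void)arg2; (void)arg3;
    }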
feat(cpufeat): enable FEAT_BTI to FEAT_STATE_CHECKED

Introduce the is_feat_bti_{supported, present}() helpers and replace
checks for ENABLE_BTI with them. Also factor the setting of
SCTLR_EL3.BT out of the PAuth enablement and place it in the respective
entrypoints where we initialise SCTLR_EL3. This makes PAuth
self-contained and SCTLR_EL3 initialisation centralised.

Change-Id: I0c0657ff1e78a9652cd2cf1603478283dc01f17b
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
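A hedged sketch of the FEAT_STATE_CHECKED helper pattern this describes;
the position of the ID_AA64PFR1_EL1.BT field is architectural, but the
accessor and FEAT_STATE macro names here are assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    #define FEAT_STATE_DISABLED 0  /* build option = 0 */
    #define FEAT_STATE_ALWAYS   1  /* build option = 1 */
    #define FEAT_STATE_CHECK    2  /* build option = 2: probe at runtime */

    #ifndef ENABLE_BTI
    #define ENABLE_BTI FEAT_STATE_CHECK /* normally set by the build system */
    #endif

    extern uint64_t read_id_aa64pfr1_el1(void); /* assumed accessor */

    static inline bool is_feat_bti_present(void)
    {
            /* ID_AA64PFR1_EL1.BT lives in bits [3:0] */
            return (read_id_aa64pfr1_el1() & 0xfULL) != 0ULL;
    }

    static inline bool is_feat_bti_supported(void)
    {
            if (ENABLE_BTI == FEAT_STATE_DISABLED)
                    return false;
            if (ENABLE_BTI == FEAT_STATE_ALWAYS)
                    return true;
            return is_feat_bti_present();
    }

Since ENABLE_BTI is a compile-time constant, the first two branches fold
away and only the FEAT_STATE_CHECK configuration pays for the runtime probe.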
feat(tsp): add FF-A support to the TSP

This patch adds the FF-A programming model in the test secure payload
to ensure that it can be used to test the following spec features:

1. SP initialisation on the primary and secondary cpus.
2. An event loop to receive direct requests and respond with direct
   responses (see the sketch after this message).
3. Ability to receive messages that indicate power on and off of a cpu.
4. Ability to handle a secure interrupt.

Signed-off-by: Achin Gupta <achin.gupta@arm.com>
Signed-off-by: Marc Bonnici <marc.bonnici@arm.com>
Signed-off-by: Shruti <shruti.gupta@arm.com>
Change-Id: I81cf744904d5cdc0b27862b5e4bc6f2cfe58a13a
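The event loop in item 2 might look roughly like the following sketch;
the helper names and struct layout are assumptions, though the
FFA_MSG_SEND_DIRECT_REQ function ID comes from the FF-A specification:

    #include <stdint.h>

    struct ffa_value { uint64_t fid, a1, a2, a3, a4; };

    extern struct ffa_value ffa_msg_wait(void);            /* assumed */
    extern struct ffa_value ffa_msg_send_direct_resp(uint64_t src_dst,
                                                     uint64_t payload);

    #define FFA_MSG_SEND_DIRECT_REQ_SMC64 0xC400006FULL

    void tsp_event_loop(void)
    {
            /* signal init complete and block until a message arrives */
            struct ffa_value msg = ffa_msg_wait();

            for (;;) {
                    if (msg.fid == FFA_MSG_SEND_DIRECT_REQ_SMC64) {
                            /* service the request; a real handler would
                             * also recognise the power on/off framework
                             * messages mentioned in item 3 */
                            msg = ffa_msg_send_direct_resp(msg.a1,
                                                           msg.a4 + 1);
                    } else {
                            msg = ffa_msg_wait();
                    }
            }
    }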
fix(pie): invalidate data cache in the entire image range if PIE is enabled

Currently on image entry, the data cache in the RW address range is
invalidated before the MMU is enabled, to safeguard against potential
stale data from the previous firmware stage. If PIE is enabled, however,
RO sections including the GOT may also be modified during the PIE fixup.
Therefore, to be on the safe side, invalidate the entire image
region if PIE is enabled.

Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I7ee2a324fe4377b026e32f9ab842617ad4e09d89
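A hedged sketch of the behaviour described; the symbol names follow
common TF-A linker-script conventions but are assumptions here, and the
real invalidation happens in the assembly entry path:

    #include <stddef.h>
    #include <stdint.h>

    #ifndef ENABLE_PIE
    #define ENABLE_PIE 1 /* normally set by the build system */
    #endif

    extern char __TEXT_START__[], __RW_START__[], __IMAGE_END__[];
    extern void inv_dcache_range(uintptr_t addr, size_t size);

    static void invalidate_on_entry(void)
    {
    #if ENABLE_PIE
            /* PIE fixups may patch the GOT inside RO sections, so cover
             * the whole image, not just the RW range */
            inv_dcache_range((uintptr_t)__TEXT_START__,
                             (size_t)(__IMAGE_END__ - __TEXT_START__));
    #else
            inv_dcache_range((uintptr_t)__RW_START__,
                             (size_t)(__IMAGE_END__ - __RW_START__));
    #endif
    }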
Avoid the use of linker *_SIZE__ macros

The use of end addresses is preferred over the size of sections. This
was done for some AARCH64 files for PIE with commit [1], and some extra
explanations can be found in its commit message. Align the missing
AARCH64 files. For AARCH32 files, this is required to prepare for the
introduction of PIE support.

[1] f1722b693d36 ("PIE: Use PC relative adrp/adr for symbol reference")

Change-Id: I8f1c06580182b10c680310850f72904e58a54d7d
Signed-off-by: Yann Gautier <yann.gautier@st.com>
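The distinction matters because a *_SIZE__ symbol is an absolute value
emitted by the linker, which cannot be referenced PC-relatively, while
start/end addresses relocate cleanly. An illustrative contrast, with
symbol names per common TF-A linker scripts (hedged):

    #include <stddef.h>

    extern char __BSS_START__[], __BSS_END__[];

    static inline size_t bss_size(void)
    {
            /* preferred: difference of two relocatable addresses,
             * rather than an absolute __BSS_SIZE__ symbol */
            return (size_t)(__BSS_END__ - __BSS_START__);
    }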
TSP: Fix GCC 11.0.0 compilation error

This patch fixes the following compilation error
reported by aarch64-none-elf-gcc 11.0.0:

  bl32/tsp/tsp_main.c: In function 'tsp_smc_handler':
  bl32/tsp/tsp_main.c:393:9: error: 'tsp_get_magic' accessing 32 bytes
   in a region of size 16 [-Werror=stringop-overflow=]
    393 |         tsp_get_magic(service_args);
        |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~
  bl32/tsp/tsp_main.c:393:9: note: referencing argument 1 of type
   'uint64_t *' {aka 'long long unsigned int *'}
  In file included from bl32/tsp/tsp_main.c:19:
  bl32/tsp/tsp_private.h:64:6: note: in a call to function 'tsp_get_magic'
     64 | void tsp_get_magic(uint64_t args[4]);
        |      ^~~~~~~~~~~~~

by changing the declaration of the tsp_get_magic function from

  void tsp_get_magic(uint64_t args[4]);

to

  uint128_t tsp_get_magic(void);

which returns the arguments directly in the x0 and x1 registers.

In bl32/tsp/tsp_main.c the current tsp_smc_handler() implementation
calls tsp_get_magic(service_args);, where the service_args array is
declared as

  uint64_t service_args[2];

and tsp_get_magic() in bl32/tsp/aarch64/tsp_request.S copies only
2 registers into the output buffer:

  /* Store returned arguments to the array */
  stp x0, x1, [x4, #0]

Change-Id: Ib34759fc5d7bb803e6c734540d91ea278270b330
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
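The fixed calling pattern then looks roughly like this sketch, assuming
uint128_t is the compiler's __uint128_t as commonly typedef'd in TF-A:

    #include <stdint.h>

    typedef __uint128_t uint128_t;

    extern uint128_t tsp_get_magic(void);

    static void read_magic(uint64_t *lo, uint64_t *hi)
    {
            uint128_t magic = tsp_get_magic();

            *lo = (uint64_t)magic;         /* returned in x0 */
            *hi = (uint64_t)(magic >> 64); /* returned in x1 */
    }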
Increase type widths to satisfy width requirements

Usually, C has no problem up-converting types to larger bit sizes. MISRA
rule 10.7 requires that you not do this, or be very explicit about this.
This resolves the following required rule:

  bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]
  <None> The width of the composite expression "0U | ((mode & 3U) << 2U)
  | 1U | 0x3c0U" (32 bits) is less that the right hand operand
  "18446744073709547519ULL" (64 bits).

This also resolves MISRA defects such as:

  bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
  In the expression "3U << 20", shifting more than 7 bits, the number
  of bits in the essential type of the left expression, "3U", is not
  allowed.

Further, MISRA requires that all shifts don't overflow. The definition
of PAGE_SIZE was (1U << 12), and 1U is 8 bits. This caused about 50
issues. This fixes the violation by changing the definition to
(1UL << 12). Since this uses 32 bits, it should not create any issues
for aarch32.

This patch also contains a fix for a build failure in the sun50i_a64
platform. Specifically, these MISRA fixes removed a single and
instruction,

  92407e73        and     x19, x19, #0xffffffff

from the cm_setup_context function. This caused a relocation in
psci_cpus_on_start to require a linker-generated stub, which increased
the size of the .text section and caused an alignment later on to go
over a page boundary and round up to the end of RAM before placing the
.data section. This section is of non-zero size and therefore causes a
link error.

The fix included in this patch reorders the functions at link time
without changing their ordering with respect to alignment.

Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
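A worked example of the PAGE_SIZE case: under MISRA's essential type
model the constant 1U is only 8 bits wide, so shifting it by 12
overflows its essential type even though C itself promotes the operand
to int first:

    #include <stdio.h>

    #define PAGE_SIZE_BAD (1U << 12)  /* flagged: essential type of 1U
                                       * is 8 bits */
    #define PAGE_SIZE_OK  (1UL << 12) /* 32-bit essential type, no
                                       * violation */

    int main(void)
    {
            printf("%lu\n", (unsigned long)PAGE_SIZE_OK); /* prints 4096 */
            return 0;
    }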
TSP: add PIE support

This implementation simply mimics that of BL31.

Change-Id: Ibbaa4ca012d38ac211c52b0b3e97449947160e07
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Prevent speculative execution past ERET

Even though ERET always causes a jump to another address, aarch64 CPUs
speculatively execute following instructions as if the ERET
instruction was not a jump instruction. The speculative execution does
not cross privilege levels (to the jump target as one would expect),
but it continues on the kernel privilege level as if the ERET
instruction did not change the control flow, thus executing anything
that is accidentally linked after the ERET instruction. Later, the
results of this speculative execution are always architecturally
discarded; however, they can leak data using microarchitectural side
channels. This speculative execution is very reliable (it seems to be
unconditional) and it manages to complete even relatively
performance-heavy operations (e.g. multiple dependent fetches from
uncached memory).

This was fixed in Linux, FreeBSD, OpenBSD and OP-TEE OS:

https://github.com/torvalds/linux/commit/679db70801da9fda91d26caf13bf5b5ccc74e8e8
https://github.com/freebsd/freebsd/commit/29fb48ace4186a41c409fde52bcf4216e9e50b61
https://github.com/openbsd/src/commit/3a08873ece1cb28ace89fd65e8f3c1375cc98de2
https://github.com/OP-TEE/optee_os/commit/abfd092aa19f9c0251e3d5551e2d68a9ebcfec8a

It is demonstrated in a SafeSide example:

https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c

Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
Refactor ARMv8.3 Pointer Authentication support code

This patch provides the following features and makes the modifications
listed below:

- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[] of the cpu_data
  structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
  which returns a 128-bit value and uses the Generic Timer physical
  counter value to increase the randomness of the generated key. The
  new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
  generate, program and enable APIAKey_EL1 for EL1 and EL3
  respectively; `pauth_disable_el1()` and `pauth_disable_el3()`
  functions disable PAuth for EL1 and EL3 respectively;
  `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from
  the cpu_data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to
  `save_gp_registers()` and `pauth_context_save()`;
  `restore_gp_pauth_registers()` replaces `pauth_context_restore()` and
  `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed, with the
  corresponding code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated
  space for 12 uint64_t PAuth registers instead of 10, by removing the
  macro CTX_PACGAKEY_END from
  `include/lib/el3_runtime/aarch64/context.h` and assigning its value
  to CTX_PAUTH_REGS_END.
- Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions in the
  `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.

Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
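A hedged sketch of what a plat_init_apkey() along these lines could
look like; the counter accessor name matches common TF-A helpers but is
an assumption, and the seeds are placeholders rather than a real
entropy source:

    #include <stdint.h>

    typedef __uint128_t uint128_t;

    extern uint64_t read_cntpct_el0(void); /* assumed counter accessor */

    uint128_t plat_init_apkey(void)
    {
            uint64_t seed_lo = 0x0123456789abcdefULL; /* placeholder */
            uint64_t seed_hi = 0xfedcba9876543210ULL; /* placeholder */
            uint64_t cnt = read_cntpct_el0();

            /* fold the counter in so every warm boot / CPU On event
             * produces a different key */
            return ((uint128_t)(seed_hi ^ (cnt << 1)) << 64) |
                   (seed_lo ^ cnt);
    }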
Add support for Branch Target Identification

This patch adds the functionality needed for platforms to provide the
Branch Target Identification (BTI) extension, introduced to AArch64 in
Armv8.5-A, which adds the BTI instruction used to mark valid targets
for indirect branches. The patch sets the new GP bit [50] in stage 1
Translation Table Block and Page entries to denote guarded EL3 code
pages, which will cause the processor to trap instructions in protected
pages trying to perform an indirect branch to any instruction other
than BTI.

The BTI feature is selected by the BRANCH_PROTECTION option, which
supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer
Authentication and is disabled by default. Enabling BTI requires
compiler support and was tested with GCC versions 9.0.0, 9.0.1 and
10.0.0. The assembly macros and helpers are modified to accommodate the
BTI instruction.

This is an experimental feature.

Note: the previous ENABLE_PAUTH build option to enable PAuth in EL3 is
now an internal flag and the BRANCH_PROTECTION flag should be used
instead to enable Pointer Authentication.

Note: the USE_LIBROM=1 option is currently not supported.

Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
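The GP bit usage can be pictured with this illustrative-only fragment;
the bit position is architectural, but the helper itself is
hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    #define GP_BIT (1ULL << 50) /* stage 1 guarded-page bit */

    static uint64_t set_guarded(uint64_t desc, bool el3_code_page)
    {
            if (el3_code_page)
                    desc |= GP_BIT; /* indirect branches into this page
                                     * must land on a BTI instruction */
            return desc;
    }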
Apply stricter speculative load restriction

The SCTLR.DSSBS bit is zero by default, thus disabling speculative
loads. However, we also explicitly set it to zero for BL2 and TSP
images when each image initialises its context. This is done to ensure
that the image environment is initialised in a safe state, regardless
of the reset value of the bit.

Change-Id: If25a8396641edb640f7f298b8d3309d5cba3cd79
Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
TSP: Enable pointer authentication support

The size increase after enabling options related to ARMv8.3-PAuth is:

+----------------------------+-------+-------+-------+--------+
|                            | text  | bss   | data  | rodata |
+----------------------------+-------+-------+-------+--------+
| CTX_INCLUDE_PAUTH_REGS = 1 | +40   | +0    | +0    | +0     |
|                            | 0.4%  |       |       |        |
+----------------------------+-------+-------+-------+--------+
| ENABLE_PAUTH = 1           | +352  | +0    | +16   | +0     |
|                            | 3.1%  |       | 15.8% |        |
+----------------------------+-------+-------+-------+--------+

Results calculated with the following build configuration:

  make PLAT=fvp SPD=tspd DEBUG=1 \
    SDEI_SUPPORT=1 \
    EL3_EXCEPTION_HANDLING=1 \
    TSP_NS_INTR_ASYNC_PREEMPT=1 \
    CTX_INCLUDE_PAUTH_REGS=1 \
    ENABLE_PAUTH=1

Change-Id: I6cc1fe0b2345c547dcef66f98758c4eb55fe5ee4
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Sanitise includes across codebase

Enforce full include path for includes. Deprecate old paths.

The following folders inside include/lib have been left unchanged:

- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}

The reason for this change is that having a global namespace for
includes isn't a good idea. It defeats one of the advantages of having
folders and it introduces problems that are sometimes subtle (because
you may not know the header you are actually including if there are two
of them).

For example, this patch had to be created because two headers were
called the same way: e0ea0928d5b7 ("Fix gpio includes of mt8173 platform
to avoid collision."). More recently, this patch has had similar
problems: 46f9b2c3a282 ("drivers: add tzc380 support").

This problem was introduced in commit 4ecca33988b9 ("Move include and
source files to logical locations"). At that time, there weren't too
many headers so it wasn't a real issue. However, time has shown that
this creates problems.

Platforms that want to preserve the way they include headers may add the
removed paths to PLAT_INCLUDES, but this is discouraged.

Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
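For example, the enforced style reads (paths illustrative):

    #include <lib/xlat_tables/xlat_tables_v2.h>  /* full path, unambiguous */

rather than the deprecated global-namespace form:

    #include <xlat_tables_v2.h>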
Add end_vector_entry assembler macro

check_vector_size checks if the size of the vector fits
in the size reserved for it. This check creates problems in
the Clang assembler. A new macro, end_vector_entry, is added
and check_vector_size is deprecated.

This new macro fills the current exception vector until the next
exception vector. If the size of the current vector is bigger
than 32 instructions then it gives an error.

Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
TSP: Enable cache along with MMU

Previously, data caches were disabled while enabling the MMU only
because of the active stack. Now that we can enable the MMU without
using the stack, we can enable both the MMU and data caches at the
same time.

Change-Id: I73f3b8bae5178610e17e9ad06f81f8f6f97734a6
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
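In effect the image can now set SCTLR.M and SCTLR.C in one write; a
hedged sketch, where the bit positions are architectural but the
accessor names are assumptions:

    #include <stdint.h>

    #define SCTLR_M_BIT (1ULL << 0) /* MMU enable */
    #define SCTLR_C_BIT (1ULL << 2) /* data cache enable */

    extern uint64_t read_sctlr_el1(void);      /* assumed accessor */
    extern void write_sctlr_el1(uint64_t val); /* assumed accessor */

    static void enable_mmu_and_dcache(void)
    {
            write_sctlr_el1(read_sctlr_el1() | SCTLR_M_BIT | SCTLR_C_BIT);
    }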
Merge pull request #1054 from jwerner-chromium/JW_crash_x30

Fix x30 reporting for unhandled exceptions
Fix x30 reporting for unhandled exceptions

Some error paths that lead to a crash dump will overwrite the value in
the x30 register by calling functions with the no_ret macro, which
resolves to a BL instruction. This is not very useful and not what the
reader would expect, since a crash dump should usually show all
registers in the state they were in when the exception happened. This
patch replaces the offending function calls with a B instruction to
preserve the value in x30.

Change-Id: I2a3636f2943f79bab0cd911f89d070012e697c2a
Signed-off-by: Julius Werner <jwerner@chromium.org>
Add new alignment parameter to func assembler macro

Assembler programmers are used to being able to define functions with a
specific alignment with a pattern like this:

    .align X
  myfunction:

However, this pattern is subtly broken when instead of a direct label
like 'myfunction:', you use the 'func myfunction' macro that's standard
in Trusted Firmware. Since the func macro declares a new section for the
function, the .align directive written above it actually applies to the
*previous* section in the assembly file, and the function it was
supposed to apply to is linked with default alignment.

An extreme case can be seen in Rockchip's plat_helpers.S which contains
this code:

  [...]
  endfunc plat_crash_console_putc

  .align 16
  func platform_cpu_warmboot
  [...]

This assembles into the following plat_helpers.o:

  Sections:
  Idx Name                           Size      [...]  Algn
    9 .text.plat_crash_console_putc  00010000  [...]  2**16
   10 .text.platform_cpu_warmboot    00000080  [...]  2**3

As can be seen, the *previous* function actually got the alignment
constraint, and it is also 64KB big even though it contains only two
instructions, because the .align directive at the end of its section
forces the assembler to insert a giant sled of NOPs. The function we
actually wanted to align has the default constraint. This code only
works at all because the linker just happens to put the two functions
right behind each other when linking the final image, and since the end
of plat_crash_console_putc is aligned the start of platform_cpu_warmboot
will also be. But it still wastes almost 64KB of image space
unnecessarily, and it will break under certain circumstances (e.g. if
the plat_crash_console_putc function becomes unused and its section gets
garbage-collected out).

There's no real way to fix this with the existing func macro. Code like

  func myfunc
  .align X

happens to do the right thing, but is still not really correct code
(because the function label is inserted before the .align directive, so
the assembler is technically allowed to insert padding at the beginning
of the function which would then get executed as instructions if the
function was called). Therefore, this patch adds a new parameter with a
default value to the func macro that allows overriding its alignment.

Also fix up all existing instances of this dangerous antipattern.

Change-Id: I5696a07e2fde896f21e0e83644c95b7b6ac79a10
Signed-off-by: Julius Werner <jwerner@chromium.org>
Merge pull request #925 from dp-arm/dp/spdx

Use SPDX license identifiers
Use SPDX license identifiers

To make software license auditing simpler, use SPDX[0] license
identifiers instead of duplicating the license text in every file.

NOTE: Files that have been imported by FreeBSD have not been modified.

[0]: https://spdx.org/

Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
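A typical file header after this change (year and copyright holder
illustrative):

    /*
     * Copyright (c) 2017, Arm Limited and Contributors. All rights reserved.
     *
     * SPDX-License-Identifier: BSD-3-Clause
     */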
Update terminology: standard SMC to yielding SMC

Since Issue B (November 2016) of the SMC Calling Convention document,
standard SMC calls are renamed to yielding SMC calls to help avoid
confusion with the standard service SMC range, which remains unchanged.

http://infocenter.arm.com/help/topic/com.arm.doc.den0028b/ARM_DEN0028B_SMC_Calling_Convention.pdf

This patch adds a new define for the yielding SMC call type and
deprecates the current standard SMC call type. The TSP is migrated to
use this new terminology and, additionally, the documentation and code
comments are updated to use this new terminology.

Change-Id: I0d7cc0224667ee6c050af976745f18c55906a793
Signed-off-by: David Cunado <david.cunado@arm.com>
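A hedged sketch of the terminology switch in the headers; the values
mirror SMCCC bit 31 of the function ID (1 = fast, 0 = yielding), but
the exact macro names in the patch may differ:

    #define SMC_TYPE_FAST   1U
    #define SMC_TYPE_YIELD  0U
    #define SMC_TYPE_STD    SMC_TYPE_YIELD /* deprecated alias */

    /* call type is carried in bit 31 of the SMC function ID */
    #define GET_SMC_TYPE(fid) (((fid) >> 31U) & 1U)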
Add support for GCC stack protection

Introduce the new build option ENABLE_STACK_PROTECTOR. It enables
compilation of all BL images with one of the GCC -fstack-protector-*
options.

A new platform function plat_get_stack_protector_canary() is
introduced. It returns a value that is used to initialize the canary
for stack corruption detection. Returning a random value will prevent
an attacker from predicting the value and greatly increase the
effectiveness of the protection.

A message is printed at the ERROR level when a stack corruption is
detected.

To be effective, the global data must be stored at an address lower
than the base of the stacks. Failure to do so would allow an attacker
to overwrite the canary as part of an attack, which would void the
protection.

The FVP implementation of plat_get_stack_protector_canary is weak as
there is no real source of entropy on the FVP. It therefore relies on a
timer's value, which could be predictable.

Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
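A platform hook in the spirit of the FVP fallback described above might
look like this sketch; the counter accessor is an assumption, and a
production platform should return hardware RNG output instead:

    #include <stdint.h>

    typedef uintptr_t u_register_t;

    extern uint64_t read_cntpct_el0(void); /* assumed counter accessor */

    u_register_t plat_get_stack_protector_canary(void)
    {
            /* predictable on a model; real hardware should use an RNG */
            return (u_register_t)(read_cntpct_el0() ^
                                  0x5ca1ab1e5ca1ab1eULL);
    }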
Simplify translation tables headers dependencies

The files affected by this patch don't really depend on `xlat_tables.h`.
By changing the included file it becomes easier to switch between the
two versions of the translation tables library.

Change-Id: Idae9171c490e0865cb55883b19eaf942457c4ccc
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Introduce unified API to zero memory

Introduce a zeromem_dczva function on AArch64 that can handle unaligned
addresses and make use of the DC ZVA instruction to zero a whole block
at a time. This zeroing takes place directly in the cache to speed it
up without doing external memory accesses.

Remove the zeromem16 function on AArch64 and replace it with an alias
to zeromem. This zeromem16 function is now deprecated.

Remove the 16-byte alignment constraint on __BSS_START__ in
firmware-design.md as it is not mandatory anymore (it used to comply
with zeromem16 requirements).

Change the 16-byte alignment constraints in SP min's linker script to
an 8-byte alignment constraint, as the AArch32 zeromem implementation
is now more efficient on 8-byte aligned addresses.

Introduce zero_normalmem and zeromem helpers in a platform-agnostic
header that are implemented this way:

* AArch32:
  * zero_normalmem: zero using usual data access
  * zeromem: alias for zero_normalmem
* AArch64:
  * zero_normalmem: zero normal memory using the DC ZVA instruction
    (needs MMU enabled)
  * zeromem: zero using usual data access

Usage guidelines: in most cases, zero_normalmem should be preferred.
There are 2 scenarios where zeromem (or memset) must be used instead:

* Code that must run with MMU disabled (which means all memory is
  considered device memory for data accesses).
* Code that fills device memory with null bytes.

Optionally, the following rule can be applied if performance is
important:

* Code zeroing small areas (a few bytes) that are not secrets should
  use memset to take advantage of compiler optimizations.

Note: code zeroing security-critical information should use
zero_normalmem/zeromem instead of memset, to avoid removal by
compilers' optimizations in some cases or misbehaving versions of GCC.

Fixes ARM-software/tf-issues#408

Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
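A usage sketch following the guidelines above; the declarations assume
the helpers take pointer + length, which may not match the exact
prototypes:

    #include <stddef.h>

    extern void zero_normalmem(void *mem, size_t length); /* DC ZVA path */
    extern void zeromem(void *mem, size_t length);        /* plain stores */

    void scrub_secret(void *buf, size_t len, int mmu_enabled)
    {
            if (mmu_enabled)
                    zero_normalmem(buf, len); /* fast, cache-resident */
            else
                    zeromem(buf, len);        /* safe with MMU off */
    }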