# bb00ea5b | 27-Apr-2018 | Jeenu Viswambharan <jeenu.viswambharan@arm.com>
TSP: Enable cache along with MMU
Previously, data caches were disabled while enabling the MMU only because of the active stack. Now that we can enable the MMU without using the stack, we can enable both the MMU and data caches at the same time.
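As an illustration of what "both at the same time" means architecturally (a sketch, not code from the patch): with no stack in use, the M and C bits of SCTLR_EL1 can be set in a single update.

    mrs  x0, sctlr_el1
    orr  x0, x0, #(1 << 0)    /* SCTLR_EL1.M: enable MMU */
    orr  x0, x0, #(1 << 2)    /* SCTLR_EL1.C: enable data cache */
    msr  sctlr_el1, x0
    isb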
Change-Id: I73f3b8bae5178610e17e9ad06f81f8f6f97734a6
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
# 2458e37a | 22-Aug-2017 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #1053 from jwerner-chromium/JW_func_align
Add new alignment parameter to func assembler macro
# 64726e6d | 01-Aug-2017 | Julius Werner <jwerner@chromium.org>
Add new alignment parameter to func assembler macro
Assembler programmers are used to being able to define functions with a specific alignment with a pattern like this:
    .align X
    myfunction:
However, this pattern is subtly broken when instead of a direct label like 'myfunction:', you use the 'func myfunction' macro that's standard in Trusted Firmware. Since the func macro declares a new section for the function, the .align directive written above it actually applies to the *previous* section in the assembly file, and the function it was supposed to apply to is linked with default alignment.
An extreme case can be seen in Rockchip's plat_helpers.S which contains this code:
    [...]
    endfunc plat_crash_console_putc

    .align 16
    func platform_cpu_warmboot
    [...]
This assembles into the following plat_helpers.o:
    Sections:
    Idx Name                           Size     [...]  Algn
      9 .text.plat_crash_console_putc  00010000 [...]  2**16
     10 .text.platform_cpu_warmboot    00000080 [...]  2**3
As can be seen, the *previous* function actually got the alignment constraint, and it is also 64KB big even though it contains only two instructions, because the .align directive at the end of its section forces the assembler to insert a giant sled of NOPs. The function we actually wanted to align has the default constraint. This code only works at all because the linker just happens to put the two functions right behind each other when linking the final image, and since the end of plat_crash_console_putc is aligned, the start of platform_cpu_warmboot will also be. But it still wastes almost 64KB of image space unnecessarily, and it will break under certain circumstances (e.g. if the plat_crash_console_putc function becomes unused and its section gets garbage-collected out).
There's no real way to fix this with the existing func macro. Code like
    func myfunc
    .align X
happens to do the right thing, but is still not really correct code (because the function label is inserted before the .align directive, so the assembler is technically allowed to insert padding at the beginning of the function which would then get executed as instructions if the function was called). Therefore, this patch adds a new parameter with a default value to the func macro that allows overriding its alignment.
Also fix up all existing instances of this dangerous antipattern.
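For illustration, a fixed-up call site passes the alignment to the macro itself, so the directive lands inside the function's own section (a sketch assuming the parameter carries the log2 alignment, per the description above):

    func platform_cpu_warmboot, 16
    [...]
    endfunc platform_cpu_warmboot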
Change-Id: I5696a07e2fde896f21e0e83644c95b7b6ac79a10
Signed-off-by: Julius Werner <jwerner@chromium.org>
# f132b4a0 | 04-May-2017 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #925 from dp-arm/dp/spdx
Use SPDX license identifiers
# 82cb2c1a | 03-May-2017 | dp-arm <dimitris.papastamos@arm.com>
Use SPDX license identifiers
To make software license auditing simpler, use SPDX[0] license identifiers instead of duplicating the license text in every file.
NOTE: Files that have been imported from FreeBSD have not been modified.
[0]: https://spdx.org/
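For illustration, a typical file header after this change reduces to a single identifier line in place of the full license text (BSD-3-Clause being the project's license; the copyright line is representative):

    /*
     * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
     *
     * SPDX-License-Identifier: BSD-3-Clause
     */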
Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
# 4b427bd4 | 02-May-2017 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #919 from davidcunado-arm/dc/smc_yielding_generic
Update terminology: standard SMC to yielding SMC
# 16292f54 | 05-Apr-2017 | David Cunado <david.cunado@arm.com>
Update terminology: standard SMC to yielding SMC
Since Issue B (November 2016) of the SMC Calling Convention document, standard SMC calls are renamed to yielding SMC calls to help avoid confusion with the standard service SMC range, which remains unchanged.
http://infocenter.arm.com/help/topic/com.arm.doc.den0028b/ARM_DEN0028B_SMC_Calling_Convention.pdf
This patch adds a new define for the yielding SMC call type and deprecates the current standard SMC call type. The TSP is migrated to use this new terminology and, additionally, the documentation and code comments are updated to use it.
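A sketch of what the renamed type definitions might look like in the shared SMC header (the names follow this message; the values and the deprecated alias are assumptions):

    #define SMC_TYPE_FAST   1                /* unchanged */
    #define SMC_TYPE_YIELD  0                /* new name for standard calls */
    #define SMC_TYPE_STD    SMC_TYPE_YIELD   /* deprecated alias */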
Change-Id: I0d7cc0224667ee6c050af976745f18c55906a793
Signed-off-by: David Cunado <david.cunado@arm.com>
# ed756252 | 06-Apr-2017 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #886 from dp-arm/dp/stack-protector
Add support for GCC stack protection
# 51faada7 | 24-Feb-2017 | Douglas Raillard <douglas.raillard@arm.com>
Add support for GCC stack protection
Introduce a new build option, ENABLE_STACK_PROTECTOR. It enables compilation of all BL images with one of the GCC -fstack-protector-* options.
A new platform function plat_get_stack_protector_canary() is introduced. It returns a value that is used to initialize the canary for stack corruption detection. Returning a random value will prevent an attacker from predicting the value and greatly increase the effectiveness of the protection.
A message is printed at the ERROR level when a stack corruption is detected.
To be effective, the global data must be stored at an address lower than the base of the stacks. Failure to do so would allow an attacker to overwrite the canary as part of an attack which would void the protection.
The FVP implementation of plat_get_stack_protector_canary is weak, as there is no real source of entropy on the FVP. It therefore relies on a timer's value, which could be predictable.
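A hypothetical override in the same spirit, returning the generic timer count as the canary (a sketch only; a production platform should use a real entropy source):

    func plat_get_stack_protector_canary
        mrs  x0, cntpct_el0    /* timer value: weak, potentially predictable */
        ret
    endfunc plat_get_stack_protector_canary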
Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
# 28ee754d | 16-Mar-2017 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #856 from antonio-nino-diaz-arm/an/dynamic-xlat
Introduce version 2 of the translation tables library
# d50ece03 | 20-Feb-2017 | Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Simplify translation tables headers dependencies
The files affected by this patch don't really depend on `xlat_tables.h`. By changing the included file, it becomes easier to switch between the two versions of the translation tables library.
Change-Id: Idae9171c490e0865cb55883b19eaf942457c4ccc
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
# 108e4df7 | 16-Feb-2017 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #834 from douglas-raillard-arm/dr/use_dc_zva_zeroing
Use DC ZVA instruction to zero memory
# 308d359b | 02-Dec-2016 | Douglas Raillard <douglas.raillard@arm.com>
Introduce unified API to zero memory
Introduce a zeromem_dczva function on AArch64 that can handle unaligned addresses and makes use of the DC ZVA instruction to zero a whole block at a time. This zeroing takes place directly in the cache to speed it up without doing external memory accesses.
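The core of the aligned fast path reduces to a loop like this sketch (assuming a 64-byte DC ZVA block size, with x0/x1 holding the aligned start/end; the real routine reads the block size from DCZID_EL0 and handles unaligned edges separately):

    1:  dc   zva, x0          /* zero one whole block directly in the cache */
        add  x0, x0, #64
        cmp  x0, x1
        b.lo 1b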
Remove the zeromem16 function on AArch64 and replace it with an alias to zeromem. This zeromem16 function is now deprecated.
Remove the 16-byte alignment constraint on __BSS_START__ in firmware-design.md, as it is no longer mandatory (it used to comply with zeromem16 requirements).
Change the 16-byte alignment constraint in SP min's linker script to an 8-byte alignment constraint, as the AArch32 zeromem implementation is now more efficient on 8-byte aligned addresses.
Introduce zero_normalmem and zeromem helpers in a platform-agnostic header, implemented this way:
* AArch32:
  * zero_normalmem: zero using usual data access
  * zeromem: alias for zero_normalmem
* AArch64:
  * zero_normalmem: zero normal memory using the DC ZVA instruction (needs MMU enabled)
  * zeromem: zero using usual data access
Usage guidelines: in most cases, zero_normalmem should be preferred.
There are 2 scenarios where zeromem (or memset) must be used instead:
* Code that must run with the MMU disabled (which means all memory is considered device memory for data accesses).
* Code that fills device memory with null bytes.
Optionally, the following rule can be applied if performance is important:
* Code zeroing small areas (a few bytes) that are not secrets should use memset to take advantage of compiler optimizations.
Note: Code zeroing security-related critical information should use zero_normalmem/zeromem instead of memset to avoid removal by compilers' optimizations in some cases or misbehaving versions of GCC.
Fixes ARM-software/tf-issues#408
Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
# cef7b3ce | 23-Dec-2016 | davidcunado-arm <david.cunado@arm.com>
Merge pull request #798 from douglas-raillard-arm/dr/fix_std_smc_after_suspend
Abort preempted TSP STD SMC after PSCI CPU suspend
# 3df6012a | 24-Nov-2016 | Douglas Raillard <douglas.raillard@arm.com>
Abort preempted TSP STD SMC after PSCI CPU suspend
Standard SMC requests that are handled in the secure world by the Secure Payload can be preempted by interrupts that must be handled in the normal world. When the TSP is preempted, the secure context is stored and control is passed to the normal world to handle the non-secure interrupt. Once completed, the preempted secure context is restored. When restoring the preempted context, the dispatcher assumes that the TSP preempted context is still stored as the SECURE context by the context management library.
However, PSCI power management operations cause synchronous entry into the TSP. This overwrites the preempted SECURE context in the context management library. When restoring the SECURE context, the Secure Payload crashes because this context is no longer the preempted one.
This patch avoids corruption of the preempted SECURE context by aborting any preempted SMC during PSCI power management calls. The abort_std_smc_entry hook of the TSP is called when aborting the SMC request.
It also exposes this feature as a fast SMC, callable from the normal world with FID TSP_FID_ABORT, to abort a preempted SMC.
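From the normal world, the abort could be requested like this (a sketch; only the TSP_FID_ABORT name comes from this message):

    ldr  x0, =TSP_FID_ABORT    /* fast SMC function ID */
    smc  #0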
Change-Id: I7a70347e9293f47d87b5de20484b4ffefb56b770
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
# 1b5fa6ef | 12-Dec-2016 | danh-arm <dan.handley@arm.com>
Merge pull request #774 from jeenu-arm/no-return-macro
Define and use no_ret macro where no return is expected
# a806dad5 | 30-Nov-2016 | Jeenu Viswambharan <jeenu.viswambharan@arm.com>
Define and use no_ret macro where no return is expected
There are many instances in ARM Trusted Firmware where control is transferred to functions from which return isn't expected. Such jumps are made using the 'bl' instruction to provide the callee with the location from which it was jumped to. Additionally, debuggers infer the caller by examining where the 'lr' register points. If a 'bl' of the nature described above falls at the end of an assembly function, 'lr' will be left pointing to a location outside of the function range. This misleads the debugger's back trace.
This patch defines a 'no_ret' macro to be used when jumping to functions from which return isn't expected. The macro ensures that the 'bl' instruction is used for the jump and, for debug builds, places a 'nop' instruction immediately thereafter (unless instructed otherwise) so as to leave 'lr' pointing within the function range.
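A minimal sketch of such a macro (the real one may take extra parameters, e.g. to skip the nop):

    .macro no_ret _func:req
        bl  \_func
    #if DEBUG
        nop    /* keeps 'lr' pointing inside the calling function */
    #endif
    .endm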
Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
# 63a6d09a | 16-Mar-2016 | danh-arm <dan.handley@arm.com>
Merge pull request #550 from antonio-nino-diaz-arm/an/dead_loops
Remove all non-configurable dead loops
# 1c3ea103 | 01-Feb-2016 | Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Remove all non-configurable dead loops
Added a new platform porting function plat_panic_handler, to allow platforms to handle unexpected error situations. It must be implemented in assembly as it may be called before the C environment is initialized. A default implementation is provided, which simply spins.
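Consistent with that description, the default could be as simple as (a sketch, not necessarily the patch's exact code):

    func plat_panic_handler
        b  plat_panic_handler    /* spin forever */
    endfunc plat_panic_handler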
Corrected all dead loops in generic code to call this function instead. This includes the dead loop that occurs at the end of the call to panic().
All unnecessary wfis from bl32/tsp/aarch64/tsp_exceptions.S have been removed.
Change-Id: I67cb85f6112fa8e77bd62f5718efcef4173d8134
# 4ca473db | 09-Dec-2015 | danh-arm <dan.handley@arm.com>
Merge pull request #456 from soby-mathew/sm/gicv3-tsp-plat-changes-v2
Modify TSP and ARM standard platforms for new GIC drivers v2
# 63b8440f | 13-Nov-2015 | Soby Mathew <soby.mathew@arm.com>
TSP: Allow preemption of synchronous S-EL1 interrupt handling
Earlier, the TSP only ever expected to be preempted during Standard SMC processing. If an S-EL1 interrupt triggered while in the normal world, it would be routed to S-EL1 `synchronously` for handling. The `synchronous` S-EL1 interrupt handler `tsp_sel1_intr_entry` used to panic if this S-EL1 interrupt was preempted by another higher-priority pending interrupt which should be handled in EL3, e.g. a Group 0 interrupt in GICv3.
With this patch, `tsp_sel1_intr_entry` now expects `TSP_PREEMPTED` as a return code from `tsp_common_int_handler`, in addition to 0 (interrupt successfully handled), and in both cases it issues an SMC with id `TSP_HANDLED_S_EL1_INTR`. The TSPD switches the context and returns to the normal world. In case a higher-priority EL3 interrupt was pending, execution will be routed to EL3, where the interrupt will be handled. On return to the normal world, the pending S-EL1 interrupt which was preempted will get routed to S-EL1 to be handled `synchronously` via `tsp_sel1_intr_entry`.
Change-Id: I2087c7fedb37746fbd9200cdda9b6dba93e16201
# 02446137 | 03-Sep-2015 | Soby Mathew <soby.mathew@arm.com>
Enable use of FIQs and IRQs as TSP interrupts
On a GICv2 system, interrupts that should be handled in the secure world are typically signalled as FIQs. On a GICv3 system, these interrupts are signalled as IRQs instead. The mechanism for handling both types of interrupts is the same in both cases. This patch enables the TSP to run on a GICv3 system by:
1. Adding support for handling IRQs in the exception handling code.
2. Removing use of "fiq" in the names of data structures, macros and functions.
The build option TSPD_ROUTE_IRQ_TO_EL3 is deprecated and is replaced with a new build flag TSP_NS_INTR_ASYNC_PREEMPT. For compatibility reasons, if the former build flag is defined, it will be used to define the value for the new build flag. The documentation is also updated accordingly.
Change-Id: I1807d371f41c3656322dd259340a57649833065e
# a6ef882c | 22-Sep-2015 | Achin Gupta <achin.gupta@arm.com>
Merge pull request #394 from achingupta/ag/ccn_driver
Support for ARM CoreLink CCN interconnects
# 54dc71e7 | 11-Sep-2015 | Achin Gupta <achin.gupta@arm.com>
Make generic code work in presence of system caches
On the ARMv8 architecture, cache maintenance operations by set/way on the last level of integrated cache do not affect the system cache. This means that such a flush or clean operation could result in the data being pushed out to the system cache rather than main memory. Another CPU could access this data before it enables its data cache or MMU. Such accesses could be serviced from the main memory instead of the system cache. If the data in the system cache has not yet been flushed or evicted to main memory, then there could be a loss of coherency. The only mechanism to guarantee that the main memory will be updated is to use cache maintenance operations to the PoC by MVA (see section D3.4.11 (System level caches) of the ARMv8-A Reference Manual (Issue A.g/ARM DDI0487A.G)).
This patch removes the reliance of Trusted Firmware on the flush by set/way operation to ensure visibility of data in the main memory. Cache maintenance operations by MVA are now used instead. The following are the broad category of changes:
1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is initialised. This ensures that any stale cache lines at any level of cache are removed.
2. Updates to global data in runtime firmware (BL31) by the primary CPU are made visible to secondary CPUs using a cache clean operation by MVA (see the sketch after this list).
3. Cache maintenance by set/way operations are only used prior to power down.
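A sketch of the by-MVA pattern relied upon in points 1 and 2 (assuming a 64-byte line size for brevity; real code derives the stride from CTR_EL0):

    1:  dc   civac, x0    /* clean+invalidate to PoC by address */
        add  x0, x0, #64
        cmp  x0, x1
        b.lo 1b
        dsb  sy            /* complete the maintenance before further use */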
NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES.
Fixes ARM-software/tf-issues#205
Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
# 432b9905 | 17-Aug-2015 | Achin Gupta <achin.gupta@arm.com>
Merge pull request #361 from achingupta/for_sm/psci_proto_v5
For sm/psci proto v5