# 3ee8e4d8 | 18-Aug-2020 | Alexei Fedorov <Alexei.Fedorov@arm.com>
Merge "runtime_exceptions: Update AT speculative workaround" into integration
# 3b8456bd | 23-Jul-2020 | Manish V Badarkhe <Manish.Badarkhe@arm.com>
runtime_exceptions: Update AT speculative workaround

As per the latest mailing list discussion [1], we decided to update the AT speculative workaround implementation in order to disable the page table walk for lower ELs (EL1 or EL0) immediately after context switching to EL3 from a lower EL.
The previous implementation of the AT speculative workaround is available in commit 45aecff00.
The AT speculative workaround is updated as follows:
1. Avoid saving and restoring the SCTLR and TCR registers for EL1 in the context save and restore routines respectively.
2. On EL3 entry, save the SCTLR and TCR registers for EL1.
3. On EL3 entry, update the EL1 system registers to disable the stage 1 page table walk for lower ELs (EL1 and EL0) and enable the EL1 MMU.
4. On EL3 exit, restore the SCTLR and TCR registers for EL1 that were saved in step 2.
[1]: https://lists.trustedfirmware.org/pipermail/tf-a/2020-July/000586.html
Change-Id: Iee8de16f81dc970a8f492726f2ddd57e7bd9ffb5
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
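A minimal C sketch of steps 2-4 above, assuming it runs at EL3 immediately after entry from and just before exit to a lower EL. The bit positions follow the Arm ARM; the sysreg accessor macros are local stand-ins, not the actual TF-A assembly in runtime_exceptions.S:

```c
#include <stdint.h>

#define SCTLR_M_BIT	(UINT64_C(1) << 0)	/* SCTLR_EL1.M: stage 1 MMU enable */
#define TCR_EPD0_BIT	(UINT64_C(1) << 7)	/* TCR_EL1.EPD0: disable TTBR0_EL1 walks */
#define TCR_EPD1_BIT	(UINT64_C(1) << 23)	/* TCR_EL1.EPD1: disable TTBR1_EL1 walks */

#define READ_SYSREG(reg)						\
	({ uint64_t _v;							\
	   __asm__ volatile("mrs %0, " #reg : "=r"(_v));		\
	   _v; })
#define WRITE_SYSREG(reg, val)						\
	__asm__ volatile("msr " #reg ", %0" :: "r"((uint64_t)(val)))
#define ISB()	__asm__ volatile("isb" ::: "memory")

static uint64_t saved_sctlr_el1, saved_tcr_el1;

/* Steps 2 and 3: immediately after entering EL3 from a lower EL. */
void at_workaround_el3_entry(void)
{
	saved_sctlr_el1 = READ_SYSREG(sctlr_el1);
	saved_tcr_el1 = READ_SYSREG(tcr_el1);

	/* Disable stage 1 page table walks for EL1/EL0 and keep the EL1 MMU enabled. */
	WRITE_SYSREG(tcr_el1, saved_tcr_el1 | TCR_EPD0_BIT | TCR_EPD1_BIT);
	ISB();
	WRITE_SYSREG(sctlr_el1, saved_sctlr_el1 | SCTLR_M_BIT);
	ISB();
}

/* Step 4: just before returning to the lower EL. */
void at_workaround_el3_exit(void)
{
	WRITE_SYSREG(sctlr_el1, saved_sctlr_el1);
	WRITE_SYSREG(tcr_el1, saved_tcr_el1);
	ISB();
}
```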
# 611efd96 | 19-May-2020 | Sandrine Bailleux <sandrine.bailleux@arm.com>
Merge "Fix compilation error when ENABLE_PIE=1" into integration
# 1a04b2e5 | 17-May-2020 | Varun Wadekar <vwadekar@nvidia.com>
Fix compilation error when ENABLE_PIE=1
This patch fixes compilation errors when ENABLE_PIE=1.

<snip>
bl31/aarch64/bl31_entrypoint.S: Assembler messages:
bl31/aarch64/bl31_entrypoint.S:61: Error: invalid operand (*UND* section) for `~'
bl31/aarch64/bl31_entrypoint.S:61: Error: invalid immediate
Makefile:1079: recipe for target 'build/tegra/t194/debug/bl31/bl31_entrypoint.o' failed
<snip>
Verified by setting 'ENABLE_PIE=1' for Tegra platform builds.
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Change-Id: Ifd184f89b86b4360fda86a6ce83fd8495f930bbc
# 8a0a8199 | 02-Jan-2020 | Alexei Fedorov <Alexei.Fedorov@arm.com>
Merge "bl31: Split into two separate memory regions" into integration
# f8578e64 | 18-Oct-2018 | Samuel Holland <samuel@sholland.org>
bl31: Split into two separate memory regions
Some platforms are extremely memory constrained and must split BL31 between multiple non-contiguous areas in SRAM. Allow the NOBITS sections (.bss, stacks, page tables, and coherent memory) to be placed in a separate region of RAM from the loaded firmware image.
Because the NOBITS region may be at a lower address than the rest of BL31, __RW_{START,END}__ and __BL31_{START,END}__ cannot include this region, or el3_entrypoint_common would attempt to invalidate the dcache for the entire address space. New symbols __NOBITS_{START,END}__ are added when SEPARATE_NOBITS_REGION is enabled, and the dcache for the NOBITS region is invalidated separately.
Signed-off-by: Samuel Holland <samuel@sholland.org>
Change-Id: Idedfec5e4dbee77e94f2fdd356e6ae6f4dc79d37
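With SEPARATE_NOBITS_REGION enabled, the separate invalidation described above could look roughly like the C sketch below. The __RW_* and __NOBITS_* linker symbols are named in the commit; inv_dcache_range() stands in for a dcache-invalidate-by-range helper and is an assumption here (the real work happens in the el3_entrypoint_common assembly):

```c
#include <stddef.h>
#include <stdint.h>

extern char __RW_START__[], __RW_END__[];		/* RW data of the loaded image */
extern char __NOBITS_START__[], __NOBITS_END__[];	/* separate NOBITS region      */

/* Assumed helper: invalidate the data cache over [addr, addr + size). */
extern void inv_dcache_range(uintptr_t addr, size_t size);

void invalidate_bl31_data_regions(void)
{
	/* RW region of the loaded image no longer spans the NOBITS area. */
	inv_dcache_range((uintptr_t)__RW_START__,
			 (size_t)(__RW_END__ - __RW_START__));

	/* NOBITS region: .bss, stacks, page tables and coherent memory. */
	inv_dcache_range((uintptr_t)__NOBITS_START__,
			 (size_t)(__NOBITS_END__ - __NOBITS_START__));
}
```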
# 79999040 | 12-Dec-2019 | Soby Mathew <soby.mathew@arm.com>
Merge "PIE: make call to GDT relocation fixup generalized" into integration
# da90359b | 26-Nov-2019 | Manish Pandey <manish.pandey2@arm.com>
PIE: make call to GDT relocation fixup generalized
When a firmware is compiled as a Position Independent Executable, it needs to request GDT fixup by passing the size of the memory region to the el3_entrypoint_common macro. The Global Descriptor Table fixup will be done early during the cold boot process of the primary core.
Currently only BL31 supports PIE, but in the future, when BL2_AT_EL3 is compiled as PIE, it can simply pass the fixup size to the common EL3 entrypoint macro to fix up the GDT.
The reason for this patch was to overcome the bug introduced by commit 330ead806, which called the fixup routine for each core, re-initializing global pointers and thus overwriting any changes done by the previous core.
Change-Id: I55c792cc3ea9e7eef34c2e4653afd04572c4f055
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
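A conceptual C sketch of the generalized flow: run the fixup once, on the primary core during cold boot, compute the load-time offset, and patch only entries that point inside the fixed-up region. The __GOT_START__/__GOT_END__ symbols and plat_is_my_cpu_primary() are assumptions used for illustration; the actual fixup is performed in assembly from el3_entrypoint_common.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

extern uintptr_t __GOT_START__[], __GOT_END__[];	/* assumed linker symbols */
extern bool plat_is_my_cpu_primary(void);		/* assumed platform query */

void fixup_got(uintptr_t link_base, uintptr_t runtime_base, size_t fixup_size)
{
	/* Run once, on the primary core, to avoid re-initializing global pointers. */
	if (!plat_is_my_cpu_primary())
		return;

	uintptr_t delta = runtime_base - link_base;
	if (delta == 0U)
		return;	/* loaded at the link address, nothing to patch */

	/* Relocate every table entry that points into the fixed-up region. */
	for (uintptr_t *entry = __GOT_START__; entry < __GOT_END__; entry++) {
		if ((*entry >= link_base) && (*entry < (link_base + fixup_size)))
			*entry += delta;
	}
}
```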
# 80003d86 | 07-Oct-2019 | Soby Mathew <soby.mathew@arm.com>
Merge "Explicitly disable the SPME bit in MDCR_EL3" into integration
# 2a7adf25 | 03-Oct-2019 | Petre-Ionut Tudor <petre-ionut.tudor@arm.com>
Explicitly disable the SPME bit in MDCR_EL3
Currently the MDCR_EL3 initialisation implicitly disables MDCR_EL3.SPME by using mov_imm.
This patch makes the SPME bit more visible by explicitly disabling it and documenting its use in different versions of the architecture.
Signed-off-by: Petre-Ionut Tudor <petre-ionut.tudor@arm.com>
Change-Id: I221fdf314f01622f46ac5aa43388f59fa17a29b3
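For reference, the explicit form of the change can be sketched in C as below; MDCR_EL3.SPME is bit [17] per the Arm ARM, and the helper is only illustrative of the intent rather than the actual mov_imm-based assembly:

```c
#include <stdint.h>

/* MDCR_EL3.SPME, bit [17]: Secure Performance Monitors Enable. */
#define MDCR_SPME_BIT	(UINT64_C(1) << 17)

/*
 * Clear SPME explicitly instead of relying on the bit merely being absent
 * from the mov_imm constant used to initialise MDCR_EL3.
 */
static inline uint64_t mdcr_el3_with_spme_disabled(uint64_t mdcr_el3)
{
	return mdcr_el3 & ~MDCR_SPME_BIT;
}
```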
# 34c4f86a | 03-Oct-2019 | Soby Mathew <soby.mathew@arm.com>
Merge "Add missing support for BL2_AT_EL3 in XIP memory" into integration
# 0a12302c | 27-May-2019 | Lionel Debieve <lionel.debieve@st.com>
Add missing support for BL2_AT_EL3 in XIP memory
Add the missing flag for aarch32 XIP memory mode. It was previously added for aarch64 only. Minor: correct the missing aarch64 flag.
Signed-off-by: Lionel Debieve <lionel.debieve@st.com>
Change-Id: Iac0a7581a1fd580aececa75f97deb894858f776f
# f52f73b3 | 12-Sep-2019 | Soby Mathew <soby.mathew@arm.com>
Merge "Invalidate dcache build option for bl2 entry at EL3" into integration
# b90f207a | 20-Aug-2019 | Hadi Asyrafi <muhammad.hadi.asyrafi.abdul.halim@intel.com>
Invalidate dcache build option for bl2 entry at EL3
Some platforms (e.g. Agilex) make use of CCU IPs which are only initialized during bl2_el3_early_platform_setup. Any cache operation before that will crash the platform. Hence, this provides a build option to skip the data cache invalidation upon BL2 entry at EL3.
Signed-off-by: Hadi Asyrafi <muhammad.hadi.asyrafi.abdul.halim@intel.com>
Change-Id: I2c924ed0589a72d0034714c31be8fe57237d1f06
# 30560911 | 23-Aug-2019 | Paul Beesley <paul.beesley@arm.com>
Merge "AArch64: Disable Secure Cycle Counter" into integration
# e290a8fc | 13-Aug-2019 | Alexei Fedorov <Alexei.Fedorov@arm.com>
AArch64: Disable Secure Cycle Counter
This patch fixes an issue where Secure world timing information could be leaked because the Secure Cycle Counter was not disabled. For ARMv8.5 the counter is disabled by setting the MDCR_EL3.SCCD bit on CPU cold/warm boot. For earlier architectures, the PMCR_EL0 register is saved/restored on Secure world entry/exit from/to the Non-secure state, and cycle counting is disabled by setting the PMCR_EL0.DP bit. The 'include/aarch64/arch.h' header file was tidied up and new ARMv8.5-PMU related definitions were added.
Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
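A C sketch of the pre-ARMv8.5 path described above: save PMCR_EL0 on Secure world entry, set PMCR_EL0.DP (bit [5]) so PMCCNTR_EL0 stops counting while counting is prohibited, and restore the saved value on exit. The accessors are local stand-ins, not TF-A's context-management code.

```c
#include <stdint.h>

/* PMCR_EL0.DP: disable the cycle counter when event counting is prohibited. */
#define PMCR_EL0_DP_BIT	(UINT64_C(1) << 5)

static inline uint64_t read_pmcr_el0(void)
{
	uint64_t v;
	__asm__ volatile("mrs %0, pmcr_el0" : "=r"(v));
	return v;
}

static inline void write_pmcr_el0(uint64_t v)
{
	__asm__ volatile("msr pmcr_el0, %0" :: "r"(v));
}

static uint64_t saved_pmcr_el0;

/* On entry to the Secure world from Non-secure state. */
void pmcr_save_and_disable_cycle_counting(void)
{
	saved_pmcr_el0 = read_pmcr_el0();
	write_pmcr_el0(saved_pmcr_el0 | PMCR_EL0_DP_BIT);
}

/* On exit back to Non-secure state. */
void pmcr_restore(void)
{
	write_pmcr_el0(saved_pmcr_el0);
}
```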
# 57bc6424 | 27-Feb-2019 | Antonio Niño Díaz <antonio.ninodiaz@arm.com>
Merge pull request #1829 from antonio-nino-diaz-arm/an/pauth
Add Pointer Authentication (ARMv8.3-PAuth) support to the TF
# 5283962e | 31-Jan-2019 | Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Add ARMv8.3-PAuth registers to CPU context
ARMv8.3-PAuth adds functionality that supports address authentication of the contents of a register before that register is used as the target of an indirect branch, or as a load.
This feature is supported only in AArch64 state.
This feature is mandatory in ARMv8.3 implementations.
This feature adds several registers to EL1. A new option called CTX_INCLUDE_PAUTH_REGS has been added to select whether the TF needs to save them during Non-secure <-> Secure world switches. This option must be enabled if the hardware has the registers, or the values will be leaked during world switches.
To prevent leaks, this patch also disables pointer authentication in the Secure world if CTX_INCLUDE_PAUTH_REGS is 0. Any attempt to use it will be trapped in EL3.
Change-Id: I27beba9907b9a86c6df1d0c5bf6180c972830855
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
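For reference, the EL1 state covered by CTX_INCLUDE_PAUTH_REGS is the five ARMv8.3-PAuth key pairs shown in the sketch below; the struct layout is purely illustrative and is not the actual cpu_context_t layout used by TF-A.

```c
#include <stdint.h>

/* PAuth key registers added at EL1 by ARMv8.3-PAuth, saved/restored on
 * Non-secure <-> Secure world switches when CTX_INCLUDE_PAUTH_REGS=1. */
typedef struct pauth_ctx {
	uint64_t apiakeylo_el1, apiakeyhi_el1;	/* instruction key A   */
	uint64_t apibkeylo_el1, apibkeyhi_el1;	/* instruction key B   */
	uint64_t apdakeylo_el1, apdakeyhi_el1;	/* data key A          */
	uint64_t apdbkeylo_el1, apdbkeyhi_el1;	/* data key B          */
	uint64_t apgakeylo_el1, apgakeyhi_el1;	/* generic (PACGA) key */
} pauth_ctx_t;
```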
# c8b96e4a | 27-Feb-2019 | Antonio Niño Díaz <antonio.ninodiaz@arm.com>
Merge pull request #1831 from antonio-nino-diaz-arm/an/sccd
Disable processor Cycle Counting in Secure state
# ed4fc6f0 | 18-Feb-2019 | Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Disable processor Cycle Counting in Secure state
In a system with ARMv8.5-PMU implemented:

- If EL3 is using AArch32, setting SDCR.SCCD to 1 disables counting in Secure state in PMCCNTR.
- If EL3 is using AArch64, setting MDCR_EL3.SCCD to 1 disables counting in Secure state in PMCCNTR_EL0.
So far this effect has been achieved by setting PMCR_EL0.DP (in AArch64) or PMCR.DP (in AArch32) to 1 instead, but this isn't considered secure as any EL can change that value.
Change-Id: I82cbb3e48f2e5a55c44d9c4445683c5881ef1f6f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
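A minimal C sketch of the AArch64 case, assuming an ARMv8.5-PMU implementation: MDCR_EL3.SCCD is bit [23] per the Arm ARM, and the accessors below are local helpers rather than TF-A's (the real change lives in the cold/warm boot path).

```c
#include <stdint.h>

/* MDCR_EL3.SCCD: Secure Cycle Counter Disable (ARMv8.5-PMU). */
#define MDCR_SCCD_BIT	(UINT64_C(1) << 23)

static inline uint64_t read_mdcr_el3(void)
{
	uint64_t v;
	__asm__ volatile("mrs %0, mdcr_el3" : "=r"(v));
	return v;
}

static inline void write_mdcr_el3(uint64_t v)
{
	__asm__ volatile("msr mdcr_el3, %0" :: "r"(v));
}

/* Called once per CPU during EL3 initialisation. */
void disable_secure_cycle_counting(void)
{
	write_mdcr_el3(read_mdcr_el3() | MDCR_SCCD_BIT);
	__asm__ volatile("isb" ::: "memory");
}
```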
# 9a207532 | 04-Jan-2019 | Antonio Niño Díaz <antonio.ninodiaz@arm.com>
Merge pull request #1726 from antonio-nino-diaz-arm/an/includes
Sanitise includes across codebase
# f5478ded | 17-Dec-2018 | Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Reorganize architecture-dependent header files
The architecture-dependent header files in include/lib/${ARCH} and include/common/${ARCH} have been moved to include/arch/${ARCH}.
Change-Id: I96f30fdb80b191a51448ddf11b1d4a0624c03394
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>