| 745c129a | 09-Jul-2024 |
Andre Przywara <andre.przywara@arm.com>
feat(rmmd): add RMM_RESERVE_MEMORY SMC handler
At the moment, any memory required by an R-EL2 manager (RMM) needs to be known at compile time: that sets the size of the .data and .bss segments. Some resources depend on the particular machine the RMM will be running on; the prime example is TF-RMM's granule array, which needs to know the maximum supported memory beforehand. Other data structures might depend on the number of CPU cores.
To provide more flexibility while keeping the memory footprint as small as possible, let's introduce a memory reservation SMC. Any RMM implementation can ask EL3 for some memory, and gets the physical address of a usable chunk of memory back. This must happen at RMM boot time, that is, before the RMM concludes the boot phase with the RMM_BOOT_COMPLETE SMC call. Also, there is no provision to free memory again: this is not needed for the use case of sizing platform resources, and avoiding it spares us the complexity of a full-fledged memory allocator.
Add the new RMM_RESERVE_MEMORY command to the implementation-defined RMM-EL3 SMC interface, both in code and documentation. The actual memory reservation is left to a platform implementation, but a simple implementation is provided, and is already used for the FVP platform: it just picks the next matching chunk of memory from the top end of the RMM carveout. This way the memory reservation grows down from the end of the carveout, in a stack-like fashion, until it reaches the end of the RMM payload, located at the beginning of the carveout. Since secondary cores might also reserve memory at boot time, a spinlock protects the simple allocation algorithm. Other platforms can choose to provide a more sophisticated reservation algorithm, for instance one taking NUMA locality into account.
This patch just provides the call; at this point there is no obligation to use the feature, although future TF-RMM versions would rely on it.
Change-Id: I096ac8870ee38f44e18850779fcae829a43a8fd1
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
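The stack-like, top-down reservation scheme the commit describes can be sketched roughly as follows. This is an illustrative model, not the actual TF-A code: the function name rmm_reserve_memory(), the carveout bounds, and the return convention are all assumptions; the real implementation would additionally hold a spinlock around the allocation, since secondary cores may reserve memory during their own boot.

```c
#include <stdint.h>

/* Assumed layout: the RMM payload sits at the bottom of the carveout. */
#define RMM_CARVEOUT_END  0x90000000ULL  /* illustrative top of carveout   */
#define RMM_PAYLOAD_END   0x80400000ULL  /* illustrative end of RMM image  */

/* Current top of the free region; reservations grow downwards from here. */
static uint64_t reserve_top = RMM_CARVEOUT_END;

/* Return the physical base of a newly reserved, aligned chunk, or 0 when
 * the reservation would collide with the RMM payload below it. */
static uint64_t rmm_reserve_memory(uint64_t size, uint64_t align)
{
    /* Move down by 'size', then round down to the requested alignment
     * (align is assumed to be a power of two). */
    uint64_t base = (reserve_top - size) & ~(align - 1ULL);

    if (base < RMM_PAYLOAD_END)
        return 0;          /* carveout exhausted */

    reserve_top = base;    /* stack-like: next reservation starts lower */
    return base;
}
```

Each call simply lowers the watermark, which is why no free operation is needed: the reservations are made once, at RMM boot, and live forever.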
| 89d979ce | 12-Jun-2025 |
Andre Przywara <andre.przywara@arm.com>
feat(rmmd): add per-CPU activation token
To accommodate Live Firmware Activation (LFA), the RMM needs to preserve some state between an old and a new copy of itself. The state which needs to be preserved, and its organisation, is completely under the control of the RMM; it will differ between RMM implementations and even between releases.
To keep the interface small, generic and robust, introduce an "activation token": an opaque 64-bit value that gets passed to each RMM as part of the boot/init phase. On the first initialisation, after a cold boot, this value is initialised to 0. The RMM is expected to pass the actual value (for instance a pointer to a persistent data structure) back to BL31 as an additional argument of the RMM_BOOT_COMPLETE SMC call. On subsequent live activations, this updated token value gets passed to the (updated) RMM init routines, using the respective CPU registers.
Add an activation_token member to the (per-CPU) RMM context, and update its value with the value passed via the x2 register at the RMM_BOOT_COMPLETE SMC call. Then pass that value into the RMM, either via x4 (on the primary core) or via x1 (on secondary cores). How the value is used or updated on the RMM side is of no further concern to BL31; it just passes the opaque value around. The TRP is very particular about the values in the first three registers, so let it ignore the value of x1 on a warm boot, to avoid a panic.
Change-Id: Ie8d96a046b74adb00e2ca5ce3b8458465bacf2b2
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
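The token round-trip in BL31 amounts to a per-CPU store and load. A minimal sketch, with assumed names (rmm_context, rmmd_boot_complete(), rmmd_token_for_init()) and a fixed core count, none of which are the actual TF-A definitions:

```c
#include <stdint.h>

#define PLATFORM_CORE_COUNT 4U  /* illustrative core count */

/* Per-CPU RMM context; the real struct holds much more state. */
struct rmm_context {
    uint64_t activation_token;  /* opaque: BL31 never interprets it */
};

static struct rmm_context rmm_ctx[PLATFORM_CORE_COUNT];

/* Called when the RMM issues RMM_BOOT_COMPLETE; x2 carries the token
 * the RMM wants preserved across a live activation. */
static void rmmd_boot_complete(unsigned int cpu, uint64_t x2)
{
    rmm_ctx[cpu].activation_token = x2;
}

/* On a later (live) activation, the saved token is passed back into the
 * new RMM image: via x4 on the primary core, via x1 on secondaries.
 * After a cold boot it is still 0, as the commit requires. */
static uint64_t rmmd_token_for_init(unsigned int cpu)
{
    return rmm_ctx[cpu].activation_token;
}
```

Because BL31 only stores and forwards the value, the RMM is free to encode anything in it, typically a pointer to its own persistent state.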
| 4e5247c1 | 08-Apr-2025 |
Yeoreum Yun <yeoreum.yun@arm.com>
feat(el3-spmc): deliver TPM event log via hob list
Add a MM_TPM_EVENT_LOG HOB type and deliver the TPM measured event logs, passed via the secure transfer list, to the secure partition via the HOB list in SPMC_AT_EL3.
This way, the secure partition can retrieve the event log measured by TF-A.
Change-Id: I14f7f8cb8f8f54e07a13f40748ca551bcd265a51
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
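Conceptually, the HOB (Hand-Off Block) is just a typed header followed by a payload describing where the event log lives. The generic header layout below follows the UEFI PI specification; the MM_TPM_EVENT_LOG type value, the payload struct, and build_event_log_hob() are illustrative assumptions, not the definitions this patch adds:

```c
#include <stdint.h>

/* Generic HOB header, per the UEFI PI specification. */
struct hob_header {
    uint16_t type;      /* HOB type, e.g. a new MM_TPM_EVENT_LOG value */
    uint16_t length;    /* total length of this HOB in bytes */
    uint32_t reserved;
};

/* Assumed payload: location of the measured event log that BL31 passed
 * on via the secure transfer list. */
struct tpm_event_log_hob {
    struct hob_header header;
    uint64_t log_base;  /* physical address of the event log */
    uint64_t log_size;  /* size of the event log in bytes */
};

#define MM_TPM_EVENT_LOG_TYPE 0x000CU  /* illustrative type value */

static void build_event_log_hob(struct tpm_event_log_hob *hob,
                                uint64_t base, uint64_t size)
{
    hob->header.type = MM_TPM_EVENT_LOG_TYPE;
    hob->header.length = (uint16_t)sizeof(*hob);
    hob->header.reserved = 0;
    hob->log_base = base;
    hob->log_size = size;
}
```

The secure partition then walks the HOB list at boot, matches on the type field, and reads the log from the recorded address.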
| bb9fc8c0 | 05-Feb-2025 |
Jay Monkman <jmonkman@google.com>
fix(el3-spmc): fixed x8-x17 register handling for FFA 1.2
Change spmd_smc_switch_state() to return all 18 registers (x0-x17) for 64-bit calls, and set x8-x17 to zero if necessary.
BREAKING CHANGE: Zeroes or forwards a different set of registers, depending on the FF-A versions of the source and destination. For example, a call from a v1.1 caller to a v1.2 destination will zero out the extended registers, which differs from the old behaviour of forwarding everything to an EL2 SPMC but only x0-x7 to the EL3 SPMC.
Change-Id: Ic31755af0fbb117b0ed74565fba9decebab353c4
Signed-off-by: Jay Monkman <jmonkman@google.com>
Signed-off-by: Andrei Homescu <ahomescu@google.com>
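The version-dependent handling can be pictured as: forward x0-x17 only when both endpoints speak FF-A v1.2 or later (which extended direct messages to x0-x17), otherwise zero x8-x17 so no stale state leaks to a pre-v1.2 endpoint. The sketch below uses an assumed version encoding and helper names; it is not the actual spmd_smc_switch_state() code:

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* Assumed FF-A version encoding: major in bits [30:16], minor in [15:0]. */
#define FFA_VERSION(major, minor) (((uint32_t)(major) << 16) | (minor))

/* v1.2 introduced the use of x8-x17 for direct messages. */
static bool uses_ext_regs(uint32_t version)
{
    return version >= FFA_VERSION(1, 2);
}

/* Zero the extended registers x8-x17 unless both the source and the
 * destination endpoint support them; x0-x7 are always forwarded. */
static void switch_state_regs(uint64_t regs[18],
                              uint32_t src_version, uint32_t dst_version)
{
    if (!uses_ext_regs(src_version) || !uses_ext_regs(dst_version))
        memset(&regs[8], 0, 10 * sizeof(uint64_t));
}
```

This is what makes the change breaking: the set of registers an endpoint observes now depends on the negotiated versions on both sides, not on whether the SPMC sits at EL2 or EL3.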