4 Trusted Firmware-A (TF-A) implements a subset of the Trusted Board Boot
8 The TBB sequence starts when the platform is powered on and runs up
9 to the stage where it hands off control to firmware running in the normal
10 world in DRAM. This is the cold boot path.
12 TF-A also implements the `PSCI`_ as a runtime service. PSCI is the interface
15 access TF-A runtime services via the Arm SMC (Secure Monitor Call) instruction.
16 The SMC instruction must be used as mandated by the SMC Calling Convention
20 in either security state. The details of the interrupt management framework
23 TF-A also implements a library for setting up and managing the translation
30 The descriptions in this chapter are for the Arm TrustZone architecture.
31 For changes to the firmware design for the `Arm Confidential Compute
32 Architecture (Arm CCA)`_, please refer to the chapter :ref:`Realm Management Extension (RME)`.
38 The cold boot path starts when the platform is physically turned on. If
39 ``COLD_BOOT_SINGLE_CPU=0``, one of the CPUs released from reset is chosen as the
40 primary CPU, and the remaining CPUs are considered secondary CPUs. The primary
42 executed by the primary CPU, other than essential CPU initialization executed by
44 the primary CPU has performed enough initialization to boot them.
46 Refer to the :ref:`CPU Reset` for more information on the effect of the
49 The cold boot path in this implementation of TF-A depends on the execution
66 combination of the following types of memory regions. Each bootloader stage uses
71 - Regions accessible from only the secure state. For example, trusted SRAM and
72 ROM. The FVPs also implement the trusted DRAM which is statically
73 configured. Additionally, the Base FVPs and Juno development platform
74 configure the TrustZone Controller (TZC) to create a region in the DRAM
75 which is accessible only from the secure state.
77 The sections below provide the following details:
80 - initialization and execution of the first three stages during cold boot
81 - specification of the EL3 Runtime Software (BL31 for AArch64 and BL32 for
83 Firmware in place of the provided BL1 and BL2
88 Each of the Boot Loader stages may be dynamically configured if required by the
94 An example is the "dtb-registry" node, which contains the information about
95 the other device tree configurations (load-address, size, image_id).
97 stages and also by the Normal World Rich OS.
106 The Arm development platforms use the Flattened Device Tree format for the
109 Each Boot Loader stage can pass up to 4 arguments via registers to the next
110 stage. BL2 passes the list of the next images to execute to the *EL3 Runtime
111 Software* (BL31 for AArch64 and BL32 for AArch32) via `arg0`. All the other
112 arguments are platform defined. The Arm development platforms use the following
115 - BL1 passes the address of a meminfo_t structure to BL2 via ``arg1``. This
116 structure contains the memory layout available to BL2.
117 - When dynamic configuration files are present, the firmware configuration for
118 the next Boot Loader stage is populated in the first available argument and
119 the generic hardware configuration is passed in the next available argument.
127 to BL31. Note that ``arg0`` is used to pass the list of executable images.
130 - For other BL3x images, if the firmware configuration file is loaded by
133 - If ``SPMC_AT_EL3`` is enabled, populate the BL32 image base, size and maximum
134 limit in the entry point information, since there is no platform function
136 ``arg4`` since the generic code uses ``arg1`` for stashing the SP manifest
139 - In the case of the Arm FVP platform, the FW_CONFIG address is passed in ``arg1`` to
140 BL31/SP_MIN, and the SOC_FW_CONFIG and HW_CONFIG details are retrieved
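As a sketch of the ``meminfo_t`` hand-off mentioned above (BL1 passing the memory layout to BL2 via ``arg1``), the structure might be populated as follows. Only the ``total_base``/``total_size`` field names follow TF-A's ``meminfo_t``; the helper name and the reservation scheme are illustrative assumptions:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal sketch of the memory-layout descriptor BL1 hands to BL2.
 * Field names follow TF-A's meminfo_t, which describes the extent of
 * trusted SRAM available to the next stage. */
typedef struct meminfo {
    uintptr_t total_base;   /* base of the usable memory region */
    size_t    total_size;   /* size of the usable memory region */
} meminfo_t;

/* Hypothetical helper: BL1 describes the SRAM left over after reserving
 * its own runtime footprint at the top of trusted SRAM. */
static void bl1_init_bl2_mem_layout(meminfo_t *bl2_mem,
                                    uintptr_t sram_base, size_t sram_size,
                                    size_t bl1_rw_size)
{
    bl2_mem->total_base = sram_base;
    bl2_mem->total_size = sram_size - bl1_rw_size;
}
```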
146 This stage begins execution from the platform's reset vector at EL3. The reset
150 On the Arm development platforms, BL1 code starts execution from the reset
151 vector defined by the constant ``BL1_RO_BASE``. The BL1 data section is copied
152 to the top of trusted SRAM as defined by the constant ``BL1_RW_BASE``.
160 boot and a cold boot. This is done using platform-specific mechanisms (see the
161 ``plat_get_my_entrypoint()`` function in the :ref:`Porting Guide`). In the case
163 entrypoint. In the case of a cold boot, the secondary CPUs are placed in a safe
164 platform-specific state (see the ``plat_secondary_cold_boot_setup()`` function in
165 the :ref:`Porting Guide`) while the primary CPU executes the remaining cold boot
166 path as described in the following sections.
168 This step only applies when ``PROGRAMMABLE_RESET_ADDRESS=0``. Refer to the
169 :ref:`CPU Reset` for more information on the effect of the
181 a status code in the general purpose register ``X0/R0`` and call the
182 ``plat_report_exception()`` function (see the :ref:`Porting Guide`). The
220 The ``plat_report_exception()`` implementation on the Arm FVP port programs
221 the Versatile Express System LED register in the following format to
222 indicate the occurrence of an unexpected exception:
229 SYS_LED[7:3] - Exception Class (Sync/Async & origin). This is the value
230 of the status code
232 A write to the LED register reflects in the System LEDs (S6LED0..7) in the
233 CLCD window of the FVP.
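The encoding above can be sketched as a simple bit-packing helper. Only the placement of the status code in bits [7:3] is taken from the text; treating the remaining low bits as an opaque field supplied by the caller is an assumption for illustration:

```c
#include <stdint.h>

/* Sketch of packing the exception report into the SYS_LED register
 * format described above: bits [7:3] carry the exception class, i.e.
 * the status code reported in X0/R0. The low three bits (security state
 * and exception level on the real FVP port) are passed in by the caller. */
static inline uint32_t sys_led_value(uint32_t status_code, uint32_t low_bits)
{
    return ((status_code & 0x1fu) << 3) | (low_bits & 0x7u);
}
```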
235 BL1 does not expect to receive any exceptions other than the SMC exception.
236 For the latter, BL1 installs a simple stub. The stub expects to receive a
237 limited set of SMC types (determined by their function IDs in the general
242 - All SMCs listed in section "BL1 SMC Interface" in the :ref:`Firmware Update (FWU)`
250 BL1 calls the ``reset_handler`` macro/function which in turn calls the CPU
251 specific reset handler function (see the section: "CPU specific operations
257 On Arm platforms, BL1 performs the following platform initializations:
259 - Enable the Trusted Watchdog.
260 - Initialize the console.
261 - Configure the Interconnect to enable hardware coherency.
262 - Enable the MMU and map the memory it needs to access.
263 - Configure any required platform storage to load the next bootloader image
265 - If the BL1 dynamic configuration file, ``TB_FW_CONFIG``, is available, then
266 load it to the platform defined address and make it available to BL2 via
268 - Configure the system timer and program the `CNTFRQ_EL0` for use by NS-BL1U
276 required or to proceed with the normal boot process. If the platform code
277 returns ``BL2_IMAGE_ID`` then the normal boot sequence is executed as described
278 in the next section, else BL1 assumes that :ref:`Firmware Update (FWU)` is
279 required and execution passes to the first image in the
281 of the next image by calling ``bl1_plat_get_image_desc()``. The image descriptor
282 contains an ``entry_point_info_t`` structure, which BL1 uses to initialize the
283 execution state of the next image.
288 In the normal boot flow, BL1 execution continues as follows:
290 #. BL1 prints the following string from the primary CPU to indicate successful
291 execution of the BL1 stage:
298 platform-specific base address. Prior to the load, BL1 invokes
299 ``bl1_plat_handle_pre_image_load()`` which allows the platform to update or
300 use the image information. If the BL2 image file is not present or if
301 there is not enough free trusted SRAM, the following error message is
310 populate the necessary arguments for BL2, which may also include the memory
311 layout. Further description of the memory layout can be found later
314 #. BL1 passes control to the BL2 image at Secure EL1 (for AArch64) or at
328 For AArch64, BL2 performs the minimal architectural initialization required
330 access to Floating Point and Advanced SIMD registers by setting the
333 For AArch32, the minimal architectural initialization required for subsequent
340 On Arm platforms, BL2 performs the following platform initializations:
342 - Initialize the console.
345 - Enable the MMU and map the memory it needs to access.
347 - Reserve some memory for passing information to the next bootloader image
349 - Define the extents of memory available for loading each subsequent
357 BL2 generic code loads the images based on the list of loadable images
358 provided by the platform. BL2 passes the list of executable images
359 provided by the platform to the next handover BL image.
361 The list of loadable images provided by the platform may also contain
363 needed in the ``bl2_plat_handle_post_image_load()`` function. These
365 by updating the corresponding entrypoint information in this function.
371 reset and system control. BL2 loads the optional SCP_BL2 image from platform
373 handling of SCP_BL2 is platform specific. For example, on the Juno Arm
374 development platform port the image is transferred into SCP's internal memory
375 using the Boot Over MHU (BOM) protocol after being loaded in the trusted SRAM
376 memory. The SCP executes SCP_BL2 and signals to the Application Processor (AP)
382 BL2 loads the EL3 Runtime Software image from platform storage into a platform-
383 specific address in trusted SRAM. If there is not enough memory to load the
389 BL2 loads the optional BL32 image from platform storage into a platform-
390 specific region of secure memory. The image executes in the secure world. BL2
391 relies on BL31 to pass control to the BL32 image, if present. Hence, BL2
392 populates a platform-specific area of memory with the entrypoint/load-address
393 of the BL32 image. The value of the Saved Processor Status Register (``SPSR``)
394 for entry into BL32 is not determined by BL2; it is initialized by the
401 BL2 loads the BL33 image (e.g. UEFI or other test or boot software) from
402 platform storage into non-secure memory as defined by the platform.
406 memory with the entrypoint and Saved Program Status Register (``SPSR``) of the
407 normal world software image. The entrypoint is the load address of the BL33
408 image. The ``SPSR`` is determined as specified in Section 5.13 of the
409 `PSCI`_. This information is passed to the EL3 Runtime Software.
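The ``SPSR`` derivation above can be sketched as follows. The shift values mirror TF-A's ``SPSR_64()`` macro (with the AArch64 ``MODE_RW`` bit being zero); the specific constants are assumptions for illustration, not the normative PSCI text:

```c
#include <stdint.h>

/* Sketch of deriving the BL33 entry SPSR: target the highest available
 * EL (EL2 here), select SP_ELx, and mask all of DAIF, as the PSCI spec
 * requires for entry into the normal world. */
#define MODE_SP_ELX             1u   /* use SP_ELx rather than SP_EL0 */
#define MODE_EL_SHIFT           2u
#define SPSR_DAIF_SHIFT         6u
#define DISABLE_ALL_EXCEPTIONS  0xfu /* mask D, A, I and F */

static inline uint32_t spsr_64(uint32_t el, uint32_t sp, uint32_t daif)
{
    /* The MODE_RW (AArch64) field is zero, so it is omitted here. */
    return (el << MODE_EL_SHIFT) | sp | (daif << SPSR_DAIF_SHIFT);
}
/* e.g. spsr_64(2, MODE_SP_ELX, DISABLE_ALL_EXCEPTIONS) == 0x3c9 (EL2h) */
```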
416 #. BL2 passes control back to BL1 by raising an SMC, providing BL1 with the
417 BL31 entrypoint. The exception is handled by the SMC exception handler
420 #. BL1 turns off the MMU and flushes the caches. It clears the
421 ``SCTLR_EL3.M/I/C`` bits, flushes the data cache to the point of coherency
422 and invalidates the TLBs.
424 #. BL1 passes control to BL31 at the specified entrypoint at EL3.
429 Some platforms have a non-TF-A Boot ROM that expects the next boot stage
434 when the build flag ``RESET_TO_BL2`` is enabled.
437 #. BL2 includes the reset code and the mailbox mechanism to differentiate
438 cold boot and warm boot. It runs at EL3 doing the arch
441 #. BL2 does not receive the meminfo information from BL1 anymore. This
442 information can be passed by the Boot ROM or be internal to the
445 #. Since BL2 executes at EL3, BL2 jumps directly to the next image,
446 instead of invoking the RUN_IMAGE SMC call.
449 We assume three different types of Boot ROM support on the platform:
451 #. The Boot ROM always jumps to the same address, for both cold
454 linker script defines the symbols __TEXT_RESIDENT_START__ and
455 __TEXT_RESIDENT_END__ that allow the platform to configure
456 the memory map correctly.
457 #. The platform has some mechanism to indicate the jump address to the
458 Boot ROM. Platform code can then program the jump address with
460 #. The platform has some mechanism to program the reset address using
461 the PROGRAMMABLE_RESET_ADDRESS feature. Platform code can then
462 program the reset address with psci_warmboot_entrypoint during
463 cold boot, bypassing the boot ROM for warm boot.
465 In the last 2 cases, no part of BL2 needs to remain resident at
466 runtime. In the first 2 cases, we expect the Boot ROM to be able to
470 This functionality can be tested with FVP loading the image directly
471 in memory and changing the address where the system jumps at reset.
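The second and third Boot ROM cases above can be sketched as programming a jump (or reset) address during cold boot so that warm-booting CPUs bypass BL2. The mailbox variable and the helper name below are hypothetical stand-ins for a platform-defined register; ``psci_warmboot_entrypoint`` is the TF-A symbol named above, represented here by a placeholder:

```c
#include <stdint.h>

/* Placeholder for the TF-A warm boot entrypoint symbol. */
static void psci_warmboot_entrypoint(void) { }

/* Stands in for a platform-defined jump-address register or mailbox. */
static uintptr_t plat_jump_addr_mailbox;

/* Hypothetical platform hook: during cold boot, record the warm boot
 * entrypoint so subsequent warm boots skip the cold boot path. */
static void plat_program_warmboot_addr(void)
{
    plat_jump_addr_mailbox = (uintptr_t)&psci_warmboot_entrypoint;
}
```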
477 With this configuration, the FVP behaves like a platform of the first type,
478 where the Boot ROM always jumps to the same address. For simplification,
499 BL31 initializes the per-CPU data framework, which provides a cache of
504 It then replaces the exception vectors populated by BL1 with its own. BL31
506 is the only mechanism to access the runtime services implemented by BL31 (PSCI
507 for example). BL31 checks each SMC for validity as specified by the
508 `SMC Calling Convention`_ before passing control to the required SMC
511 BL31 programs the ``CNTFRQ_EL0`` register with the clock frequency of the system
512 counter, which is provided by the platform.
520 On Arm platforms, this consists of the following:
522 - Initialize the console.
523 - Configure the Interconnect to enable hardware coherency.
524 - Enable the MMU and map the memory it needs to access.
525 - Initialize the generic interrupt controller.
526 - Initialize the power controller device.
527 - Detect the system topology.
532 BL31 is responsible for initializing the runtime services. One of them is PSCI.
534 As part of the PSCI initializations, BL31 detects the system topology. It also
535 initializes the data structures that implement the state machine used to track
536 the state of power domain nodes. The state can be one of ``OFF``, ``RUN`` or
537 ``RETENTION``. All secondary CPUs are initially in the ``OFF`` state. The cluster
538 that the primary CPU belongs to is ``ON``; any other cluster is ``OFF``. It also
539 initializes the locks that protect them. BL31 accesses the state of a CPU or
540 cluster immediately after reset and before the data cache is enabled in the
545 detail in the "EL3 runtime services framework" section below.
547 Details about the status of the PSCI implementation are provided in the
556 once the runtime services are fully initialized. BL31 invokes such a
560 Details on BL32 initialization and the SPD's role are described in the
566 EL3 Runtime Software initializes the EL2 or EL1 processor context for normal-
568 the non-secure execution state. EL3 Runtime Software uses the entrypoint
569 information provided by BL2 to jump to the Non-trusted firmware image (BL33)
570 at the highest available Exception Level (EL2 if available, otherwise EL1).
576 would like to use TF-A BL31 for the EL3 Runtime Software. To enable this
578 interface between the Trusted Boot Firmware and BL31.
580 Future changes to the BL31 interface will be done in a backwards compatible
587 This function must only be called by the primary CPU.
589 On entry to this function the calling primary CPU must be executing in AArch64
599 X0 and X1 can be used to pass information from the Trusted Boot Firmware to the
610 Use of the X0 and X1 parameters
615 used by the common BL31 code.
617 The convention is that ``X0`` conveys information regarding the BL31, BL32 and
618 BL33 images from the Trusted Boot firmware and ``X1`` can be used for other
626 This information is required until the start of execution of BL33. This
628 the platform code in BL31, or provided in a platform defined memory location
629 by the Trusted Boot firmware, or passed from the Trusted Boot Firmware via the
631 the CPU caches if it is provided by an earlier boot stage and then accessed by
632 BL31 platform code before the caches are enabled.
635 ``X0`` and the Arm development platforms interpret this in the BL31 platform
641 BL31 does not depend on the enabled state of the MMU, data caches or
645 Data structures used in the BL31 cold boot interface
648 In the cold boot flow, ``entry_point_info`` is used to represent the execution
649 state of an image; that is, the state of general purpose registers, PC, and
678 evolution of the structures and the firmware images. For example, a version of
679 BL31 that can interpret the BL3x image information from different versions of
682 more details about the firmware images.
684 To support these scenarios the structures are versioned and sized, which enables
691 uint8_t type; /* type of the structure */
697 In `entry_point_info`, Bits 0 and 5 of ``attr`` field are used to encode the
698 security state; in other words, whether the image is to be executed in Secure,
702 code that allocates and populates these structures must set the header fields
703 appropriately; the ``SET_PARAM_HEAD()`` macro is defined to simplify this
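The versioned and sized header described above can be sketched as follows. The ``param_header`` layout and the ``SET_PARAM_HEAD()`` shape follow TF-A's definitions; the trimmed-down ``entry_point_info`` is a simplification for illustration (the real structure also carries the SPSR and register arguments):

```c
#include <stdint.h>

/* Versioned/sized header carried by all structures in the interface. */
typedef struct param_header {
    uint8_t  type;     /* type of the structure */
    uint8_t  version;  /* version of this structure */
    uint16_t size;     /* size of this structure in bytes */
    uint32_t attr;     /* attributes: bits 0 and 5 encode the security state */
} param_header_t;

/* Helper that populates the header consistently, as described above. */
#define SET_PARAM_HEAD(_p, _type, _ver, _attr) do {      \
        (_p)->h.type = (uint8_t)(_type);                 \
        (_p)->h.version = (uint8_t)(_ver);               \
        (_p)->h.size = (uint16_t)sizeof(*(_p));          \
        (_p)->h.attr = (uint32_t)(_attr);                \
    } while (0)

/* Simplified entry_point_info: header plus the entrypoint PC only. */
typedef struct entry_point_info {
    param_header_t h;
    uintptr_t pc;
} entry_point_info_t;
```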
710 the platform power management code with a Warm boot initialization
711 entry-point, to be invoked by the CPU immediately after the reset handler.
712 On entry to the Warm boot initialization function the calling CPU must be in
722 The PSCI implementation will initialize the processor state and ensure that the
730 documented and stable interface between the Trusted Boot Firmware and the
733 Future changes to the entrypoint interface will be done in a backwards
740 This function must only be called by the primary CPU.
742 On entry to this function the calling primary CPU must be executing in AArch32
750 R0 and R1 are used to pass information from the Trusted Boot Firmware to the
758 Use of the R0 and R1 parameters
761 The parameters are platform specific and the convention is that ``R0`` conveys
762 information regarding the BL3x images from the Trusted Boot firmware and ``R1``
770 the AArch32 EL3 Runtime Software, or provided in a platform defined memory
771 location by the Trusted Boot firmware, or passed from the Trusted Boot Firmware
772 via the Cold boot Initialization parameters. This data may need to be cleaned
773 out of the CPU caches if it is provided by an earlier boot stage and then
774 accessed by AArch32 EL3 Runtime Software before the caches are enabled.
776 When using AArch32 EL3 Runtime Software, the Arm development platforms pass a
783 AArch32 EL3 Runtime Software must not depend on the enabled state of the MMU,
791 of ``bl31_params``. The ``bl_params`` structure is based on the convention
799 If TF-A BL1 is used and the PROGRAMMABLE_RESET_ADDRESS build flag is false,
800 then AArch32 EL3 Runtime Software must ensure that BL1 branches to the warm
801 boot entrypoint by arranging for the BL1 platform function,
804 In this case, the warm boot entrypoint must be in AArch32 EL3, little-endian
813 ``psci_warmboot_entrypoint()`` function. In that case, the platform must fulfil
814 the pre-requisites mentioned in the :ref:`Porting Guide`.
819 Software executing in the non-secure state and in the secure state at exception
820 levels lower than EL3 will request runtime services using the Secure Monitor
821 Call (SMC) instruction. These requests will follow the convention described in
822 the SMC Calling Convention PDD (`SMCCC`_). The `SMCCC`_ assigns function
826 The EL3 runtime services framework enables the development of services by
828 The following sections describe the framework which facilitates the
832 The design of the runtime services depends heavily on the concepts and
833 definitions described in the `SMCCC`_, in particular SMC Function IDs, Owning
834 Entity Numbers (OEN), Fast and Yielding calls, and the SMC32 and SMC64 calling
839 not all been instantiated in the current implementation.
843 This service is for management of the entire system. The Power State
844 Coordination Interface (`PSCI`_) is the first set of standard service calls
850 it also requires a *Secure Monitor* at EL3 to switch the EL1 processor
851 context between the normal world (EL1/EL2) and trusted world (Secure-EL1).
853 `SMCCC`_ provides for such SMCs with the Trusted OS Call and Trusted
856 The interface between the EL3 Runtime Software and the Secure-EL1 Payload is
857 not defined by the `SMCCC`_ or any other standard. As a result, each
859 service - within TF-A this service is referred to as the Secure-EL1 Payload
863 (TSPD). Details of SPD design and TSP/TSPD operation are described in the
874 described in the `SMCCC`_.
879 A runtime service is registered using the ``DECLARE_RT_SVC()`` macro, specifying
880 the name of the service, the range of OENs covered, the type of service and
881 handler functions. This macro instantiates a ``const struct rt_svc_desc`` for the service with these details in a special section, which enables
883 the framework to find all service descriptors included into BL31.
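The registration described above can be sketched as follows. The descriptor field names follow TF-A's ``rt_svc_desc``; the exact section name, the simplified handler signature, and the ``demo_svc`` service are assumptions for illustration:

```c
#include <stdint.h>

typedef int32_t (*rt_svc_init_t)(void);
typedef uintptr_t (*rt_svc_handle_t)(uint32_t smc_fid, ...);

/* Sketch of the descriptor instantiated by DECLARE_RT_SVC(). */
typedef struct rt_svc_desc {
    uint8_t start_oen;        /* first OEN served by this service */
    uint8_t end_oen;          /* last OEN served by this service */
    uint8_t call_type;        /* fast or yielding */
    const char *name;
    rt_svc_init_t init;       /* invoked once on the primary CPU */
    rt_svc_handle_t handle;   /* invoked for each SMC */
} rt_svc_desc_t;

/* Placing the descriptor in a dedicated section is what lets the BL31
 * linker script collect all descriptors into a single array. */
#define DECLARE_RT_SVC(_name, _start, _end, _type, _setup, _smch)  \
    static const rt_svc_desc_t _name                               \
        __attribute__((section(".rt_svc_descs"), used)) = {        \
            .start_oen = (_start), .end_oen = (_end),              \
            .call_type = (_type), .name = #_name,                  \
            .init = (_setup), .handle = (_smch) }

/* Hypothetical example registration for a service owning OEN 4. */
static int32_t demo_init(void) { return 0; }
static uintptr_t demo_handle(uint32_t smc_fid, ...) { return smc_fid; }
DECLARE_RT_SVC(demo_svc, 4, 4, 0, demo_init, demo_handle);
```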
885 The specific service for a SMC Function is selected based on the OEN and call
886 type of the Function ID, and the framework uses that information in the service
887 descriptor to identify the handler for the SMC Call.
889 The service descriptors do not include information to identify the precise set
890 of SMC function identifiers supported by this service implementation, the
891 security state from which such calls are valid, or the capability to support
893 to these aspects of an SMC call is the responsibility of the service
894 implementation; the framework is focused on integrating services from
895 different providers and minimizing the time taken by the framework before the
898 Details of the parameters, requirements and behavior of the initialization and
899 call handling functions are provided in the following sections.
904 ``runtime_svc_init()`` in ``runtime_svc.c`` initializes the runtime services
905 framework running on the primary CPU during cold boot as part of the BL31
908 Initialization involves validating each of the declared runtime service
909 descriptors, calling the service initialization function and populating the
910 index used for runtime lookup of the service.
912 The BL31 linker script collects all of the declared service descriptors into a
913 single array and defines symbols that allow the framework to locate and traverse
914 the array, and determine its size.
918 not check descriptors for the following error conditions, and may behave in an
922 #. Multiple descriptors for the same range of OENs and ``call_type``
925 Once validated, the service ``init()`` callback is invoked. This function carries
927 function is only invoked on the primary CPU during cold boot. If the service
931 error and the service is ignored: this does not cause the firmware to halt.
933 The OEN and call type fields present in the SMC Function ID cover a total of
935 OENs, e.g. SMCs to call a Trusted OS function. To optimize the lookup of a
936 service handler, the framework uses an array of 128 indices that map every
937 distinct OEN/call-type combination either to one of the declared services or to
938 indicate the service is not handled. This ``rt_svc_descs_indices[]`` array is
939 populated for all of the OENs covered by a service after the service ``init()``
943 The following figure shows how the ``rt_svc_descs_indices[]`` index maps the SMC
944 Function ID call type and OEN onto a specific service handler in the
954 When the EL3 runtime services framework receives a Secure Monitor Call, the SMC
955 Function ID is passed in W0 from the lower exception level (as per the
956 `SMCCC`_). If the calling register width is AArch32, it is invalid to invoke an
957 SMC Function which indicates the SMC64 calling convention: such calls are
958 ignored and return the Unknown SMC Function Identifier result code ``0xFFFFFFFF``
961 Bit[31] (fast/yielding call) and bits[29:24] (owning entity number) of the SMC
962 Function ID are combined to index into the ``rt_svc_descs_indices[]`` array. The
963 resulting value might indicate a service that has no handler, in this case the
964 framework will also report an Unknown SMC Function ID. Otherwise, the value is
965 used as a further index into the ``rt_svc_descs[]`` array to locate the required
968 The service's ``handle()`` callback is provided with five of the SMC parameters
969 directly; the others are saved into memory for retrieval (if needed) by the
970 handler. The handler is also provided with an opaque ``handle`` for use with the
972 manipulation. The ``flags`` parameter indicates the security state of the caller
973 and the state of the SVE hint bit as per SMCCC v1.3. The framework finally sets
974 up the execution stack for the handler, and invokes the service's ``handle()``
977 On return from the handler the result registers are populated in X0-X7 as needed
978 before restoring the stack and CPU state and returning from the original SMC.
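The Function ID decode described above, combining bit[31] (fast vs. yielding) with bits[29:24] (OEN) into a 0..127 index, can be sketched as follows; the macro names mirror TF-A's ``FUNCID_*`` definitions but are restated here as assumptions:

```c
#include <stdint.h>

#define FUNCID_TYPE_SHIFT  31u   /* bit[31]: fast (1) or yielding (0) call */
#define FUNCID_OEN_SHIFT   24u   /* bits[29:24]: owning entity number */
#define FUNCID_OEN_MASK    0x3fu

/* Combine call type and OEN into the index used for the 128-entry
 * rt_svc_descs_indices[] array described above. */
static inline uint32_t get_unique_oen(uint32_t smc_fid)
{
    return (((smc_fid >> FUNCID_TYPE_SHIFT) & 1u) << 6) |
           ((smc_fid >> FUNCID_OEN_SHIFT) & FUNCID_OEN_MASK);
}
```

For example, the PSCI Function IDs (fast calls, OEN 4, e.g. ``0x84000000``) all map to index 68, so one service descriptor covers the whole range.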
983 Please refer to the :ref:`Exception Handling Framework` document.
990 The PSCI v1.1 specification categorizes APIs as optional and mandatory. All the
992 `PSCI`_ are implemented. The table lists the PSCI v1.1 APIs and their support
996 requires the platform to export a part of the implementation. Hence the level
997 of support of the mandatory APIs depends upon the support exported by the
998 platform port as well. The Juno and FVP (all variants) platforms export all the
1048 registered with the generic PSCI code to be supported.
1051 hooks to be registered with the generic PSCI code to be supported.
1055 integrating the PSCI library for EL3 Runtime Software can be found
1062 the ``USE_DSU_DRIVER`` build flag to enable the DSU driver.
1065 using ``dsu_driver_init()`` and for preserving the context of DSU
1068 To support the DSU driver, platforms must define the ``plat_dsu_data``
1077 the Trusted OS is coupled with a companion runtime service in the BL31
1078 firmware. This service is responsible for the initialisation of the Trusted
1079 OS and all communications with it. The Trusted OS is the BL32 stage of the
1083 TF-A uses a more general term for the BL32 software that runs at Secure-EL1 -
1084 the *Secure-EL1 Payload* - as it is not always a Trusted OS.
1088 production system using the Runtime Services Framework. On such a system, the
1089 Test BL32 image and service are replaced by the Trusted OS and its dispatcher
1090 service. The TF-A build system expects that the dispatcher will define the
1091 build flag ``NEED_BL32`` to enable it to include the BL32 in the build either
1092 as a binary or to compile from source depending on whether the ``BL32`` build
1096 communication with the normal-world software running in EL1/EL2. Communication
1097 is initiated by the normal-world software
1099 - either directly through a Fast SMC (as defined in the `SMCCC`_)
1102 informs the TSPD about the requested power management operation. This allows
1103 the TSP to prepare for or respond to the power state change
1107 - Initializing the TSP
1109 - Routing requests and responses between the secure and the non-secure
1110 states during the two types of communications just described
1116 the BL32 image. It needs access to the information passed by BL2 to BL31 to do
1123 which returns a reference to the ``entry_point_info`` structure corresponding to
1124 the image which will be run in the specified security state. The SPD uses this
1125 API to get entry point information for the SECURE image, BL32.
1127 In the absence of a BL32 image, BL31 passes control to the normal world
1128 bootloader image (BL33). When the BL32 image is present, it is typical
1129 that the SPD wants control to be passed to BL32 first and then later to BL33.
1131 To do this the SPD has to register a BL32 initialization function during
1132 initialization of the SPD service. The BL32 initialization function has this
1139 and is registered using the ``bl31_register_bl32_init()`` function.
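The registration step above can be sketched as follows. The ``bl31_register_bl32_init()`` prototype matches TF-A's; the ``tspd_init()``/``tspd_setup()`` bodies are hypothetical stand-ins for a real SPD, which would actually enter the Secure-EL1 payload rather than return a constant:

```c
#include <stddef.h>
#include <stdint.h>

typedef int32_t (*bl32_init_t)(void);

static bl32_init_t bl32_init_fn;

/* BL31 records the function to invoke once runtime services are
 * fully initialized. */
static void bl31_register_bl32_init(bl32_init_t func)
{
    bl32_init_fn = func;
}

/* Hypothetical SPD BL32 initialization function: a real SPD would
 * perform a world switch into the TSP at Secure-EL1 here. */
static int32_t tspd_init(void)
{
    return 1; /* non-zero: BL32 initialized successfully */
}

/* Called from the SPD setup function during BL31 initialization. */
static void tspd_setup(void)
{
    bl31_register_bl32_init(tspd_init);
}
```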
1141 TF-A supports two approaches for the SPD to pass control to BL32 before
1142 returning through EL3 and running the non-trusted firmware (BL33):
1144 #. In the BL32 setup function, use ``bl31_set_next_image_type()`` to
1145 request that the exit from ``bl31_main()`` is to the BL32 entrypoint in
1146 Secure-EL1. BL31 will exit to BL32 using the asynchronous method by
1149 When the BL32 has completed initialization at Secure-EL1, it returns to
1150 BL31 by issuing an SMC, using a Function ID allocated to the SPD. On
1151 receipt of this SMC, the SPD service handler should switch the CPU context
1152 from trusted to normal world and use the ``bl31_set_next_image_type()`` and
1153 ``bl31_prepare_next_image_entry()`` functions to set up the initial return to
1154 the normal world firmware BL33. On return from the handler the framework
1159 invoke a 'world-switch synchronous call' to Secure-EL1 to run the BL32
1166 On completion BL32 returns control to BL31 via a SMC, and on receipt the
1167 SPD service handler invokes the synchronous call return mechanism to return
1168 to the BL32 initialization function. On return from this function,
1169 ``bl31_main()`` will set up the return to the normal world firmware BL33 and
1170 continue the boot process in the normal world.
1176 location in memory where the handler is stored is called the exception vector.
1177 For ARM architecture, exception vectors are stored in a table, called the exception
1180 Each EL (except EL0) has its own vector table; the VBAR_ELn register stores the base
1187 a programmer may place at specific points in a program, to check the state of
1188 processor flags at these points in the code.
1208 Applies to all exceptions taken from a lower EL in both AArch64 and AArch32 modes.
1210 Before handling any lower EL exception, we synchronize the errors at EL3 entry to ensure
1213 current EL) any time after PSTATE.A is unmasked. This is wrong because the error originated
1216 To solve this problem, synchronize the errors at EL3 entry and check for any pending
1224 - FFH: Handle the synchronized error first using **handle_pending_async_ea()**; after
1225 that, continue with the original exception. It is the only scenario where EL3 is capable
1234 BL31 implements a scheme for reporting the processor state when an unhandled
1235 exception is encountered. The reporting mechanism attempts to preserve all the
1237 reports the general purpose, EL3, Secure EL1 and some EL2 state registers.
1240 the per-CPU pointer cache. The implementation attempts to minimise the memory
1241 required for this feature. The file ``crash_reporting.S`` contains the
1351 actions very early after a CPU is released from reset in both the cold and warm
1352 boot paths. This is done by calling the ``reset_handler`` macro/function in both
1353 the BL1 and BL31 images. It in turn calls the platform and CPU specific reset
1358 platform specific reset handler can be found in the :ref:`Porting Guide` (see
1359 the ``plat_reset_handler()`` function).
1362 reset handling behavior is required between the first and the subsequent
1363 invocations of the reset handling code, this should be detected at runtime.
1364 In other words, the reset handler should be able to detect whether an action has
1366 e.g. skip the action the second time, or undo/redo it.
1374 interrupts on the platform. To this end, the platform is expected to provide the
1375 GIC driver (either GICv2 or GICv3, as selected by the platform) with the
1376 interrupt configuration during the driver initialisation.
1379 properties. In this scheme, in both GICv2 and GICv3 driver data structures, the
1381 element of the array specifies the interrupt number and its attributes
1382 (priority, group, configuration). Each element of the array shall be populated
1383 by the macro ``INTR_PROP_DESC()``. The macro takes the following arguments:
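The interrupt property array described above can be sketched as follows. The bit-field layout follows TF-A's ``interrupt_prop_t`` and the ``INTR_PROP_DESC()`` macro shape; the interrupt numbers, priorities and group/config values in the example array are made up for illustration:

```c
/* One entry per interrupt the platform wants the GIC driver to configure. */
typedef struct interrupt_prop {
    unsigned int intr_num:10;   /* interrupt number */
    unsigned int intr_pri:8;    /* priority */
    unsigned int intr_grp:2;    /* interrupt group */
    unsigned int intr_cfg:2;    /* edge- or level-triggered */
} interrupt_prop_t;

#define INTR_PROP_DESC(num, pri, grp, cfg) \
    { .intr_num = (num), .intr_pri = (pri), .intr_grp = (grp), .intr_cfg = (cfg) }

/* Hypothetical platform interrupt configuration handed to the driver. */
static const interrupt_prop_t plat_interrupt_props[] = {
    INTR_PROP_DESC(29, 0x10, 0, 1),   /* example: secure timer PPI */
    INTR_PROP_DESC(8,  0x20, 0, 1),   /* example: secure SGI */
};
```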
1400 Certain aspects of the Armv8-A architecture are implementation defined,
1403 implements a framework which categorises the common implementation defined
1415 Each of the above categories fulfils a different requirement.
1417 #. allows any processor specific initialization before the caches and MMU
1419 the intra-cluster coherency domain etc.
1421 #. allows each processor to implement the power down sequence mandated in
1424 #. allows a processor to provide additional information to the developer
1425 in the event of a crash, for example Cortex-A53 has registers which
1426 can expose the data cache contents.
1428 #. allows a processor to define a function that inspects and reports the status
1431 Please note that only 2. is mandated by the TRM.
the CPU errata workarounds to be applied for each CPU type during reset
can be found in the :ref:`Arm CPU Specific Build Macros` document.
The CPU specific operations framework depends on the ``cpu_ops`` structure which
needs to be exported for each type of CPU in the platform. It is defined in
``include/lib/cpus/aarch64/cpu_macros.S`` and has the following fields: ``midr``,
exports the ``cpu_ops`` for Cortex-A53 CPU. According to the platform
configuration, these CPU specific files must be included in the build by
the platform makefile. The generic CPU specific operations framework code exists
the Procedure Call Standard (PCS) in their internals. This is done to ensure
calling these functions from outside the file doesn't unexpectedly corrupt
registers in the very early environment and to help the internals to be easier
to understand. Please see the :ref:`firmware_design_cpu_errata_implementation`
| x16, x17 | do not use (used by the linker) |
After a reset, the state of the CPU when it calls the generic reset handler is:
The BL entrypoint code first invokes the ``plat_reset_handler()`` to allow
the platform to perform any system initialization required and any system
the current CPU midr, finds the matching ``cpu_ops`` entry in the ``cpu_ops``
array and returns it. Note that only the part number and implementer fields
in midr are used to find the matching ``cpu_ops`` entry. The ``reset_func()`` in
the returned ``cpu_ops`` is then invoked which executes the required reset
handling for that CPU and also any errata workarounds enabled by the platform.
It should be defined using the ``cpu_reset_func_{start,end}`` macros and its
body may only clobber x0 to x14 with x14 being the cpu_rev parameter. The cpu
file should also include a call to ``cpu_reset_prologue`` at the start of the
During the BL31 initialization sequence, the pointer to the matching ``cpu_ops``
request, determines the highest power level at which to execute power down
sequence for a particular CPU. It uses the ``prepare_cpu_pwr_dwn()`` function to
pick the right power down handler for the requested level. The function
retrieves the ``cpu_pwr_down_ops`` array, and indexes into the required level. If the
requested power level is higher than what a CPU driver supports, the handler
At runtime the platform hooks for power down are invoked by the PSCI service to
the observation that events like GIC wakeups have a high likelihood of happening
while the core is in the middle of its powerdown sequence (at ``wfi``). Older
the work and latency involved, the newer cores will "give up" midway through if
no context has been lost yet. This is possible as the powerdown operation is
To cater for this possibility, the powerdown hook will be called a second time
after a wakeup. The expectation is that the first call will operate as before,
while the second call will undo anything the first call did. This should be done
statelessly, for example by toggling the relevant bits.
If the crash reporting is enabled in BL31, when a crash occurs, the crash
reporting framework calls ``do_cpu_reg_dump`` which retrieves the matching
``cpu_ops`` is invoked, which then returns the CPU specific register values to
be reported and a pointer to the ASCII list of register names in a format
expected by the crash reporting framework.
Each erratum has a build flag in ``lib/cpus/cpu-ops.mk`` of the form:
applied and reported at runtime (either by status reporting or the errata ABI).
It will be called with the CPU revision as its first parameter (x0) and
It should obtain the cpu revision (with ``cpu_get_rev_var``), call its
revision checker, and perform the mitigation, should the erratum apply.
#. Register itself to the framework
where the ``errata_flag`` is the enable flag in ``cpu-ops.mk`` described
See the next section on how to do this easily.
CVEs have the format ``CVE_<year>_<number>``. To fit them in the framework, the
``erratum_id`` for the checker and the workaround functions become the
``number`` part of its name and the ``ERRATUM(<number>)`` part of the
registration should instead be ``CVE(<year>, <number>)``. In the extremely
unlikely scenario where a CVE number and an erratum number clash, the CVE number
AArch32 uses the legacy convention. The checker function has the format
``check_errata_<erratum_id>`` and the workaround has the format
``errata_<cpu_number>_<erratum_id>_wa`` where ``cpu_number`` is the shortform
letter and number name of the CPU.
For CVEs the ``erratum_id`` also becomes ``cve_<year>_<number>``.
``include/lib/cpus/aarch64/cpu_macros.S`` and the preferred way to implement
in some arbitrary register, would have an implementation for the Cortex-A77,
In a debug build of TF-A, on a CPU that comes out of reset, both BL1 and the
errata status reporting function. It will read the ``errata_entries`` list of
Reporting the status of errata workarounds is for informational purposes only; it
- the static contents of the image. These are data actually stored in the
  binary on the disk. In the ELF terminology, they are called ``PROGBITS``
- the run-time contents of the image. These are data that don't occupy any
  space in the binary on the disk. The ELF binary just contains some
  metadata indicating where these data will be stored at run-time and the
  In the ELF terminology, they are called ``NOBITS`` sections.
All PROGBITS sections are grouped together at the beginning of the image,
governed by the linker scripts. This ensures that the raw binary images are
sections then the resulting binary file would contain zero bytes in place of
this NOBITS section, making the image unnecessarily bigger. Smaller images
allow faster loading from the FIP to the main memory.
linker scripts export some symbols into the program symbol table. Their values
figure out the image memory layout.
Linker symbols follow the following naming convention in TF-A.
constraint on the section's end address then ``__<SECTION>_END__`` corresponds
to the end address of the section's actual contents, rounded up to the right
boundary. Refer to the value of ``__<SECTION>_UNALIGNED_END__`` to know the
actual end address of the section's contents.
alignment constraint on the section's end address then ``__<SECTION>_SIZE__``
corresponds to the size of the section's actual contents, rounded up to the
In other words, ``__<SECTION>_SIZE__ = __<SECTION>_END__ - __<SECTION>_START__``. Refer to the value of ``__<SECTION>_UNALIGNED_SIZE__``
to know the actual size of the section's contents.
Some of the linker symbols are mandatory as TF-A code relies on them to be
defined. They are listed in the following subsections. Some of them must be
used by any code but they help in understanding the bootloader images' memory
layout as they are easy to spot in the link map files.
All BL images share the following requirements:
- The MMU setup code needs to know the extents of the coherent and read-only
  memory regions to set the right memory attributes. When
  ``SEPARATE_CODE_AND_RODATA=1``, it needs to know more specifically how the
BL1, being the ROM image, has additional requirements. BL1 resides in ROM and
- ``__DATA_ROM_START__`` Start address of the ``.data`` section in ROM. Must be
- ``__DATA_RAM_START__`` Address in RAM where the ``.data`` section should be
- ``__DATA_SIZE__`` Size of the ``.data`` section (in ROM or RAM).
How to choose the right base addresses for each bootloader stage image
locations and the base addresses of each image must be chosen carefully such
that images don't overlap each other in an undesired way. As the code grows,
the base addresses might need adjustments to cope with the new memory layout.
The memory layout is completely specific to the platform and so there is no
general recipe for choosing the right base addresses for each bootloader image.
However, there are tools to aid in understanding the memory layout. These are
the link map files: ``build/<platform>/<build-type>/bl<x>/bl<x>.map``, with ``<x>``
being the stage bootloader. They provide a detailed view of the memory usage of
each image. Among other useful information, they provide the end address of
For each bootloader image, the platform code must provide its start address
as well as a limit address that it must not overstep. The latter is used in the
linker scripts to check that the image doesn't grow past that address. If that
happens, the linker will issue a message similar to the following:
Additionally, if the platform memory layout implies some image overlaying like
on FVP, BL31 and TSP need to know the limit address that their PROGBITS
TF-A does not provide any mechanism to verify at boot time that the memory
The platform must specify the memory available in the system for all the
For example, in the case of BL1 loading BL2, ``bl1_plat_sec_mem_layout()`` will
return the region defined by the platform where BL1 intends to load BL2. The
``load_image()`` function performs a bounds check for the image size based on the
base and maximum image size provided by the platforms. Platforms must take
this behaviour into account when defining the base/size for each of the images.
The following list describes the memory layout on the Arm development platforms:
Firmware and the platform's power controller. This is located at the base of
Trusted SRAM. The amount of Trusted SRAM available to load the bootloader
images is reduced by the size of the shared memory.
The shared memory is used to store the CPUs' entrypoint mailbox. On Juno,
this is also used for the MHU payload when passing messages to and from the
and also the dynamic firmware configurations.
- On FVP, BL1 is originally sitting in the Trusted ROM at address ``0x0``. On
  data are relocated to the top of Trusted SRAM at runtime.
is loaded at the top of the Trusted SRAM, such that its NOBITS sections will
remain valid only until execution reaches the EL3 Runtime Software entry
- On Juno, SCP_BL2 is loaded temporarily into the EL3 Runtime Software memory
  region and transferred to the SCP before being overwritten by EL3 Runtime
- BL32 (for AArch64) can be loaded in one of the following locations:
- Secure region of DRAM (top 16MB of DRAM configured by the TrustZone
The location of the BL32 image will result in different memory maps. This is
illustrated for both FVP and Juno in the following diagrams, using the TSP as
Loading the BL32 image in TZC secured DRAM doesn't change the memory
layout of the other images in Trusted SRAM.
``bl2_mem_params_descs`` contains parameters passed from BL2 to the next
(These diagrams only cover the AArch64 case)
been added to the storage layer and allows a package to be read from supported
terminated by an end marker entry, and since the size of the end marker entry is 0 bytes,
its offset equals the total size of the FIP file. All ToC entries describe some
payload data that has been appended to the end of the binary package. With the
information provided in the ToC entry the corresponding payload data can be
The ToC header and entry formats are described in the header file
``include/tools_share/firmware_image_package.h``. This file is used by both the
The ToC header has the following fields:
`name`: The name of the ToC. This is currently used to validate the header.
`serial_number`: A non-zero number provided by the creation tool
A ToC entry has the following fields:
the requested image name into the corresponding UUID when accessing the
`offset_address`: The offset address at which the corresponding payload data
can be found. The offset is calculated from the ToC base address.
`size`: The size of the corresponding payload data in bytes.
added to the tool as required.
non-volatile platform storage. For the Arm development platforms, this is
Bootloader images are loaded according to the platform policy as specified by
the function ``plat_get_image_source()``. For the Arm development platforms, this
means the platform will attempt to load images from a Firmware Image Package
located at the start of NOR FLASH0.
in the translation tables. The translation granule size in TF-A is 4KB. This
is the smallest possible size of the coherent memory region.
the Device nGnRE attributes when the MMU is turned on. Hence, at the expense of
The alternative to the above approach is to allocate the susceptible data
approach requires the data structures to be designed so that it is possible to
work around the issue of mismatched memory attributes by performing software
Disabling the use of coherent memory in TF-A
It might be desirable to avoid the cost of allocating coherent memory on
memory in firmware images through the build flag ``USE_COHERENT_MEM``.
This flag is enabled by default. It can be disabled to choose the second
The below sections analyze the data structures allocated in the coherent memory
region and the changes required to allocate them in normal memory.
The ``psci_non_cpu_pd_nodes`` data structure stores the platform's power domain
structure is allocated in the coherent memory region in TF-A because it can be
* Index of the first CPU power domain node level 0 which has this node
* Number of CPU power domains which are siblings of the domain indexed
* by 'cpu_start_idx' i.e. all the domains in the range 'cpu_start_idx
* Index of the parent power domain node.
/* For indexing the psci_lock array */
In order to move this data structure to normal memory, the use of each of its
them from coherent memory involves only doing a clean and invalidate of the
* Bit[0] : choosing. This field is set when the CPU is
* Bits[1 - 15] : number. This is the bakery number allocated.
It is a characteristic of Lamport's Bakery algorithm that the volatile per-CPU
fields can be read by all CPUs but only written to by the owning CPU.
Depending upon the data cache line size, the per-CPU fields of the
CPUs with mismatched memory attributes. Since these fields are a part of the
safeguard against the resulting coherency issues. As a result, simple software
the following example.
local cache line which contains a copy of the fields for other CPUs as well. Now
CPU1 updates its per-CPU field of the ``bakery_lock_t`` structure with data cache
its field in any other cache line in the system. This operation will invalidate
the update made by CPU0 as well.
To use bakery locks when ``USE_COHERENT_MEM`` is disabled, the lock data structure
has been redesigned. The changes utilise the characteristic of Lamport's Bakery
algorithm mentioned earlier. The bakery_lock structure only allocates the memory
for a single CPU. The macro ``DEFINE_BAKERY_LOCK`` allocates all the bakery locks
needed for a CPU into a section ``.bakery_lock``. The linker allocates the memory
for other cores by using the total size allocated for the bakery_lock section
perform software cache maintenance on the lock data structure without running
* Bit[0] : choosing. This field is set when the CPU is
* Bits[1 - 15] : number. This is the bakery number allocated.
the combination of corresponding ``bakery_info_t`` structures for all CPUs in the
system represents the complete bakery lock. The view in memory for a system
operation on Lock_N, the corresponding ``bakery_info_t`` in both CPU0 and CPU1
Removal of the coherent memory region leads to the additional software overhead
of performing cache maintenance for the affected data structures. However, since
the memory where the data structures are allocated is cacheable, the overhead is
- Multiple cache line reads for each lock operation, since the bakery locks
Measurements indicate that when bakery locks are allocated in Normal memory, the
in Device memory the same is 2 microseconds. The measurements were done on the
``USE_COHERENT_MEM`` and needs to use bakery locks in the porting layer, it can
optionally define the macro ``PLAT_PERCPU_BAKERY_LOCK_SIZE`` (see the
:ref:`Porting Guide`). Refer to the reference platform code for examples.
In the Armv8-A VMSA, translation table entries include fields that define the
properties of the target memory region, such as its access permissions. The
The 2KB alignment for the exception vectors is an architectural
read-write permissions, whereas the code and read-only data below are configured
However, the read-only data are not aligned on a page boundary. They are
contiguous to the code. Therefore, the end of the code section and the beginning
of the read-only data section might share a memory page. This forces both to be
mapped with the same memory attributes. As the code needs to be executable, this
means that the read-only data stored on the same memory page as the code are
TF-A provides the build flag ``SEPARATE_CODE_AND_RODATA`` to isolate the code and
of the access permissions for the code and read-only data. In this case,
platform code gets a finer-grained view of the image layout and can
appropriately map the code region as executable and the read-only data as
between the code and read-only data to ensure the segregation of the two. To
limit the memory cost, this flag also changes the memory layout such that the
With this more condensed memory layout, the separation of read-only data will
add zero or one page to the memory footprint of each BL image. Each platform
should consider the trade-off between memory footprint and security.
The following macros are provided by the framework:
the event name, which must be a valid C identifier. All calls to
the ``REGISTER_PUBSUB_EVENT`` macro must be placed in the file
subscribed handlers and calling them in turn. The handlers will be passed the
- ``PUBLISH_EVENT(event)``: Like ``PUBLISH_EVENT_ARG``, except that the value
- ``SUBSCRIBE_TO_EVENT(event, handler)``: Registers the ``handler`` to
  subscribe to ``event``. The handler will be executed whenever the ``event``
iteration. This macro can be used for those patterns that none of the
There may be an arbitrary number of handlers registered to the same event. The
Note that publishing an event on a PE blocks until all the subscribed handlers
finish executing on the PE.
renamed, or have their semantics altered in the future. Platforms may however
- Define the event ``foo`` in the ``pubsub_events.h``.
- Depending on the nature of the event, use one of the ``PUBLISH_EVENT_*()`` macros to
  publish the event at the appropriate path and time of execution.
Reclaiming the BL31 initialization code
A significant amount of the code used for the initialization of BL31 is never
needed again after boot time. In order to reduce the runtime memory
footprint, the memory used for this code can be reclaimed after initialization
within the BL image for later reclamation by the platform. The platform can
specify the filter and the memory region for this init section in BL31 via the
plat.ld.S linker script. For example, on the FVP, this section is placed
overlapping the secondary CPU stacks so that after the cold boot is done, this
memory can be reclaimed for the stacks. The init memory section is initially
completed, the FVP changes the attributes of this section to ``RW``,
are changed within the ``bl31_plat_runtime_setup`` platform hook. The init
boot initialization and it is up to the platform to make the decision.
Please note that this will disable inlining for any functions with the ``__init``
By default, the global physical counter is used for the timestamp
A PMF timestamp is uniquely identified across the system via the
service name and a service identifier. Both the service name and
identifier are unique within the system as a whole.
To register a PMF service, the ``PMF_REGISTER_SERVICE()`` macro from ``pmf.h``
is used. The arguments required are the service name, the service ID,
the total number of local timestamps to be captured and a set of flags.
The ``flags`` field can be specified as a bitwise-OR of the following values:
PMF_DUMP_ENABLE: The timestamp is dumped on the serial console.
retrieve a particular timestamp for the given service at runtime.
from within TF-A. In order to retrieve timestamps from outside of TF-A, the
accepts the same set of arguments as the ``PMF_REGISTER_SERVICE()``
Having registered the service, the ``PMF_CAPTURE_TIMESTAMP()`` macro can be
used to capture a timestamp at the location where it is used. The macro
takes the service name, a local timestamp identifier and a flag as arguments.
instructs PMF to do cache maintenance following the capture. Cache
maintenance is required if any of the service's timestamps are captured
To capture a timestamp in assembly code, the caller should use
calculate the address of where the timestamp would be stored. The
caller should then read the ``CNTPCT_EL0`` register to obtain the timestamp
and store it at the determined address for later retrieval.
These macros accept the CPU's MPIDR value, or its ordinal position
smc_fid: Holds the SMC identifier which is either `PMF_SMC_GET_TIMESTAMP_32`
when the caller of the SMC is running in AArch32 mode
or `PMF_SMC_GET_TIMESTAMP_64` when the caller is running in AArch64 mode.
x2: The `mpidr` of the CPU for which the timestamp has to be retrieved.
This can be the `mpidr` of a different core to the one initiating
the SMC. In that case, service specific cache maintenance may be
required to ensure the updated copy of the timestamp is returned.
`PMF_CACHE_MAINT` is passed, then the PMF code will perform a
cache invalidate before reading the timestamp. This ensures
#. ``pmf_smc.c`` contains the SMC handling for registered PMF services.
#. ``pmf.h`` contains the public interface to the Performance Measurement Framework.
section lists the usage of Architecture Extensions, and build flags
- Determine the architecture extension support in TF-A build: All the mandatory
  by probing the compiler and checking what's supported by the compiler and what's best
The build system requires that the platform provides a valid numeric value based on
For details on the Architecture Extension and available features, please refer
to the respective Architecture Extension Supplement.
spinlocks. The ``USE_SPINLOCK_CAS`` build option, when set to 1, selects the
spinlock implementation using the ARMv8.1-LSE Compare and Swap instruction.
the option is only available to AArch64 builds.
- The presence of ARMv8.2-TTCNP is detected at runtime. When it is present, the
Processing Elements in the same Inner Shareable domain use the same
the Non-secure world so that lower ELs are allowed to use them without
In order to enable the Secure world to use it, ``CTX_INCLUDE_PAUTH_REGS``
to the context that is saved when doing a world switch.
BL2, BL31, and the TSP if it is used.
of the value of these build flags if the CPU supports it.
If ``ARM_ARCH_MAJOR == 8`` and ``ARM_ARCH_MINOR >= 3`` the code footprint of
enabling PAuth is lower because the compiler will use the optimized
PAuth instructions rather than the backwards-compatible ones.
There are several Armv7-A extensions available. Obviously the TrustZone
extension is mandatory to support the TF-A bootloader and runtime services.
the toolchain target architecture directive.
A platform may choose not to directly define the toolchain target architecture
TF-A code is logically divided between the three boot loader stages mentioned
in the previous sections. The code is also divided into the following
categories (present as directories in the source code):
the platform.
reside in the ``services/spd`` directory (e.g. ``services/spd/tspd``).
Each boot loader stage uses code from one or more of the above mentioned
categories. Based upon the above, the code layout looks like this:
defined by the build system. This enables TF-A to compile certain code only
All assembler files have the ``.S`` extension. The linker source files for each
boot stage have the extension ``.ld.S``. These are processed by GCC to create the
linker scripts which have the extension ``.ld``.
FDTs provide a description of the hardware platform and are used by the Linux
kernel at boot time. These can be found in the ``fdts`` directory.