/* SPDX-License-Identifier: BSD-2-Clause */
/*
 * Copyright (c) 2015-2025, Linaro Limited
 * Copyright (c) 2021-2023, Arm Limited
 */

#include <platform_config.h>

#include <arm.h>
#include <arm64_macros.S>
#include <asm.S>
#include <generated/asm-defines.h>
#include <keep.h>
#include <kernel/asan.h>
#include <kernel/thread.h>
#include <kernel/thread_private.h>
#include <kernel/thread_private_arch.h>
#include <mm/core_mmu.h>
#include <sm/optee_smc.h>
#include <sm/teesmc_opteed.h>
#include <sm/teesmc_opteed_macros.h>

	/*
	 * Setup SP_EL0 and SP_EL1, SP will be set to SP_EL0.
	 * SP_EL0 is assigned:
	 * stack_tmp + (cpu_id + 1) * stack_tmp_stride - STACK_TMP_GUARD
	 * SP_EL1 is assigned thread_core_local[cpu_id]
	 */
	.macro set_sp
	bl	__get_core_pos
	cmp	x0, #CFG_TEE_CORE_NB_CORE
	/* Unsupported CPU, park it before it breaks something */
	bge	unhandled_cpu
	add	x0, x0, #1
	adr_l	x1, stack_tmp_stride
	ldr	w1, [x1]
	mul	x1, x0, x1

	/* x0 = stack_tmp - STACK_TMP_GUARD */
	adr_l	x2, stack_tmp_rel
	ldr	w0, [x2]
	add	x0, x0, x2

	msr	spsel, #0
	add	sp, x1, x0
	bl	thread_get_core_local
	msr	spsel, #1
	mov	sp, x0
	msr	spsel, #0
	.endm

	.macro read_feat_mte reg
	mrs	\reg, id_aa64pfr1_el1
	ubfx	\reg, \reg, #ID_AA64PFR1_EL1_MTE_SHIFT, #4
	.endm

	.macro read_feat_pan reg
	mrs	\reg, id_mmfr3_el1
	ubfx	\reg, \reg, #ID_MMFR3_EL1_PAN_SHIFT, #4
	.endm

	.macro set_sctlr_el1
	mrs	x0, sctlr_el1
	orr	x0, x0, #SCTLR_I
	orr	x0, x0, #SCTLR_SA
	orr	x0, x0, #SCTLR_SPAN
#if defined(CFG_CORE_RWDATA_NOEXEC)
	orr	x0, x0, #SCTLR_WXN
#endif
#if defined(CFG_SCTLR_ALIGNMENT_CHECK)
	orr	x0, x0, #SCTLR_A
#else
	bic	x0, x0, #SCTLR_A
#endif
#ifdef CFG_MEMTAG
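	/*
	 * ID_AA64PFR1_EL1.MTE reads 0 when MTE is not implemented, 1
	 * for instruction-only MTE and 2 or higher for full tag
	 * checking (FEAT_MTE2), so tag checking is only enabled below
	 * when the field is above 1.
	 */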
	read_feat_mte x1
	cmp	w1, #1
	b.ls	111f
	orr	x0, x0, #(SCTLR_ATA | SCTLR_ATA0)
	bic	x0, x0, #SCTLR_TCF_MASK
	bic	x0, x0, #SCTLR_TCF0_MASK
111:
#endif
#if defined(CFG_TA_PAUTH) && defined(CFG_TA_BTI)
	orr	x0, x0, #SCTLR_BT0
#endif
#if defined(CFG_CORE_PAUTH) && defined(CFG_CORE_BTI)
	orr	x0, x0, #SCTLR_BT1
#endif
	msr	sctlr_el1, x0
	.endm

	.macro init_memtag_per_cpu
	read_feat_mte x0
	cmp	w0, #1
	b.ls	11f

#ifdef CFG_TEE_CORE_DEBUG
	/*
	 * This together with GCR_EL1.RRND = 0 will make the tags
	 * acquired with the irg instruction deterministic.
	 */
	mov_imm	x0, 0xcafe00
	msr	rgsr_el1, x0
	/* Avoid tag = 0x0 and 0xf */
	mov	x0, #0
#else
	/*
	 * Still avoid tag = 0x0 and 0xf as we use that tag for
	 * everything which isn't explicitly tagged. Set
	 * GCR_EL1.RRND = 1 to allow an implementation-specific
	 * method of generating the tags.
	 */
	mov	x0, #GCR_EL1_RRND
#endif
	orr	x0, x0, #1
	orr	x0, x0, #(1 << 15)
	msr	gcr_el1, x0

	/*
	 * Enable the tag checks on the current CPU.
	 *
	 * Depends on boot_init_memtag() having cleared tags for
	 * TEE core memory. Well, not really, addresses with the
	 * tag value 0b0000 will use unchecked access due to
	 * TCR_TCMA0.
	 */
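	/*
	 * TCR_TBI0 enables top-byte ignore so that tagged pointers
	 * can be dereferenced, and TCR_TCMA0 makes accesses through
	 * pointers with the logical address tag 0b0000 unchecked.
	 */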
	mrs	x0, tcr_el1
	orr	x0, x0, #TCR_TBI0
	orr	x0, x0, #TCR_TCMA0
	msr	tcr_el1, x0

	mrs	x0, sctlr_el1
	orr	x0, x0, #SCTLR_TCF_SYNC
	orr	x0, x0, #SCTLR_TCF0_SYNC
	msr	sctlr_el1, x0

	isb
11:
	.endm

	.macro init_pauth_secondary_cpu
	msr	spsel, #1
	ldp	x0, x1, [sp, #THREAD_CORE_LOCAL_KEYS]
	msr	spsel, #0
	write_apiakeyhi	x0
	write_apiakeylo	x1
	mrs	x0, sctlr_el1
	orr	x0, x0, #SCTLR_ENIA
	msr	sctlr_el1, x0
	isb
	.endm

	.macro init_pan
	read_feat_pan x0
	cmp	x0, #0
	b.eq	1f
	mrs	x0, sctlr_el1
	bic	x0, x0, #SCTLR_SPAN
	msr	sctlr_el1, x0
	write_pan_enable
1:
	.endm

FUNC _start , :
	/*
	 * Temporary copy of the boot argument registers, they will be
	 * passed to boot_save_args() further down.
	 */
	mov	x19, x0
	mov	x20, x1
	mov	x21, x2
	mov	x22, x3

	adr	x0, reset_vect_table
	msr	vbar_el1, x0
	isb

#ifdef CFG_PAN
	init_pan
#endif

	set_sctlr_el1
	isb
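	/*
	 * Note that x19-x22 are callee-saved, so the boot arguments
	 * copied above survive the function calls made below until
	 * boot_save_args() consumes them.
	 */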
#ifdef CFG_WITH_PAGER
	/*
	 * Move init code into correct location and move hashes to a
	 * temporary safe location until the heap is initialized.
	 *
	 * The binary is built as:
	 * [Pager code, rodata and data] : In correct location
	 * [Init code and rodata] : Should be copied to __init_start
	 * [struct boot_embdata + data] : Should be saved before
	 * initializing pager, the first uint32_t tells the length of
	 * the data
	 */
	adr	x0, __init_start	/* dst */
	adr	x1, __data_end		/* src */
	adr	x2, __init_end
	sub	x2, x2, x0		/* init len */
	ldr	w4, [x1, x2]		/* length of hashes etc */
	add	x2, x2, x4		/* length of init and hashes etc */
	/* Copy backwards (as memmove) in case we're overlapping */
	add	x0, x0, x2		/* __init_start + len */
	add	x1, x1, x2		/* __data_end + len */
	adr_l	x3, boot_cached_mem_end
	str	x0, [x3]
	adr	x2, __init_start
copy_init:
	ldp	x3, x4, [x1, #-16]!
	stp	x3, x4, [x0, #-16]!
	cmp	x0, x2
	b.gt	copy_init
#else
	/*
	 * The binary is built as:
	 * [Core, rodata and data] : In correct location
	 * [struct boot_embdata + data] : Should be moved to right before
	 * __vcore_free_end, the first uint32_t tells the length of the
	 * struct + data
	 */
	adr_l	x1, __data_end		/* src */
	ldr	w2, [x1]		/* struct boot_embdata::total_len */
	/* dst */
	adr_l	x0, __vcore_free_end
	sub	x0, x0, x2
	/* round down to beginning of page */
	bic	x0, x0, #(SMALL_PAGE_SIZE - 1)
	adr_l	x3, boot_embdata_ptr
	str	x0, [x3]

	/* Copy backwards (as memmove) in case we're overlapping */
	add	x1, x1, x2
	add	x2, x0, x2
	adr_l	x3, boot_cached_mem_end
	str	x2, [x3]

copy_init:
	ldp	x3, x4, [x1, #-16]!
	stp	x3, x4, [x2, #-16]!
	cmp	x2, x0
	b.gt	copy_init
#endif
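	/*
	 * The non-pager placement above is roughly the following C
	 * (a sketch for illustration only, ROUNDDOWN() as in OP-TEE):
	 *
	 * len = ((struct boot_embdata *)__data_end)->total_len;
	 * dst = ROUNDDOWN(__vcore_free_end - len, SMALL_PAGE_SIZE);
	 * boot_embdata_ptr = dst;
	 * boot_cached_mem_end = dst + len;
	 * memmove(dst, __data_end, len);
	 */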
	/*
	 * Clear .bss. This code obviously depends on the linker keeping
	 * the start/end of .bss at least 8-byte aligned.
	 */
	adr_l	x0, __bss_start
	adr_l	x1, __bss_end
clear_bss:
	str	xzr, [x0], #8
	cmp	x0, x1
	b.lt	clear_bss

#ifdef CFG_NS_VIRTUALIZATION
	/*
	 * Clear .nex_bss. This code obviously depends on the linker
	 * keeping the start/end of .nex_bss at least 8-byte aligned.
	 */
	adr_l	x0, __nex_bss_start
	adr_l	x1, __nex_bss_end
clear_nex_bss:
	str	xzr, [x0], #8
	cmp	x0, x1
	b.lt	clear_nex_bss
#endif

#if defined(CFG_CORE_PHYS_RELOCATABLE)
	/*
	 * Save the base physical address, it will not change after this
	 * point.
	 */
	adr_l	x2, core_mmu_tee_load_pa
	adr	x1, _start		/* Load address */
	str	x1, [x2]

	mov_imm	x0, TEE_LOAD_ADDR	/* Compiled load address */
	sub	x0, x1, x0		/* Relocation offset */

	cbz	x0, 1f
	bl	relocate
1:
#endif

#ifdef CFG_CORE_SANITIZE_KADDRESS
	/* Initialize the entire shadow area with no access */
	adr_l	x0, __asan_shadow_start	/* start */
	adr_l	x1, __asan_shadow_end	/* limit */
	mov	x2, #ASAN_DATA_RED_ZONE
1:	str	x2, [x0], #8
	cmp	x0, x1
	bls	1b

#if !defined(CFG_DYN_CONFIG)
	/* Mark the entire stack area as OK */
	mov_imm	x2, CFG_ASAN_SHADOW_OFFSET
	adr_l	x0, __nozi_stack_start	/* start */
	lsr	x0, x0, #ASAN_BLOCK_SHIFT
	add	x0, x0, x2
	adr_l	x1, __nozi_stack_end	/* limit */
	lsr	x1, x1, #ASAN_BLOCK_SHIFT
	add	x1, x1, x2
	mov	w2, #0
1:	strb	w2, [x0], #1
	cmp	x0, x1
	bls	1b
#endif
#endif

	/* Setup SP_EL0 and SP_EL1, SP will be set to SP_EL0 */
#if defined(CFG_DYN_CONFIG)
	/*
	 * Point SP_EL0 to a temporary stack at the end of mapped core
	 * memory.
	 * Point SP_EL1 to a temporary struct thread_core_local just
	 * before the temporary stack.
	 */
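	/*
	 * Approximate layout of the temporary allocation carved out
	 * below boot_embdata_ptr (both stacks grow downwards):
	 *
	 * tmp stack end = boot_embdata_ptr - __STACK_CANARY_SIZE / 2
	 * abt stack end = tmp stack end - THREAD_BOOT_INIT_TMP_ALLOC / 2
	 * thread_core_local = boot_embdata_ptr - THREAD_BOOT_INIT_TMP_ALLOC
	 */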
	adr_l	x0, boot_embdata_ptr
	ldr	x0, [x0]
	sub	x1, x0, #THREAD_BOOT_INIT_TMP_ALLOC

	/* Clear the allocated struct thread_core_local */
	add	x2, x1, #THREAD_CORE_LOCAL_SIZE
1:	stp	xzr, xzr, [x2, #-16]!
	cmp	x2, x1
	bgt	1b

	mov	x2, #THREAD_ID_INVALID
	str	x2, [x1, #THREAD_CORE_LOCAL_CURR_THREAD]
	mov	w2, #THREAD_CLF_TMP
	str	w2, [x1, #THREAD_CORE_LOCAL_FLAGS]
	sub	x0, x0, #(__STACK_CANARY_SIZE / 2)
	str	x0, [x1, #THREAD_CORE_LOCAL_TMP_STACK_VA_END]
	sub	x2, x0, #(THREAD_BOOT_INIT_TMP_ALLOC / 2)
	str	x2, [x1, #THREAD_CORE_LOCAL_ABT_STACK_VA_END]
	msr	spsel, #1
	mov	sp, x1
	msr	spsel, #0
	mov	sp, x0
	/*
	 * Record a single core, to be changed later before secure world
	 * boot is done.
	 */
	adr_l	x2, thread_core_local
	str	x1, [x2]
	adr_l	x2, thread_core_count
	mov	x0, #1
	str	x0, [x2]
#else
	set_sp

	/* Initialize thread_core_local[current_cpu_id] for early boot */
	bl	thread_get_abt_stack
	mov	x1, sp
	msr	spsel, #1
	str	x1, [sp, #THREAD_CORE_LOCAL_TMP_STACK_VA_END]
	str	x0, [sp, #THREAD_CORE_LOCAL_ABT_STACK_VA_END]
	mov	x0, #THREAD_ID_INVALID
	str	x0, [sp, #THREAD_CORE_LOCAL_CURR_THREAD]
	mov	w0, #THREAD_CLF_TMP
	str	w0, [sp, #THREAD_CORE_LOCAL_FLAGS]
	msr	spsel, #0
#endif

	/* Enable aborts now that we can receive exceptions */
	msr	daifclr, #DAIFBIT_ABT

	/*
	 * Invalidate dcache for all memory used during initialization to
	 * avoid nasty surprises when the cache is turned on. We must not
	 * invalidate memory not used by OP-TEE since we may invalidate
	 * entries used by, for instance, ARM Trusted Firmware.
	 */
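	/*
	 * dcache_cleaninv_range() takes a start address in x0 and a
	 * size in x1, hence the subtraction: the range covered is
	 * [__text_start, boot_cached_mem_end).
	 */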
	adr_l	x0, __text_start
	adr_l	x1, boot_cached_mem_end
	ldr	x1, [x1]
	sub	x1, x1, x0
	bl	dcache_cleaninv_range

	/* Enable Console */
	bl	console_init

	mov	x0, x19
	mov	x1, x20
	mov	x2, x21
	mov	x3, x22
	mov	x4, xzr
	bl	boot_save_args

#ifdef CFG_WITH_PAGER
	adr_l	x0, __init_end	/* pointer to boot_embdata */
	ldr	w1, [x0]	/* struct boot_embdata::total_len */
	add	x0, x0, x1
	add	x0, x0, #0xfff	/* round up */
	bic	x0, x0, #0xfff	/* to next page */
	mov_imm	x1, (TEE_RAM_PH_SIZE + TEE_RAM_START)
	mov	x2, x1
#else
	adr_l	x0, __vcore_free_start
	adr_l	x1, boot_embdata_ptr
	ldr	x1, [x1]
#ifdef CFG_DYN_CONFIG
	sub	x1, x1, #THREAD_BOOT_INIT_TMP_ALLOC
#endif
	adr_l	x2, __vcore_free_end
#endif
	bl	boot_mem_init

#ifdef CFG_MEMTAG
	/*
	 * If FEAT_MTE2 is available, initialize the memtag callbacks.
	 * Tags for OP-TEE core memory are then cleared to make it safe
	 * to enable MEMTAG below.
	 */
	bl	boot_init_memtag
#endif

#ifdef CFG_CORE_ASLR
	bl	get_aslr_seed
#ifdef CFG_CORE_ASLR_SEED
	mov_imm	x0, CFG_CORE_ASLR_SEED
#endif
#else
	mov	x0, #0
#endif

	adr	x1, boot_mmu_config
	bl	core_init_mmu_map

#ifdef CFG_CORE_ASLR
	/*
	 * Process the relocation information again, this time updating
	 * for the virtual map offset. We're doing this now, before the
	 * MMU is enabled, as some of the memory will become write
	 * protected.
	 */
	ldr	x0, boot_mmu_config + CORE_MMU_CONFIG_MAP_OFFSET
	cbz	x0, 1f
	/*
	 * Update the boot_cached_mem_end address with the load offset
	 * since it was calculated before relocation.
	 */
	adr_l	x5, boot_cached_mem_end
	ldr	x6, [x5]
	add	x6, x6, x0
	str	x6, [x5]
	adr	x1, _start	/* Load address */
	bl	relocate
1:
#endif

	bl	__get_core_pos
	bl	enable_mmu
#ifdef CFG_CORE_ASLR
#if defined(CFG_DYN_CONFIG)
	/*
	 * thread_core_local holds only a single core and
	 * thread_core_count is 1, so SP_EL1 already points at the
	 * relocated thread_core_local. Record the updated pointer.
	 */
	msr	spsel, #1
	mov	x1, sp
	msr	spsel, #0
	adr_l	x0, thread_core_local
	str	x1, [x0]
#endif

	/*
	 * Update the recorded end VAs. This must be done before calling
	 * into C code to make sure that the stack pointer matches what
	 * we have in thread_core_local[].
	 */
	adr_l	x0, boot_mmu_config
	ldr	x0, [x0, #CORE_MMU_CONFIG_MAP_OFFSET]
	msr	spsel, #1
	ldr	x1, [sp, #THREAD_CORE_LOCAL_TMP_STACK_VA_END]
	add	x1, x1, x0
	str	x1, [sp, #THREAD_CORE_LOCAL_TMP_STACK_VA_END]
	ldr	x1, [sp, #THREAD_CORE_LOCAL_ABT_STACK_VA_END]
	add	x1, x1, x0
	str	x1, [sp, #THREAD_CORE_LOCAL_ABT_STACK_VA_END]
	msr	spsel, #0

	/* Update relocations recorded with boot_mem_add_reloc() */
	adr_l	x0, boot_mmu_config
	ldr	x0, [x0, #CORE_MMU_CONFIG_MAP_OFFSET]
	bl	boot_mem_relocate
	/*
	 * Reinitialize the console, since register_serial_console() has
	 * previously registered a PA and with ASLR the VA is different
	 * from the PA.
	 */
	bl	console_init
#endif

#ifdef CFG_MEMTAG
	bl	boot_clear_memtag
#endif

#ifdef CFG_NS_VIRTUALIZATION
	/*
	 * Initialize the partition tables for each partition to
	 * default_partition, which has now been relocated to a
	 * different VA.
	 */
	bl	core_mmu_set_default_prtn_tbl
#endif

	bl	boot_init_primary_early

#ifdef CFG_MEMTAG
	init_memtag_per_cpu
#endif
	bl	boot_init_primary_late

#if defined(CFG_DYN_CONFIG)
	bl	__get_core_pos

	/*
	 * Switch to the new thread_core_local and thread_core_count and
	 * keep the pointer to the new thread_core_local in x1.
	 */
	adr_l	x1, __thread_core_count_new
	ldr	x1, [x1]
	adr_l	x2, thread_core_count
	str	x1, [x2]
	adr_l	x1, __thread_core_local_new
	ldr	x1, [x1]
	adr_l	x2, thread_core_local
	str	x1, [x2]

	/*
	 * Update SP_EL0 to use the new tmp stack and update SP_EL1 to
	 * point to the new thread_core_local, and clear
	 * thread_core_local[0].stackcheck_recursion now that the stack
	 * pointer matches the recorded information.
	 */
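	/*
	 * Select this core's entry:
	 * x3 = thread_core_local + core_pos * THREAD_CORE_LOCAL_SIZE
	 */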
	mov	x2, #THREAD_CORE_LOCAL_SIZE
	/* x3 = x2 * x0 + x1 */
	madd	x3, x2, x0, x1
	ldr	x0, [x3, #THREAD_CORE_LOCAL_TMP_STACK_VA_END]
	mov	sp, x0
	msr	spsel, #1
	mov	sp, x3
	msr	spsel, #0
#endif

#ifndef CFG_NS_VIRTUALIZATION
	mov	x23, sp
	adr_l	x0, threads
	ldr	x0, [x0]
	ldr	x0, [x0, #THREAD_CTX_STACK_VA_END]
	mov	sp, x0
	bl	thread_get_core_local
	mov	x24, x0
	str	wzr, [x24, #THREAD_CORE_LOCAL_FLAGS]
#endif
	bl	boot_init_primary_runtime
#ifdef CFG_CORE_PAUTH
	adr_l	x0, threads
	ldr	x0, [x0]
	ldp	x1, x2, [x0, #THREAD_CTX_KEYS]
	write_apiakeyhi	x1
	write_apiakeylo	x2
	mrs	x0, sctlr_el1
	orr	x0, x0, #SCTLR_ENIA
	msr	sctlr_el1, x0
	isb
#endif
	bl	boot_init_primary_final

#ifndef CFG_NS_VIRTUALIZATION
	mov	x0, #THREAD_CLF_TMP
	str	w0, [x24, #THREAD_CORE_LOCAL_FLAGS]
	mov	sp, x23
#ifdef CFG_CORE_PAUTH
	ldp	x0, x1, [x24, #THREAD_CORE_LOCAL_KEYS]
	write_apiakeyhi	x0
	write_apiakeylo	x1
	isb
#endif
#endif

#ifdef _CFG_CORE_STACK_PROTECTOR
	/* Update the stack canary value */
	sub	sp, sp, #0x10
	mov	x0, sp
	mov	x1, #1
	mov	x2, #0x8
	bl	plat_get_random_stack_canaries
	ldr	x0, [sp]
	adr_l	x5, __stack_chk_guard
	str	x0, [x5]
	add	sp, sp, #0x10
#endif

	/*
	 * In case we've touched memory that secondary CPUs will use
	 * before they have turned on their D-cache, clean and
	 * invalidate the D-cache before exiting to normal world.
	 */
	adr_l	x0, __text_start
	adr_l	x1, boot_cached_mem_end
	ldr	x1, [x1]
	sub	x1, x1, x0
	bl	dcache_cleaninv_range

	/*
	 * Clear the current thread id now to allow the thread to be
	 * reused on the next entry. Matches thread_init_boot_thread()
	 * in boot.c.
	 */
#ifndef CFG_NS_VIRTUALIZATION
	bl	thread_clr_boot_thread
#endif

#ifdef CFG_CORE_FFA
	adr	x0, cpu_on_handler
	/*
	 * Compensate for the virtual map offset since cpu_on_handler()
	 * is called with the MMU off.
	 */
	ldr	x1, boot_mmu_config + CORE_MMU_CONFIG_MAP_OFFSET
	sub	x0, x0, x1
	bl	thread_spmc_register_secondary_ep
	b	thread_ffa_msg_wait
#else
	/*
	 * Pass the vector address returned from main_init. Compensate
	 * for the virtual map offset since the vector is used with the
	 * MMU off.
	 */
	ldr	x0, boot_mmu_config + CORE_MMU_CONFIG_MAP_OFFSET
	adr	x1, thread_vector_table
	sub	x1, x1, x0
	mov	x0, #TEESMC_OPTEED_RETURN_ENTRY_DONE
	smc	#0
	/* SMC should not return */
	panic_at_smc_return
#endif
END_FUNC _start
DECLARE_KEEP_INIT _start

#ifndef CFG_WITH_PAGER
	.section .identity_map.data
	.balign	8
LOCAL_DATA boot_embdata_ptr , :
	.skip	8
END_DATA boot_embdata_ptr
#endif

#if defined(CFG_CORE_ASLR) || defined(CFG_CORE_PHYS_RELOCATABLE)
LOCAL_FUNC relocate , :
	/*
	 * x0 holds relocate offset
	 * x1 holds load address
	 */
#ifdef CFG_WITH_PAGER
	adr_l	x6, __init_end
#else
	adr_l	x6, boot_embdata_ptr
	ldr	x6, [x6]
#endif
	ldp	w2, w3, [x6, #BOOT_EMBDATA_RELOC_OFFSET]

	add	x2, x2, x6	/* start of relocations */
	add	x3, x3, x2	/* end of relocations */

	/*
	 * Relocations are not formatted as Rela64; instead they are in
	 * a compressed format created by get_reloc_bin() in
	 * scripts/gen_tee_bin.py.
	 *
	 * All the R_AARCH64_RELATIVE relocations are translated into a
	 * list of 32-bit offsets from TEE_LOAD_ADDR. Each offset points
	 * at a 64-bit value which is increased with the load offset.
	 */
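	/*
	 * The loop below is roughly the following C (a sketch, with the
	 * CFG_WITH_PAGER address limit check omitted):
	 *
	 * for (uint32_t *rel = start; rel != end; rel++) {
	 *	uint64_t *va = (uint64_t *)(load_addr + *rel);
	 *	*va += reloc_offset;
	 * }
	 */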
#ifdef CFG_WITH_PAGER
	/*
	 * With pager enabled we can only relocate the pager and init
	 * parts; the rest has to be done when a page is populated.
	 */
	sub	x6, x6, x1
#endif

	b	2f
	/* Loop over the relocation addresses and process all entries */
1:	ldr	w4, [x2], #4
#ifdef CFG_WITH_PAGER
	/* Skip too large addresses */
	cmp	x4, x6
	b.ge	2f
#endif
	add	x4, x4, x1
	ldr	x5, [x4]
	add	x5, x5, x0
	str	x5, [x4]

2:	cmp	x2, x3
	b.ne	1b

	ret
END_FUNC relocate
#endif

/*
 * void enable_mmu(unsigned long core_pos);
 *
 * This function depends on being mapped within the identity map, where
 * the physical and virtual addresses are the same. After the MMU has
 * been enabled the instruction pointer is updated to continue execution
 * at the new offset instead. Stack pointers and the return address are
 * updated.
 */
LOCAL_FUNC enable_mmu , : , .identity_map
	adr	x1, boot_mmu_config
	load_xregs x1, 0, 2, 6
	/*
	 * x0 = core_pos
	 * x2 = tcr_el1
	 * x3 = mair_el1
	 * x4 = ttbr0_el1_base
	 * x5 = ttbr0_core_offset
	 * x6 = load_offset
	 */
	msr	tcr_el1, x2
	msr	mair_el1, x3

	/*
	 * ttbr0_el1 = ttbr0_el1_base + ttbr0_core_offset * core_pos
	 */
	madd	x1, x5, x0, x4
	msr	ttbr0_el1, x1
	msr	ttbr1_el1, xzr
	isb

	/* Invalidate TLB */
	tlbi	vmalle1

	/*
	 * Make sure translation table writes have drained into memory
	 * and the TLB invalidation is complete.
	 */
	dsb	sy
	isb

	/* Enable the MMU */
	mrs	x1, sctlr_el1
	orr	x1, x1, #SCTLR_M
	msr	sctlr_el1, x1
	isb

	/* Update vbar */
	mrs	x1, vbar_el1
	add	x1, x1, x6
	msr	vbar_el1, x1
	isb

	/* Invalidate instruction cache and branch predictor */
	ic	iallu
	isb

	/* Enable I and D cache */
	mrs	x1, sctlr_el1
	orr	x1, x1, #SCTLR_I
	orr	x1, x1, #SCTLR_C
	msr	sctlr_el1, x1
	isb

	/* Adjust stack pointers and return address */
	msr	spsel, #1
	add	sp, sp, x6
	msr	spsel, #0
	add	sp, sp, x6
	add	x30, x30, x6

	ret
END_FUNC enable_mmu

	.section .identity_map.data
	.balign	8
DATA boot_mmu_config , :	/* struct core_mmu_config */
	.skip	CORE_MMU_CONFIG_SIZE
END_DATA boot_mmu_config

FUNC cpu_on_handler , :
	mov	x19, x0
	mov	x20, x1
	mov	x21, x30

	adr	x0, reset_vect_table
	msr	vbar_el1, x0
	isb

	set_sctlr_el1
	isb

#ifdef CFG_PAN
	init_pan
#endif

	/* Enable aborts now that we can receive exceptions */
	msr	daifclr, #DAIFBIT_ABT

	bl	__get_core_pos
	bl	enable_mmu

#if defined(CFG_DYN_CONFIG)
	/*
	 * Update SP_EL0 to use the new tmp stack and update SP_EL1 to
	 * point to the new thread_core_local.
	 */
	bl	__get_core_pos
	adr_l	x1, thread_core_local
	ldr	x1, [x1]
	mov	x2, #THREAD_CORE_LOCAL_SIZE
	/* x3 = x2 * x0 + x1 */
	madd	x3, x2, x0, x1
	ldr	x0, [x3, #THREAD_CORE_LOCAL_TMP_STACK_VA_END]
	mov	sp, x0
	msr	spsel, #1
	mov	sp, x3
	msr	spsel, #0
#else
	/* Setup SP_EL0 and SP_EL1, SP will be set to SP_EL0 */
	set_sp
#endif

#ifdef CFG_MEMTAG
	init_memtag_per_cpu
#endif
#ifdef CFG_CORE_PAUTH
	init_pauth_secondary_cpu
#endif

	mov	x0, x19
	mov	x1, x20
#ifdef CFG_CORE_FFA
	bl	boot_cpu_on_handler
	b	thread_ffa_msg_wait
#else
	mov	x30, x21
	b	boot_cpu_on_handler
#endif
END_FUNC cpu_on_handler
DECLARE_KEEP_PAGER cpu_on_handler

LOCAL_FUNC unhandled_cpu , :
	wfi
	b	unhandled_cpu
END_FUNC unhandled_cpu

#if !defined(CFG_DYN_CONFIG)
LOCAL_DATA stack_tmp_rel , :
	.word	stack_tmp - stack_tmp_rel - STACK_TMP_GUARD
END_DATA stack_tmp_rel
#endif

	/*
	 * This macro verifies that a given vector doesn't exceed the
	 * architectural limit of 32 instructions. It is meant to be
	 * placed immediately after the last instruction in the vector
	 * and takes the vector entry as its parameter.
	 */
	.macro check_vector_size since
	.if (. - \since) > (32 * 4)
	.error "Vector exceeds 32 instructions"
	.endif
	.endm

	.section .identity_map, "ax", %progbits
	.align	11
LOCAL_FUNC reset_vect_table , :, .identity_map, , nobti
	/* -----------------------------------------------------
	 * Current EL with SP0 : 0x0 - 0x180
	 * -----------------------------------------------------
	 */
SynchronousExceptionSP0:
	b	SynchronousExceptionSP0
	check_vector_size SynchronousExceptionSP0

	.align	7
IrqSP0:
	b	IrqSP0
	check_vector_size IrqSP0

	.align	7
FiqSP0:
	b	FiqSP0
	check_vector_size FiqSP0

	.align	7
SErrorSP0:
	b	SErrorSP0
	check_vector_size SErrorSP0

	/* -----------------------------------------------------
	 * Current EL with SPx : 0x200 - 0x380
	 * -----------------------------------------------------
	 */
	.align	7
SynchronousExceptionSPx:
	b	SynchronousExceptionSPx
	check_vector_size SynchronousExceptionSPx

	.align	7
IrqSPx:
	b	IrqSPx
	check_vector_size IrqSPx

	.align	7
FiqSPx:
	b	FiqSPx
	check_vector_size FiqSPx

	.align	7
SErrorSPx:
	b	SErrorSPx
	check_vector_size SErrorSPx

	/* -----------------------------------------------------
	 * Lower EL using AArch64 : 0x400 - 0x580
	 * -----------------------------------------------------
	 */
	.align	7
SynchronousExceptionA64:
	b	SynchronousExceptionA64
	check_vector_size SynchronousExceptionA64

	.align	7
IrqA64:
	b	IrqA64
	check_vector_size IrqA64

	.align	7
FiqA64:
	b	FiqA64
	check_vector_size FiqA64

	.align	7
SErrorA64:
	b	SErrorA64
	check_vector_size SErrorA64
	/* -----------------------------------------------------
	 * Lower EL using AArch32 : 0x600 - 0x780
	 * -----------------------------------------------------
	 */
	.align	7
SynchronousExceptionA32:
	b	SynchronousExceptionA32
	check_vector_size SynchronousExceptionA32

	.align	7
IrqA32:
	b	IrqA32
	check_vector_size IrqA32

	.align	7
FiqA32:
	b	FiqA32
	check_vector_size FiqA32

	.align	7
SErrorA32:
	b	SErrorA32
	check_vector_size SErrorA32

END_FUNC reset_vect_table

BTI(emit_aarch64_feature_1_and GNU_PROPERTY_AARCH64_FEATURE_1_BTI)