=====================
Booting AArch64 Linux
=====================

Author: Will Deacon <[email protected]>

Date  : 07 September 2012

This document is based on the ARM booting document by Russell King and
is relevant to all public releases of the AArch64 Linux kernel.

The AArch64 exception model is made up of a number of exception levels
(EL0 - EL3), with EL0, EL1 and EL2 having a secure and a non-secure
counterpart.  EL2 is the hypervisor level; EL3 is the highest priority
level and exists only in secure mode.  Both are architecturally optional.

For the purposes of this document, we will use the term `boot loader`
simply to define all software that executes on the CPU(s) before control
is passed to the Linux kernel.  This may include secure monitor and
hypervisor code, or it may just be a handful of instructions for
preparing a minimal boot environment.

Essentially, the boot loader should provide (as a minimum) the
following:

1. Setup and initialise the RAM
2. Setup the device tree
3. Decompress the kernel image
4. Call the kernel image


1. Setup and initialise RAM
---------------------------

Requirement: MANDATORY

The boot loader is expected to find and initialise all RAM that the
kernel will use for volatile data storage in the system.  It performs
this in a machine dependent manner.  (It may use internal algorithms
to automatically locate and size all RAM, or it may use knowledge of
the RAM in the machine, or any other method the boot loader designer
sees fit.)

For Arm Confidential Compute Realms this includes ensuring that all
protected RAM has a Realm IPA state (RIPAS) of "RAM".


2. Setup the device tree
------------------------

Requirement: MANDATORY

The device tree blob (dtb) must be placed on an 8-byte boundary and must
not exceed 2 megabytes in size.  Since the dtb will be mapped cacheable
using blocks of up to 2 megabytes in size, it must not be placed within
any 2M region which must be mapped with any specific attributes.

NOTE: versions prior to v4.2 also require that the DTB be placed within
the 512 MB region starting at text_offset bytes below the kernel Image.
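
For illustration, a boot loader could assert these placement constraints
on its chosen DTB address before handing over.  This is a minimal sketch
only; the helper name is hypothetical, and whether the surrounding 2M
region requires special mapping attributes is platform knowledge the
boot loader must supply itself::

  #include <stdbool.h>
  #include <stdint.h>

  #define SZ_2M   (2 * 1024 * 1024)

  /* Check the documented dtb constraints: 8-byte aligned, at most 2 MB. */
  static bool dtb_placement_ok(uint64_t dtb_pa, uint64_t dtb_size)
  {
          return (dtb_pa % 8 == 0) && (dtb_size <= SZ_2M);
  }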

3. Decompress the kernel image
------------------------------

Requirement: OPTIONAL

The AArch64 kernel does not currently provide a decompressor and
therefore requires decompression (gzip etc.) to be performed by the boot
loader if a compressed Image target (e.g. Image.gz) is used.  For
bootloaders that do not implement this requirement, the uncompressed
Image target is available instead.


4. Call the kernel image
------------------------

Requirement: MANDATORY

The decompressed kernel image contains a 64-byte header as follows::

  u32 code0;                    /* Executable code */
  u32 code1;                    /* Executable code */
  u64 text_offset;              /* Image load offset, little endian */
  u64 image_size;               /* Effective Image size, little endian */
  u64 flags;                    /* kernel flags, little endian */
  u64 res2      = 0;            /* reserved */
  u64 res3      = 0;            /* reserved */
  u64 res4      = 0;            /* reserved */
  u32 magic     = 0x644d5241;   /* Magic number, little endian, "ARM\x64" */
  u32 res5;                     /* reserved (used for PE COFF offset) */


Header notes:

- As of v3.17, all fields are little endian unless stated otherwise.

- code0/code1 are responsible for branching to stext.

- when booting through EFI, code0/code1 are initially skipped.
  res5 is an offset to the PE header and the PE header has the EFI
  entry point (efi_stub_entry).  When the stub has done its work, it
  jumps to code0 to resume the normal boot process.

- Prior to v3.17, the endianness of text_offset was not specified.  In
  these cases image_size is zero and text_offset is 0x80000 in the
  endianness of the kernel.  Where image_size is non-zero image_size is
  little-endian and must be respected.  Where image_size is zero,
  text_offset can be assumed to be 0x80000.

- The flags field (introduced in v3.17) is a little-endian 64-bit field
  composed as follows:

  ============= ===============================================================
  Bit 0         Kernel endianness.  1 if BE, 0 if LE.
  Bit 1-2       Kernel Page size.

                * 0 - Unspecified.
                * 1 - 4K
                * 2 - 16K
                * 3 - 64K
  Bit 3         Kernel physical placement

                0
                  2MB aligned base should be as close as possible
                  to the base of DRAM, since memory below it is not
                  accessible via the linear mapping
                1
                  2MB aligned base such that all image_size bytes
                  counted from the start of the image are within
                  the 48-bit addressable range of physical memory
  Bits 4-63     Reserved.
  ============= ===============================================================

- When image_size is zero, a bootloader should attempt to keep as much
  memory as possible free for use by the kernel immediately after the
  end of the kernel image.  The amount of space required will vary
  depending on selected features, and is effectively unbound.

The Image must be placed text_offset bytes from a 2MB aligned base
address anywhere in usable system RAM and called there.  The region
between the 2 MB aligned base address and the start of the image has no
special significance to the kernel, and may be used for other purposes.
At least image_size bytes from the start of the image must be free for
use by the kernel.
NOTE: versions prior to v4.6 cannot make use of memory below the
physical offset of the Image so it is recommended that the Image be
placed as close as possible to the start of system RAM.

If an initrd/initramfs is passed to the kernel at boot, it must reside
entirely within a 1 GB aligned physical memory window of up to 32 GB in
size that fully covers the kernel Image as well.

Any memory described to the kernel (even that below the start of the
image) which is not marked as reserved from the kernel (e.g., with a
memreserve region in the device tree) will be considered as available to
the kernel.
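
The sketch below illustrates how a boot loader might apply these rules:
it validates the magic value, decodes the flags field and derives a load
address from a 2MB aligned base.  It is illustrative only and assumes a
little-endian boot loader; the struct mirrors the header above, but the
type, macro and function names are hypothetical and not part of the
kernel ABI::

  #include <stdint.h>

  #define SZ_2M           0x200000ULL
  #define ARM64_MAGIC     0x644d5241u     /* "ARM\x64", little endian */

  struct arm64_image_header {
          uint32_t code0, code1;
          uint64_t text_offset;           /* little endian */
          uint64_t image_size;            /* little endian */
          uint64_t flags;                 /* little endian */
          uint64_t res2, res3, res4;
          uint32_t magic;                 /* little endian */
          uint32_t res5;                  /* PE/COFF offset when EFI-enabled */
  };

  static int choose_load_address(const struct arm64_image_header *h,
                                 uint64_t ram_base, uint64_t *load_addr)
  {
          uint64_t text_offset = h->text_offset;
          uint64_t base;

          if (h->magic != ARM64_MAGIC)
                  return -1;

          /* Pre-v3.17 images: image_size is zero, assume a 0x80000 offset. */
          if (h->image_size == 0)
                  text_offset = 0x80000;

          /* flags: bit 0 endianness, bits 1-2 page size, bit 3 placement. */
          int kernel_is_be   = h->flags & 0x1;
          int page_size_code = (h->flags >> 1) & 0x3;  /* 1=4K, 2=16K, 3=64K */
          int anywhere       = (h->flags >> 3) & 0x1;  /* 1: any 48-bit base */

          (void)kernel_is_be; (void)page_size_code; (void)anywhere;

          /*
           * Place the image text_offset bytes above a 2MB aligned base.
           * Choosing the lowest suitable base keeps memory below the
           * image usable by pre-v4.6 kernels, as recommended above.
           */
          base = (ram_base + SZ_2M - 1) & ~(SZ_2M - 1);
          *load_addr = base + text_offset;
          return 0;
  }

The caller must additionally ensure that at least image_size bytes from
the start of the image remain free for the kernel, and that any
initrd/initramfs falls within the 1 GB aligned window described above.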

Before jumping into the kernel, the following conditions must be met:

- Quiesce all DMA capable devices so that memory does not get
  corrupted by bogus network packets or disk data.  This will save
  you many hours of debug.

- Primary CPU general-purpose register settings:

  - x0 = physical address of device tree blob (dtb) in system RAM.
  - x1 = 0 (reserved for future use)
  - x2 = 0 (reserved for future use)
  - x3 = 0 (reserved for future use)

- CPU mode

  All forms of interrupts must be masked in PSTATE.DAIF (Debug, SError,
  IRQ and FIQ).
  The CPU must be in non-secure state, either in EL2 (RECOMMENDED in order
  to have access to the virtualisation extensions), or in EL1.

- Caches, MMUs

  The MMU must be off.

  The instruction cache may be on or off, and must not hold any stale
  entries corresponding to the loaded kernel image.

  The address range corresponding to the loaded kernel image must be
  cleaned to the PoC.  In the presence of a system cache or other
  coherent masters with caches enabled, this will typically require
  cache maintenance by VA rather than set/way operations; a sketch of
  such a clean-by-VA loop follows the requirements list below.
  System caches which respect the architected cache maintenance by VA
  operations must be configured and may be enabled.
  System caches which do not respect architected cache maintenance by VA
  operations (not recommended) must be configured and disabled.

- Architected timers

  CNTFRQ must be programmed with the timer frequency and CNTVOFF must
  be programmed with a consistent value on all CPUs.  If entering the
  kernel at EL1, CNTHCTL_EL2 must have EL1PCTEN (bit 0) set where
  available.

- Coherency

  All CPUs to be booted by the kernel must be part of the same coherency
  domain on entry to the kernel.  This may require IMPLEMENTATION DEFINED
  initialisation to enable the receiving of maintenance operations on
  each CPU.

- System registers

  All writable architected system registers at or below the exception
  level where the kernel image will be entered must be initialised by
  software at a higher exception level to prevent execution in an UNKNOWN
  state.

  For all systems:

  - If EL3 is present:

    - SCR_EL3.FIQ must have the same value across all CPUs the kernel is
      executing on.
    - The value of SCR_EL3.FIQ must be the same as the one present at boot
      time whenever the kernel is executing.

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.HCE (bit 8) must be initialised to 0b1.

  For systems with a GICv3 interrupt controller to be used in v3 mode:

  - If EL3 is present:

    - ICC_SRE_EL3.Enable (bit 3) must be initialised to 0b1.
    - ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b1.
    - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
      all CPUs the kernel is executing on, and must stay constant
      for the lifetime of the kernel.

  - If the kernel is entered at EL1:

    - ICC_SRE_EL2.Enable (bit 3) must be initialised to 0b1.
    - ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1.

  - The DT or ACPI tables must describe a GICv3 interrupt controller.

  For systems with a GICv3 interrupt controller to be used in
  compatibility (v2) mode:

  - If EL3 is present:

    ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b0.

  - If the kernel is entered at EL1:

    ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.

  - The DT or ACPI tables must describe a GICv2 interrupt controller.

  For CPUs with pointer authentication functionality:

  - If EL3 is present:

    - SCR_EL3.APK (bit 16) must be initialised to 0b1
    - SCR_EL3.API (bit 17) must be initialised to 0b1

  - If the kernel is entered at EL1:

    - HCR_EL2.APK (bit 40) must be initialised to 0b1
    - HCR_EL2.API (bit 41) must be initialised to 0b1

  For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:

  - If EL3 is present:

    - CPTR_EL3.TAM (bit 30) must be initialised to 0b0
    - CPTR_EL2.TAM (bit 30) must be initialised to 0b0
    - AMCNTENSET0_EL0 must be initialised to 0b1111
    - AMCNTENSET1_EL0 must be initialised to a platform specific value
      having 0b1 set for the corresponding bit for each of the auxiliary
      counters present.

  - If the kernel is entered at EL1:

    - AMCNTENSET0_EL0 must be initialised to 0b1111
    - AMCNTENSET1_EL0 must be initialised to a platform specific value
      having 0b1 set for the corresponding bit for each of the auxiliary
      counters present.

  For CPUs with the Fine Grained Traps (FEAT_FGT) extension present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.

  For CPUs with the Fine Grained Traps 2 (FEAT_FGT2) extension present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.FGTEn2 (bit 59) must be initialised to 0b1.

  For CPUs with support for HCRX_EL2 (FEAT_HCX) present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.HXEn (bit 38) must be initialised to 0b1.

  For CPUs with Advanced SIMD and floating point support:

  - If EL3 is present:

    - CPTR_EL3.TFP (bit 10) must be initialised to 0b0.

  - If EL2 is present and the kernel is entered at EL1:

    - CPTR_EL2.TFP (bit 10) must be initialised to 0b0.

  For CPUs with the Scalable Vector Extension (FEAT_SVE) present:

  - If EL3 is present:

    - CPTR_EL3.EZ (bit 8) must be initialised to 0b1.

    - ZCR_EL3.LEN must be initialised to the same value for all CPUs the
      kernel is executed on.

  - If the kernel is entered at EL1 and EL2 is present:

    - CPTR_EL2.TZ (bit 8) must be initialised to 0b0.

    - CPTR_EL2.ZEN (bits 17:16) must be initialised to 0b11.

    - ZCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  For CPUs with the Scalable Matrix Extension (FEAT_SME):

  - If EL3 is present:

    - CPTR_EL3.ESM (bit 12) must be initialised to 0b1.

    - SCR_EL3.EnTP2 (bit 41) must be initialised to 0b1.

    - SMCR_EL3.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  - If the kernel is entered at EL1 and EL2 is present:

    - CPTR_EL2.TSM (bit 12) must be initialised to 0b0.

    - CPTR_EL2.SMEN (bits 25:24) must be initialised to 0b11.

    - SCTLR_EL2.EnTP2 (bit 60) must be initialised to 0b1.

    - SMCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

    - HFGRTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.

    - HFGWTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.

    - HFGRTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.

    - HFGWTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.

  For CPUs with the Scalable Matrix Extension FA64 feature (FEAT_SME_FA64):

  - If EL3 is present:

    - SMCR_EL3.FA64 (bit 31) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - SMCR_EL2.FA64 (bit 31) must be initialised to 0b1.

  For CPUs with the Memory Tagging Extension feature (FEAT_MTE2):

  - If EL3 is present:

    - SCR_EL3.ATA (bit 26) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HCR_EL2.ATA (bit 56) must be initialised to 0b1.

  For CPUs with the Scalable Matrix Extension version 2 (FEAT_SME2):

  - If EL3 is present:

    - SMCR_EL3.EZT0 (bit 30) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.

  For CPUs with the Performance Monitors Extension (FEAT_PMUv3p9):

  - If EL3 is present:

    - MDCR_EL3.EnPM2 (bit 7) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HDFGRTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
    - HDFGRTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
    - HDFGRTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.

    - HDFGWTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
    - HDFGWTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
    - HDFGWTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.

  For CPUs with Memory Copy and Memory Set instructions (FEAT_MOPS):

  - If the kernel is entered at EL1 and EL2 is present:

    - HCRX_EL2.MSCEn (bit 11) must be initialised to 0b1.

    - HCRX_EL2.MCE2 (bit 10) must be initialised to 0b1 and the hypervisor
      must handle MOPS exceptions as described in :ref:`arm64_mops_hyp`.

  For CPUs with the Extended Translation Control Register feature (FEAT_TCR2):

  - If EL3 is present:

    - SCR_EL3.TCR2En (bit 43) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HCRX_EL2.TCR2En (bit 14) must be initialised to 0b1.

  For CPUs with the Stage 1 Permission Indirection Extension feature (FEAT_S1PIE):

  - If EL3 is present:

    - SCR_EL3.PIEn (bit 45) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HFGRTR_EL2.nPIR_EL1 (bit 58) must be initialised to 0b1.

    - HFGWTR_EL2.nPIR_EL1 (bit 58) must be initialised to 0b1.

    - HFGRTR_EL2.nPIRE0_EL1 (bit 57) must be initialised to 0b1.

    - HFGWTR_EL2.nPIRE0_EL1 (bit 57) must be initialised to 0b1.

  For CPUs with Guarded Control Stacks (FEAT_GCS):

  - GCSCR_EL1 must be initialised to 0.

  - GCSCRE0_EL1 must be initialised to 0.

  - If EL3 is present:

    - SCR_EL3.GCSEn (bit 39) must be initialised to 0b1.

  - If EL2 is present:

    - GCSCR_EL2 must be initialised to 0.

  - If the kernel is entered at EL1 and EL2 is present:

    - HCRX_EL2.GCSEn must be initialised to 0b1.

    - HFGITR_EL2.nGCSEPP (bit 59) must be initialised to 0b1.

    - HFGITR_EL2.nGCSSTR_EL1 (bit 58) must be initialised to 0b1.

    - HFGITR_EL2.nGCSPUSHM_EL1 (bit 57) must be initialised to 0b1.

    - HFGRTR_EL2.nGCS_EL1 (bit 53) must be initialised to 0b1.

    - HFGRTR_EL2.nGCS_EL0 (bit 52) must be initialised to 0b1.

    - HFGWTR_EL2.nGCS_EL1 (bit 53) must be initialised to 0b1.

    - HFGWTR_EL2.nGCS_EL0 (bit 52) must be initialised to 0b1.

  For CPUs with debug architecture, i.e. FEAT_Debugv8pN (all versions):

  - If EL3 is present:

    - MDCR_EL3.TDA (bit 9) must be initialised to 0b0

  For CPUs with FEAT_PMUv3:

  - If EL3 is present:

    - MDCR_EL3.TPM (bit 6) must be initialised to 0b0

The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs.  All CPUs must
enter the kernel in the same exception level.  Where the values documented
disable traps it is permissible for these traps to be enabled so long as
those traps are handled transparently by higher exception levels as though
the values documented were set.
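
The clean-to-PoC requirement for the loaded Image can be met with a
simple clean by VA loop, as in the following sketch.  It is illustrative
only: the helper name is hypothetical, and it assumes it runs on the
booting CPU while the boot loader's own cacheable mapping of the image
is still in place (i.e. before the MMU and caches are torn down for
kernel entry)::

  #include <stddef.h>
  #include <stdint.h>

  /*
   * Hypothetical helper: clean [start, start + size) to the Point of
   * Coherency using DC CVAC, i.e. cache maintenance by VA rather than
   * by set/way.  CTR_EL0.DminLine (bits [19:16]) encodes the smallest
   * D-cache line size as log2(words), so (4 << DminLine) bytes is a
   * safe stride.
   */
  static void clean_dcache_to_poc(uintptr_t start, size_t size)
  {
          uint64_t ctr;
          uintptr_t line, addr, end;

          __asm__ volatile("mrs %0, ctr_el0" : "=r"(ctr));
          line = 4u << ((ctr >> 16) & 0xf);

          end = start + size;
          for (addr = start & ~(line - 1); addr < end; addr += line)
                  __asm__ volatile("dc cvac, %0" : : "r"(addr) : "memory");

          __asm__ volatile("dsb sy" : : : "memory");
  }

A boot loader would typically invoke this over the range occupied by the
loaded Image before branching to the kernel.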

The boot loader is expected to enter the kernel on each CPU in the
following manner:

- The primary CPU must jump directly to the first instruction of the
  kernel image.  The device tree blob passed by this CPU must contain
  an 'enable-method' property for each cpu node.  The supported
  enable-methods are described below.

  It is expected that the bootloader will generate these device tree
  properties and insert them into the blob prior to kernel entry.

- CPUs with a "spin-table" enable-method must have a 'cpu-release-addr'
  property in their cpu node.  This property identifies a
  naturally-aligned 64-bit zero-initialised memory location.

  These CPUs should spin outside of the kernel in a reserved area of
  memory (communicated to the kernel by a /memreserve/ region in the
  device tree) polling their cpu-release-addr location, which must be
  contained in the reserved region.  A wfe instruction may be inserted
  to reduce the overhead of the busy-loop and a sev will be issued by
  the primary CPU.  When a read of the location pointed to by the
  cpu-release-addr returns a non-zero value, the CPU must jump to this
  value.  The value will be written as a single 64-bit little-endian
  value, so CPUs must convert the read value to their native endianness
  before jumping to it.  A sketch of such a spin loop is given at the
  end of this document.

- CPUs with a "psci" enable method should remain outside of
  the kernel (i.e. outside of the regions of memory described to the
  kernel in the memory node, or in a reserved area of memory described
  to the kernel by a /memreserve/ region in the device tree).  The
  kernel will issue CPU_ON calls as described in ARM document number ARM
  DEN 0022A ("Power State Coordination Interface System Software on ARM
  processors") to bring CPUs into the kernel.

  The device tree should contain a 'psci' node, as described in
  Documentation/devicetree/bindings/arm/psci.yaml.

- Secondary CPU general-purpose register settings

  - x0 = 0 (reserved for future use)
  - x1 = 0 (reserved for future use)
  - x2 = 0 (reserved for future use)
  - x3 = 0 (reserved for future use)
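
For the "spin-table" enable-method, the secondary-CPU side of the
protocol described above is small.  The following is illustrative only:
real boot loaders usually implement it in a few assembly instructions
placed in the /memreserve/'d region, the function name is hypothetical,
and a little-endian boot loader is assumed (a big-endian one would need
to byte-swap the value it reads)::

  #include <stdint.h>

  /*
   * Hypothetical spin-table wait loop for a secondary CPU: poll the
   * naturally-aligned, zero-initialised cpu-release-addr location,
   * using wfe to reduce the cost of the busy-loop, and jump to the
   * value once it becomes non-zero.
   */
  static void __attribute__((noreturn))
  spin_table_wait(volatile uint64_t *release_addr)
  {
          uint64_t entry;

          while ((entry = *release_addr) == 0)
                  __asm__ volatile("wfe" ::: "memory");

          /* x0-x3 must be zero on entry for secondary CPUs. */
          ((void (*)(uint64_t, uint64_t, uint64_t, uint64_t))
           (uintptr_t)entry)(0, 0, 0, 0);
          __builtin_unreachable();
  }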