# Welcome to ACO

ACO (short for *AMD compiler*) is a back-end compiler for AMD GCN / RDNA GPUs, based on the NIR compiler infrastructure.
Simply put, ACO translates shader programs from the NIR intermediate representation into a GCN / RDNA binary which the GPU can execute.

## Motivation

Why did we choose to develop a new compiler backend?

1. We'd like to give gamers a fluid, stutter-free experience, so we prioritize compilation speed.
2. Good divergence analysis allows us to better optimize runtime performance.
3. Issues can be fixed within mesa releases, independently of the schedule of other projects.

## Control flow

Modern GPUs are SIMD machines that execute the shader in parallel.
In the case of GCN / RDNA, the parallelism is achieved by executing the shader on several waves, and each wave has several lanes (32 or 64).
When every lane executes exactly the same instructions and takes the same path, the control flow is uniform;
when some lanes take one path while other lanes take a different path, it is divergent.

Each hardware lane corresponds to a shader invocation from a software perspective.

The hardware doesn't directly support divergence,
so when control flow diverges, the GPU must execute both code paths, each with some lanes disabled.
This is why divergence is a performance concern in shader programming.

ACO deals with divergent control flow by maintaining two control flow graphs (CFG):

* logical CFG - directly translated from NIR and shows the intended control flow of the program.
* linear CFG - created according to Whole-Function Vectorization by Ralf Karrenberg and Sebastian Hack.
  The linear CFG represents how the program is physically executed on the GPU and may contain additional blocks for control flow handling and to avoid critical edges.
  Note that all nodes of the logical CFG also participate in the linear CFG, but not vice versa.

## Compilation phases

#### Instruction Selection

Instruction selection is based around the divergence analysis and works in three passes on the NIR shader.

1. The divergence analysis pass calculates, for each SSA definition, whether its value is guaranteed to be uniform across all threads of the workgroup.
2. We determine the register class for each SSA definition.
3. Actual instruction selection. The advanced divergence analysis allows for better usage of the scalar unit, scalar memory loads, and the scalar register file.

We have two types of instructions:

* Hardware instructions as specified by the GCN / RDNA instruction set architecture manuals.
* Pseudo instructions, which are helpers that encapsulate more complex functionality.
  They eventually get lowered to real hardware instructions.

Each instruction can have operands (temporaries that it reads) and definitions (temporaries that it writes).
Temporaries can be fixed to a specific register, or just specify a register class (either a single register, or a vector of several registers).

#### Value Numbering

The value numbering pass is necessary for two reasons: NIR has no representation of descriptor loads,
and every NIR instruction that gets emitted as multiple ACO instructions also has potential for CSE.
This pass performs dominator-tree value numbering.

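As an illustration of the idea, here is a minimal sketch of dominator-tree value numbering over a toy SSA IR. The `Instr`/`Block` types and `run_dvn` are hypothetical and only hint at the approach; they are not ACO's actual data structures.

```cpp
// Minimal sketch of dominator-tree value numbering on a toy SSA IR.
// All names here (Instr, Block, run_dvn) are hypothetical, not ACO's API.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Instr {
   std::string opcode;
   std::vector<uint32_t> operands; /* temporaries read */
   uint32_t def;                   /* temporary written */
};

struct Block {
   std::vector<Instr> instrs;
   std::vector<Block*> dom_children; /* children in the dominator tree */
};

using ExprTable = std::map<std::pair<std::string, std::vector<uint32_t>>, uint32_t>;
using RenameMap = std::map<uint32_t, uint32_t>;

/* Walk the dominator tree. Expressions found in a dominating block are always
 * available, so an identical expression further down can reuse the earlier
 * definition. Passing the table by value scopes it to the current subtree.
 * The rename map can be shared across the whole walk: in SSA, a definition
 * dominates its uses, so stale entries from sibling subtrees are never hit. */
void run_dvn(Block* block, ExprTable table, RenameMap& rename)
{
   for (Instr& instr : block->instrs) {
      for (uint32_t& op : instr.operands) {
         auto it = rename.find(op);
         if (it != rename.end())
            op = it->second; /* use the canonical temporary */
      }
      auto key = std::make_pair(instr.opcode, instr.operands);
      auto res = table.emplace(key, instr.def);
      if (!res.second)
         rename[instr.def] = res.first->second; /* redundant: reuse earlier def
                                                   (a real pass also removes it) */
   }
   for (Block* child : block->dom_children)
      run_dvn(child, table, rename);
}

int main()
{
   /* %2 = add %0 %1;  %3 = add %0 %1  ->  %3 is redundant and maps to %2 */
   Block b;
   b.instrs = {{"v_add_f32", {0, 1}, 2}, {"v_add_f32", {0, 1}, 3}};
   RenameMap rename;
   run_dvn(&b, ExprTable(), rename);
   for (auto& r : rename)
      std::cout << "%" << r.first << " -> %" << r.second << "\n"; /* %3 -> %2 */
}
```
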
#### Optimization

In this phase, simpler instructions are combined into more complex instructions (like the different versions of multiply-add, as well as neg, abs, clamp, and output modifiers), constants are inlined, moves are eliminated, etc.
Exactly which optimizations are performed depends on the hardware for which the shader is being compiled.

#### Setup of reduction temporaries

This pass is responsible for making sure that register allocation is correct for reductions, by adding pseudo instructions that utilize linear VGPRs.
When a temporary has a linear VGPR register class, this means that the variable is considered *live* in the linear control flow graph.

#### Insert exec mask

In the GCN/RDNA architecture, there is a special register called `exec` which is used for manually controlling which VALU threads (aka. *lanes*) are active. The value of `exec` has to change in divergent branches, loops, etc., and it needs to be restored after the branch or loop is complete. This pass ensures that the correct lanes are active in every branch.

#### Live-Variable Analysis

A live-variable analysis is used to calculate the register need of the shader.
This information is used for spilling and scheduling before register allocation.

#### Spilling

First, we lower the shader program to CSSA form.
Then, if the register demand exceeds the global limit, this pass lowers register usage by temporarily storing excess scalar values in free vector registers, or excess vector values in scratch memory, and reloading them when needed. It is based on the paper "Register Spilling and Live-Range Splitting for SSA-Form Programs".

#### Instruction Scheduling

Scheduling is another NP-complete problem, and basically all known heuristics suffer from unpredictable changes in register pressure. For that reason, the implemented scheduler does not completely re-schedule all instructions; it only aims to move memory loads up as far as possible without exceeding the maximum register limit for the pre-calculated wave count. This works because ILP is very limited on GCN. The approach looks promising so far.

#### Register Allocation

The register allocator works on SSA (as opposed to LLVM's, which works on virtual registers). The SSA properties guarantee that there are always as many registers available as needed. The problem is that some instructions require a vector of neighboring registers to be available, but the free registers might be scattered. In this case, the register allocator inserts shuffle code (moving some temporaries to other registers) to make space for the variable. The assumption is that it is (almost) always better to have a few more moves than to sacrifice a wave. The RA does SSA reconstruction on the fly, which makes its runtime linear.

#### SSA Elimination

The next step is a pass out of SSA: it inserts parallel copies at the end of blocks to match the semantics of the phi nodes.

#### Lower to HW instructions

Most pseudo instructions are lowered to actual machine instructions.
These are mostly parallel copy instructions created by instruction selection or register allocation, and spill/reload code.

#### ILP Scheduling

This second scheduler works on registers rather than SSA values to determine dependencies. It implements a forward list scheduling algorithm using a partial dependency graph of a few instructions at a time, and aims to create larger memory clauses and improve ILP.

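For a rough picture of what forward list scheduling means here, the sketch below schedules a tiny, hand-built dependency graph. The `Node` layout and the "prefer memory loads" priority are simplifications made up for the example; they are not ACO's actual data structures or heuristics.

```cpp
// Minimal sketch of forward list scheduling over a toy dependency graph.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Node {
   std::string name;
   bool is_mem_load;                 // prefer to cluster these into clauses
   std::vector<unsigned> successors; // instructions that depend on this one
   unsigned num_preds = 0;           // unscheduled predecessors (filled in below)
};

// Repeatedly pick a "ready" node (all predecessors already scheduled)
// according to a priority function and append it to the schedule.
std::vector<unsigned> list_schedule(std::vector<Node> nodes)
{
   for (const Node& n : nodes)
      for (unsigned s : n.successors)
         nodes[s].num_preds++;

   std::vector<unsigned> ready, order;
   for (unsigned i = 0; i < nodes.size(); i++)
      if (nodes[i].num_preds == 0)
         ready.push_back(i);

   while (!ready.empty()) {
      // Toy priority: among ready nodes, prefer memory loads so that
      // consecutive loads can form larger clauses; otherwise keep source order.
      auto it = std::find_if(ready.begin(), ready.end(),
                             [&](unsigned i) { return nodes[i].is_mem_load; });
      if (it == ready.end())
         it = ready.begin();
      unsigned n = *it;
      ready.erase(it);
      order.push_back(n);
      for (unsigned s : nodes[n].successors)
         if (--nodes[s].num_preds == 0)
            ready.push_back(s);
   }
   return order;
}

int main()
{
   // load0 -> add0, load1 -> add1, {add0, add1} -> mul
   std::vector<Node> nodes = {
      {"load0", true, {2}}, {"load1", true, {3}},
      {"add0", false, {4}}, {"add1", false, {4}},
      {"mul", false, {}},
   };
   for (unsigned i : list_schedule(nodes))
      std::cout << nodes[i].name << " "; // load0 load1 add0 add1 mul
   std::cout << "\n";
}
```
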
#### Insert wait states

GCN requires some wait states to be manually inserted in order to ensure correct behavior on memory instructions and some register dependencies.
This means that we need to insert `s_waitcnt` instructions (and their variants) so that the shader program waits until, e.g., a memory operation is complete.

#### Resolve hazards and insert NOPs

Some instructions require wait states or other instructions to resolve hazards which are not handled by the hardware.
This pass makes sure that no known hazards occur.

#### Emit program - Assembler

The assembler emits the actual binary that will be sent to the hardware for execution. ACO's assembler is straightforward because all instructions have their format, opcode, registers, and potential fields already available, so it only needs to cater to the differences between the hardware generations.

## Supported shader stages

Hardware stages (as executed on the chip) don't exactly match software stages (as defined in OpenGL / Vulkan).
Which software stage gets executed on which hardware stage depends on what kind of software stages are present in the current pipeline.

An important difference is that VS is always the first stage to run in SW models,
whereas HW VS refers to the last HW stage before fragment shading in GCN/RDNA terminology.
That's why, among other things, the HW VS is no longer used to execute the SW VS when tessellation or geometry shading is used.

#### Glossary of software stages

* VS = Vertex Shader
* TCS = Tessellation Control Shader, equivalent to D3D HS = Hull Shader
* TES = Tessellation Evaluation Shader, equivalent to D3D DS = Domain Shader
* GS = Geometry Shader
* FS = Fragment Shader, equivalent to D3D PS = Pixel Shader
* CS = Compute Shader
* TS = Task Shader
* MS = Mesh Shader

#### Glossary of hardware stages

* LS = Local Shader (merged into HS on GFX9+), only runs the SW VS when tessellation is used
* HS = Hull Shader, the HW equivalent of a Tessellation Control Shader, runs before the fixed-function hardware performs tessellation
* ES = Export Shader (merged into GS on GFX9+), if there is a GS in the SW pipeline, the preceding stage (i.e. SW VS or SW TES) always has to run on this HW stage
* GS = Geometry Shader, also known as legacy GS
* VS = Vertex Shader, **not equivalent to SW VS**: when there is a GS in the SW pipeline this stage runs a "GS copy" shader, otherwise it always runs the SW stage before FS
* NGG = Next Generation Geometry, a new hardware stage that replaces legacy HW GS and HW VS on RDNA GPUs
* PS = Pixel Shader, the HW equivalent to SW FS
* CS = Compute Shader

##### Notes about HW VS and the "GS copy" shader

HW PS reads its inputs from a special ring buffer called the Parameter Cache (PC) that only HW VS can write to, using export instructions.
However, the legacy GS stores its outputs in VRAM (before GFX10/NGG).
So in order for HW PS to be able to read the GS outputs, we must run something on the HW VS stage which reads the GS outputs
from VRAM and exports them to the PC. This is what we call a "GS copy" shader.
From a HW perspective the "GS copy" shader is in fact a VS (it runs on the HW VS stage),
but from a SW perspective it's not part of the traditional pipeline;
it's just some "glue code" that we need for the outputs to play nicely.

On GFX10/NGG this limitation no longer exists, because NGG can export directly to the PC.

##### Notes about merged shaders

The merged stages on GFX9 (and GFX10/legacy) are: LSHS and ESGS. On GFX10/NGG, ESGS is merged with HW VS into NGG.

This might be confusing due to a mismatch between the number of invocations of these shaders.
For example, ES is per-vertex, but GS is per-primitive.
This is why merged shaders get an argument called `merged_wave_info` which tells how many invocations each part needs,
and there is some code at the beginning of each part to ensure the correct number of invocations by disabling some threads.
So, think about these as two independent shader programs slapped together.

### Which software stage runs on which hardware stage?

#### Graphics Pipeline

##### GFX6-8:

* Each SW stage has its own HW stage
* LS and HS share the same LDS space, so LS can store its output to LDS, where HS can read it
* HS, ES, GS outputs are stored in VRAM, and the next stage reads them from VRAM
* GS outputs go to VRAM, so they have to be copied by a GS copy shader running on the HW VS stage

| GFX6-8 HW stages:        | LS  | HS  | ES  | GS  | VS      | PS | ACO terminology |
| ------------------------:|:----|:----|:----|:----|:--------|:---|:----------------|
| SW stages: only VS+PS:   |     |     |     |     | VS      | FS | `vertex_vs`, `fragment_fs` |
| with tess:               | VS  | TCS |     |     | TES     | FS | `vertex_ls`, `tess_control_hs`, `tess_eval_vs`, `fragment_fs` |
| with GS:                 |     |     | VS  | GS  | GS copy | FS | `vertex_es`, `geometry_gs`, `gs_copy_vs`, `fragment_fs` |
| with both:               | VS  | TCS | TES | GS  | GS copy | FS | `vertex_ls`, `tess_control_hs`, `tess_eval_es`, `geometry_gs`, `gs_copy_vs`, `fragment_fs` |

##### GFX9+ (including GFX10/legacy):

* HW LS and HS stages are merged, and the merged shader still uses LDS in the same way as before
* HW ES and GS stages are merged, so ES outputs can go to LDS instead of VRAM
* LSHS outputs and ESGS outputs are still stored in VRAM, so a GS copy shader is still necessary

| GFX9+ HW stages:         | LSHS      | ESGS      | VS      | PS | ACO terminology |
| ------------------------:|:----------|:----------|:--------|:---|:----------------|
| SW stages: only VS+PS:   |           |           | VS      | FS | `vertex_vs`, `fragment_fs` |
| with tess:               | VS + TCS  |           | TES     | FS | `vertex_tess_control_hs`, `tess_eval_vs`, `fragment_fs` |
| with GS:                 |           | VS + GS   | GS copy | FS | `vertex_geometry_gs`, `gs_copy_vs`, `fragment_fs` |
| with both:               | VS + TCS  | TES + GS  | GS copy | FS | `vertex_tess_control_hs`, `tess_eval_geometry_gs`, `gs_copy_vs`, `fragment_fs` |

##### NGG (GFX10+ only):

* HW GS and VS stages are now merged, and NGG can export directly to the PC
* GS copy shaders are no longer needed

| GFX10/NGG HW stages:     | LSHS      | NGG       | PS | ACO terminology |
| ------------------------:|:----------|:----------|:---|:----------------|
| SW stages: only VS+PS:   |           | VS        | FS | `vertex_ngg`, `fragment_fs` |
| with tess:               | VS + TCS  | TES       | FS | `vertex_tess_control_hs`, `tess_eval_ngg`, `fragment_fs` |
| with GS:                 |           | VS + GS   | FS | `vertex_geometry_ngg`, `fragment_fs` |
| with both:               | VS + TCS  | TES + GS  | FS | `vertex_tess_control_hs`, `tess_eval_geometry_ngg`, `fragment_fs` |

#### Mesh Shading Graphics Pipeline

GFX10.3+:

* TS runs as a CS and stores its output payload to VRAM
* MS runs on NGG, loads its inputs from VRAM and stores its outputs to LDS, then the PC
* Pixel Shaders work the same way as before

| GFX10.3+ HW stages       | CS    | NGG   | PS | ACO terminology |
| ------------------------:|:------|:------|:---|:----------------|
| SW stages: only MS+PS:   |       | MS    | FS | `mesh_ngg`, `fragment_fs` |
| with task:               | TS    | MS    | FS | `task_cs`, `mesh_ngg`, `fragment_fs` |

#### Compute pipeline

GFX6-10:

* Note that the SW CS always runs on the HW CS stage on all HW generations.

| GFX6-10 HW stage         | CS   | ACO terminology |
| ------------------------:|:-----|:----------------|
| SW stage                 | CS   | `compute_cs`    |

## How to debug

Handy `RADV_DEBUG` options that help with ACO debugging:

* `nocache` - you always want to use this when debugging, otherwise you risk using a broken shader from the cache.
* `shaders` - makes ACO print the IR after register allocation, as well as the disassembled shader binary.
* `metashaders` - does the same thing as `shaders` but for built-in RADV shaders.
* `preoptir` - makes ACO print the final NIR shader before instruction selection, as well as the ACO IR after instruction selection.
* `nongg` - disables NGG support.

We also have `ACO_DEBUG` options:

* `validateir` - Validates the ACO IR between compilation stages. Enabled by default in debug builds and disabled in release builds.
* `validatera` - Performs a RA (register allocation) validation.
* `force-waitcnt` - Forces ACO to emit a wait state after each instruction when there is something to wait for. Harms performance.
* `novn` - Disables the ACO value numbering stage.
* `noopt` - Disables the ACO optimizer.
* `nosched` - Disables the ACO pre-RA and post-RA scheduler.
* `nosched-ilp` - Disables the ACO post-RA ILP scheduler.

Note that you need to **combine these options into a comma-separated list**, for example `RADV_DEBUG=nocache,shaders`; otherwise only the last one will take effect. (This is how all environment variables work, yet it is a common mistake.) Example:

```
RADV_DEBUG=nocache,shaders ACO_DEBUG=validateir,validatera vkcube
```

### Using GCC sanitizers

GCC has several sanitizers which can help figure out hard-to-diagnose issues. To use these, you need to pass
the `-Db_sanitize` option to `meson` when building mesa. For example, `-Db_sanitize=undefined` will add support for
the undefined behavior sanitizer.

### Hardened builds and libstdc++ assertions

Several Linux distributions use "hardened" builds, meaning that downstream packaging adds several special compiler flags
which are not used in mesa builds by default. These may be responsible for
some bug reports of inexplicable crashes with assertion failures that you can't reproduce.

Most notable are the libstdc++ assertion and debug-mode flags, which you can use by adding the `-D_GLIBCXX_ASSERTIONS=1` and
`-D_GLIBCXX_DEBUG=1` flags.

To see the full list of downstream compiler flags, you can use e.g. `rpm --eval "%optflags"`
on Red Hat based distros like Fedora.

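To give a concrete idea of what these flags change, here is a small, self-contained example (not taken from the ACO codebase): with `_GLIBCXX_ASSERTIONS` defined, libstdc++ checks preconditions such as out-of-bounds `std::vector` indexing, so code whose undefined behavior goes unnoticed in a default build aborts with an assertion failure in a hardened build.

```cpp
// Hypothetical reproducer: compile with and without -D_GLIBCXX_ASSERTIONS.
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
   std::vector<int> v(4);
   std::size_t i = 4;          // one past the end: undefined behavior
   std::cout << v[i] << "\n";  // default build: may appear to "work"
                               // hardened build: aborts with an assertion
                               // failure inside std::vector::operator[]
}
```
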
### Good practices

Here are some good practices we learned while debugging visual corruption and hangs.

1. Bisecting shaders:
   * Use RenderDoc when examining shaders. It is deterministic, while real games often use multi-threading or change the order in which shaders get compiled.
   * Edit `radv_shader.c` or `radv_pipeline.c` to change whether they are compiled with LLVM or ACO.
2. Things to check early:
   * Disable value numbering, the optimizer and/or the scheduler.
     Note that if any of these change the output, it does not necessarily mean that the error is there, as register assignment also changes.
3. Finding the instruction causing a hang:
   * The ability to directly manipulate the binaries gives us an easy way to find the exact instruction which causes the hang.
     Use NULL exports (for FS and VS) and `s_endpgm` to end the shader early in order to find the problematic instruction.
4. Other faulty instructions:
   * Use `print_asm` and check for illegal instructions.
   * Compare to the ACO IR to see if the assembly matches what we want (this can take a while).
     Typical issues might be a wrong instruction format leading to a wrong opcode, or an SGPR used for a VGPR field.
5. Comparing to the LLVM backend:
   * If everything else didn't help, we are probably just doing something wrong. The LLVM backend is quite mature, so its output might help find differences, but this can be a long road.