| Name | Date | Size | #Lines | LOC |
|---|---|---|---|---|
| .ci/ | 25-Apr-2025 | - | 2,649 | 1,814 |
| .github/ | 25-Apr-2025 | - | 7,518 | 6,082 |
| backends/ | 25-Apr-2025 | - | 180,036 | 135,933 |
| build/ | 25-Apr-2025 | - | 3,500 | 2,639 |
| codegen/ | 25-Apr-2025 | - | 1,927 | 1,444 |
| configurations/ | 25-Apr-2025 | - | 142 | 117 |
| devtools/ | 25-Apr-2025 | - | 10,468 | 8,144 |
| docs/ | 25-Apr-2025 | - | 14,946 | 10,774 |
| examples/ | 25-Apr-2025 | - | 1,428,744 | 1,414,747 |
| executorch/ | 25-Apr-2025 | - | | |
| exir/ | 25-Apr-2025 | - | 56,544 | 45,582 |
| extension/ | 25-Apr-2025 | - | 221,255 | 209,193 |
| kernels/ | 25-Apr-2025 | - | 86,348 | 63,293 |
| profiler/ | 25-Apr-2025 | - | 709 | 526 |
| runtime/ | 25-Apr-2025 | - | 30,991 | 20,143 |
| schema/ | 25-Apr-2025 | - | 1,132 | 802 |
| scripts/ | 25-Apr-2025 | - | 706 | 520 |
| shim/ | 25-Apr-2025 | - | 3,397 | 3,105 |
| test/ | 25-Apr-2025 | - | 2,820 | 2,057 |
| third-party/ | 25-Apr-2025 | - | 1,106 | 989 |
| util/ | 25-Apr-2025 | - | 326 | 254 |
| .buckconfig | 25-Apr-2025 | 719 | 36 | 29 |
| .clang-format | 25-Apr-2025 | 6.6 KiB | 245 | 244 |
| .clang-tidy | 25-Apr-2025 | 218 | 11 | 10 |
| .cmake-format.yaml | 25-Apr-2025 | 51 | 3 | 2 |
| .cmakelintrc | 25-Apr-2025 | 245 | 2 | 1 |
| .flake8 | 25-Apr-2025 | 993 | 81 | 77 |
| .gitignore | 25-Apr-2025 | 412 | 40 | 37 |
| .lintrunner.toml | 25-Apr-2025 | 5.9 KiB | 287 | 278 |
| Android.bp | 25-Apr-2025 | 4.4 KiB | 144 | 137 |
| CMakeLists.txt | 25-Apr-2025 | 26.7 KiB | 859 | 744 |
| CODE_OF_CONDUCT.md | 25-Apr-2025 | 3.5 KiB | 81 | 60 |
| CONTRIBUTING.md | 25-Apr-2025 | 14.9 KiB | 323 | 257 |
| LICENSE | 25-Apr-2025 | 1.6 KiB | 35 | 27 |
| METADATA | 25-Apr-2025 | 655 | 22 | 20 |
| MODULE_LICENSE_BSD | 25-Apr-2025 | 0 | | |
| OWNERS | 25-Apr-2025 | 50 | 1 | 1 |
| PREUPLOAD.cfg | 25-Apr-2025 | 29 | 3 | 2 |
| README-wheel.md | 25-Apr-2025 | 2.1 KiB | 41 | 37 |
| README.md | 25-Apr-2025 | 7.1 KiB | 115 | 96 |
| install_requirements.bat | 25-Apr-2025 | 574 | 21 | 15 |
| install_requirements.py | 25-Apr-2025 | 6.5 KiB | 199 | 132 |
| install_requirements.sh | 25-Apr-2025 | 743 | 26 | 12 |
| pyproject.toml | 25-Apr-2025 | 3.5 KiB | 112 | 103 |
| pytest.ini | 25-Apr-2025 | 2.7 KiB | 72 | 70 |
| requirements-lintrunner.txt | 25-Apr-2025 | 363 | 23 | 19 |
| setup.py | 25-Apr-2025 | 29.7 KiB | 725 | 415 |
| version.txt | 25-Apr-2025 | 8 | 2 | 1 |

README-wheel.md

**ExecuTorch** is a [PyTorch](https://pytorch.org/) platform that provides the
infrastructure to run PyTorch programs everywhere, from AR/VR wearables to
standard on-device iOS and Android mobile deployments. One of the main goals of
ExecuTorch is to enable wider customization and deployment capabilities for
PyTorch programs.

The `executorch` pip package is in alpha.
* Supported Python versions: 3.10, 3.11
* Compatible systems: Linux x86_64, macOS aarch64

The prebuilt `executorch.extension.pybindings.portable_lib` module included in
this package provides a way to run ExecuTorch `.pte` files, with some
restrictions:
* Only [core ATen operators](https://pytorch.org/executorch/stable/ir-ops-set-definition.html) are linked into the prebuilt module.
* Only the [XNNPACK backend delegate](https://pytorch.org/executorch/main/native-delegates-executorch-xnnpack-delegate.html) is linked into the prebuilt module.
* [macOS only] The [Core ML](https://pytorch.org/executorch/main/build-run-coreml.html) and [MPS](https://pytorch.org/executorch/main/build-run-mps.html) backend delegates are also linked into the prebuilt module.

Please visit the [ExecuTorch website](https://pytorch.org/executorch/) for
tutorials and documentation. Here are some starting points:
* [Getting Started](https://pytorch.org/executorch/stable/getting-started-setup.html)
  * Set up the ExecuTorch environment and run PyTorch models locally.
* [Working with local LLMs](https://pytorch.org/executorch/stable/llm/getting-started.html)
  * Learn how to use ExecuTorch to export and accelerate a large language model
    from scratch.
* [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial.html)
  * Learn the fundamentals of exporting a PyTorch `nn.Module` to ExecuTorch, and
    optimizing its performance using quantization and hardware delegation.
* Running LLaMA on [iOS](https://pytorch.org/executorch/stable/llm/llama-demo-ios.html) and
  [Android](https://pytorch.org/executorch/stable/llm/llama-demo-android.html) devices.
  * Build and run LLaMA in a demo mobile app, and learn how to integrate models
    with your own apps.

README.md

# ExecuTorch

**ExecuTorch** is an end-to-end solution for enabling on-device inference
capabilities across mobile and edge devices, including wearables, embedded
devices, and microcontrollers. It is part of the PyTorch Edge ecosystem and
enables efficient deployment of PyTorch models to edge devices.

Key value propositions of ExecuTorch are:

- **Portability:** Compatibility with a wide variety of computing platforms,
  from high-end mobile phones to highly constrained embedded systems and
  microcontrollers.
- **Productivity:** Enabling developers to use the same toolchains and Developer
  Tools from PyTorch model authoring and conversion, to debugging and deployment
  across a wide variety of platforms.
- **Performance:** Providing end users with a seamless and high-performance
  experience thanks to a lightweight runtime that takes full advantage of
  hardware capabilities such as CPUs, NPUs, and DSPs.

For a comprehensive technical overview of ExecuTorch and step-by-step tutorials,
please visit our documentation website [for the latest release](https://pytorch.org/executorch/stable/index.html) (or the [main branch](https://pytorch.org/executorch/main/index.html)).

Check out the [Getting Started](https://pytorch.org/executorch/stable/getting-started-setup.html#quick-setup-colab-jupyter-notebook-prototype) page for a quick spin.

Check out the examples of [Llama](./examples/models/llama/README.md), [Llava](./examples/models/llava/README.md), and [other models](./examples/README.md) running on edge devices using ExecuTorch.

**[UPDATE - 10/24]** We have added support for running [Llama 3.2 Quantized 1B/3B](./examples/models/llama/README.md) models via ExecuTorch.

## Feedback

We welcome any feedback, suggestions, and bug reports from the community to help
us improve our technology. Please use the [PyTorch
Forums](https://discuss.pytorch.org/c/executorch) for discussion and feedback
about ExecuTorch, using the **ExecuTorch** category, and our [GitHub
repository](https://github.com/pytorch/executorch/issues) for bug reporting.

We recommend using the latest release tag from the
[Releases](https://github.com/pytorch/executorch/releases) page when developing.

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for details about issues, PRs, code
style, CI jobs, and other development topics.

To connect with us and other community members, we invite you to join the PyTorch Slack community by filling out this [form](https://docs.google.com/forms/d/e/1FAIpQLSeADnUNW36fjKjYzyHDOzEB_abKQE9b6gqqW9NXse6O0MWh0A/viewform). Once you've joined, you can:
* Head to the `#executorch-general` channel for general questions, discussion, and community support.
* Join the `#executorch-contributors` channel if you're interested in contributing directly to project development.

## Directory Structure

```
executorch
├── backends                        #  Backend delegate implementations.
├── build                           #  Utilities for managing the build system.
├── codegen                         #  Tooling to autogenerate bindings between kernels and the runtime.
├── configurations
├── docs                            #  Static docs tooling.
├── examples                        #  Examples of various user flows, such as model export, delegates, and runtime execution.
├── exir                            #  Ahead-of-time library: model capture and lowering APIs.
|   ├── _serialize                  #  Serialize final export artifact.
|   ├── backend                     #  Backend delegate ahead-of-time APIs.
|   ├── capture                     #  Program capture.
|   ├── dialects                    #  Op sets for various dialects in the export process.
|   ├── emit                        #  Conversion from ExportedProgram to ExecuTorch execution instructions.
|   ├── operator                    #  Operator node manipulation utilities.
|   ├── passes                      #  Built-in compiler passes.
|   ├── program                     #  Export artifacts.
|   ├── serde                       #  Graph module serialization/deserialization.
|   ├── verification                #  IR verification.
├── extension                       #  Extensions built on top of the runtime.
|   ├── android                     #  ExecuTorch wrappers for Android apps.
|   ├── apple                       #  ExecuTorch wrappers for iOS apps.
|   ├── aten_util                   #  Converts to and from PyTorch ATen types.
|   ├── data_loader                 #  1st party data loader implementations.
|   ├── evalue_util                 #  Helpers for working with EValue objects.
|   ├── gguf_util                   #  Tools to convert from the GGUF format.
|   ├── kernel_util                 #  Helpers for registering kernels.
|   ├── memory_allocator            #  1st party memory allocator implementations.
|   ├── module                      #  A simplified C++ wrapper for the runtime.
|   ├── parallel                    #  C++ threadpool integration.
|   ├── pybindings                  #  Python API for the ExecuTorch runtime.
|   ├── pytree                      #  C++ and Python flattening and unflattening lib for pytrees.
|   ├── runner_util                 #  Helpers for writing C++ PTE-execution tools.
|   ├── testing_util                #  Helpers for writing C++ tests.
|   ├── training                    #  Experimental libraries for on-device training.
├── kernels                         #  1st party kernel implementations.
|   ├── aten
|   ├── optimized
|   ├── portable                    #  Reference implementations of ATen operators.
|   ├── prim_ops                    #  Special ops used in the ExecuTorch runtime for control flow and symbolic primitives.
|   ├── quantized
├── profiler                        #  Utilities for profiling runtime execution.
├── runtime                         #  Core C++ runtime.
|   ├── backend                     #  Backend delegate runtime APIs.
|   ├── core                        #  Core structures used across all levels of the runtime.
|   ├── executor                    #  Model loading, initialization, and execution.
|   ├── kernel                      #  Kernel registration and management.
|   ├── platform                    #  Layer between architecture-specific code and portable C++.
├── schema                          #  ExecuTorch PTE file format flatbuffer schemas.
├── scripts                         #  Utility scripts for size management, dependency management, etc.
├── devtools                        #  Model profiling, debugging, and introspection.
├── shim                            #  Compatibility layer between OSS and internal builds.
├── test                            #  Broad scoped end-to-end tests.
├── third-party                     #  Third-party dependencies.
├── util                            #  Various helpers and scripts.
```

## License

ExecuTorch is BSD licensed, as found in the LICENSE file.
115