# Building an ExecuTorch Android Demo App

This is forked from the [PyTorch Android Demo App](https://github.com/pytorch/android-demo-app).

This guide explains how to set up ExecuTorch for Android using a demo app. The app uses a [DeepLab v3](https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/) model for image segmentation tasks. Models are exported to ExecuTorch using the [XNNPACK FP32 backend](tutorial-xnnpack-delegate-lowering.md).

::::{grid} 2
:::{grid-item-card}  What you will learn
:class-card: card-prerequisites
* How to set up a build target for Android arm64-v8a
* How to build the required ExecuTorch runtime with a JNI wrapper for Android
* How to build the app with the required JNI library and model file
:::

:::{grid-item-card} Prerequisites
:class-card: card-prerequisites
* Refer to [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment.
* Download and install [Android Studio and SDK](https://developer.android.com/studio).
* Supported host OS: CentOS, macOS Ventura (M1/x86_64). See below for Qualcomm HTP-specific requirements.
* *Qualcomm HTP Only[^1]:* To build and run on Qualcomm's AI Engine Direct, please follow [Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend](build-run-qualcomm-ai-engine-direct-backend.md) for hardware and software prerequisites. This tutorial uses version 2.19 of the SDK and the SM8450 chip.
:::
::::
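
Before starting, a quick environment check can catch missing tooling early. The sketch below is only a convenience; the tool and variable names (`adb`, `cmake`, `ANDROID_NDK`) are assumptions taken from the build steps later in this guide:

```shell
# Sanity-check the host environment before building (a minimal sketch;
# adjust the tool/variable list to your setup).
check_tool() {
  command -v "$1" >/dev/null 2>&1 && echo "found: $1" || echo "MISSING: $1"
}

check_env_var() {
  # Indirect lookup of the variable named by $1.
  eval "val=\${$1:-}"
  [ -n "$val" ] && echo "set: $1" || echo "UNSET: $1"
}

check_tool adb
check_tool cmake
check_env_var ANDROID_NDK
```

Anything reported as `MISSING` or `UNSET` will surface later as a build failure, so it is cheaper to fix it here.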

[^1]: This section applies only if the Qualcomm HTP backend is needed in the app. The same applies to sections titled `Qualcomm Hexagon NPU`.

```{note}
This demo app and tutorial have only been validated with the arm64-v8a [ABI](https://developer.android.com/ndk/guides/abis).
```
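
You can confirm that a connected device reports this ABI before building (a minimal sketch; assumes `adb` is on your `PATH` and a device or emulator is attached):

```shell
# Query the connected device's primary ABI and compare it against the
# only ABI this demo has been validated with.
is_supported_abi() {
  [ "$1" = "arm64-v8a" ]
}

abi="$(adb shell getprop ro.product.cpu.abi 2>/dev/null | tr -d '\r')"
if is_supported_abi "$abi"; then
  echo "Device ABI $abi is supported"
else
  echo "Device ABI '$abi' is not supported by this demo" >&2
fi
```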

## Build

### Ahead-Of-Time

We generate the model file used by the Android demo app for the ExecuTorch runtime.

#### XNNPACK Delegation

To delegate DeepLab v3 to the XNNPACK backend, export the model as follows:

```bash
python3 -m examples.xnnpack.aot_compiler --model_name="dl3" --delegate
mkdir -p examples/demo-apps/android/ExecuTorchDemo/app/src/main/assets/
cp dl3_xnnpack_fp32.pte examples/demo-apps/android/ExecuTorchDemo/app/src/main/assets/
```

For a more detailed tutorial on lowering to XNNPACK, please see [XNNPACK backend](tutorial-xnnpack-delegate-lowering.md).

#### Qualcomm Hexagon NPU

To delegate to the Qualcomm Hexagon NPU, please follow the tutorial [here](build-run-qualcomm-ai-engine-direct-backend.md).

After generating the model, copy it to the `assets` directory.

```bash
python -m examples.qualcomm.scripts.deeplab_v3 -b build-android -m SM8450 -s <adb_connected_device_serial>
cp deeplab_v3/dlv3_qnn.pte examples/demo-apps/android/ExecuTorchDemo/app/src/main/assets/
```

### Runtime

We build the ExecuTorch runtime library required to run the model.

#### XNNPACK

1. Build the CMake target for the library with the XNNPACK backend:

```bash
export ANDROID_NDK=<path-to-android-ndk>
export ANDROID_ABI=arm64-v8a

rm -rf cmake-android-out && mkdir cmake-android-out

# Build the core executorch library
cmake . -DCMAKE_INSTALL_PREFIX=cmake-android-out \
  -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}/build/cmake/android.toolchain.cmake" \
  -DANDROID_ABI="${ANDROID_ABI}" \
  -DEXECUTORCH_BUILD_XNNPACK=ON \
  -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
  -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
  -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \
  -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
  -Bcmake-android-out

cmake --build cmake-android-out -j16 --target install
```

When we set `EXECUTORCH_BUILD_XNNPACK=ON`, the build produces the target [`xnnpack_backend`](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt), which in turn is linked into `libexecutorch_jni` via [CMake](https://github.com/pytorch/executorch/blob/main/examples/demo-apps/android/jni/CMakeLists.txt).
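
As an optional sanity check, you can look for XNNPACK symbols in the installed library. This is a sketch under assumptions: it requires a host `nm` (GNU binutils), assumes the installed archive is named `libxnnpack_backend.a`, and assumes XNNPACK symbols keep their conventional `xnn_` prefix; any of these may differ across versions.

```shell
# Return success if the given archive/object defines a symbol matching
# the pattern (here: XNNPACK's conventional xnn_ prefix).
has_symbol() {
  nm --defined-only "$1" 2>/dev/null | grep -q "$2"
}

# Path and library name are assumptions based on CMAKE_INSTALL_PREFIX above.
if has_symbol cmake-android-out/lib/libxnnpack_backend.a 'xnn_'; then
  echo "XNNPACK symbols present"
else
  echo "XNNPACK symbols not found (check EXECUTORCH_BUILD_XNNPACK=ON)" >&2
fi
```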

2. Build the Android extension:

```bash
# Build the Android extension
cmake extension/android \
  -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}"/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI="${ANDROID_ABI}" \
  -DCMAKE_INSTALL_PREFIX=cmake-android-out \
  -Bcmake-android-out/extension/android

cmake --build cmake-android-out/extension/android -j16
```

`libexecutorch_jni.so` wraps the required XNNPACK backend runtime library from `xnnpack_backend` and adds a JNI layer on top using fbjni. This is later exposed to the Java app.
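
Before bundling the library, it can be worth confirming it was actually cross-compiled for arm64 rather than the host architecture (a minimal sketch; assumes binutils' `readelf` is available on the host):

```shell
# Return success if the file is an ELF binary targeting AArch64.
is_arm64_elf() {
  readelf -h "$1" 2>/dev/null | grep -q 'AArch64'
}

lib=cmake-android-out/extension/android/libexecutorch_jni.so
if is_arm64_elf "$lib"; then
  echo "$lib targets arm64-v8a"
else
  echo "$lib is missing or not built for arm64-v8a" >&2
fi
```

A library built for the wrong architecture fails only at app startup with an `UnsatisfiedLinkError`, so catching it here is much quicker.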

#### Qualcomm Hexagon NPU

1. Build the CMake target for the library with the Qualcomm Hexagon NPU (HTP) backend (XNNPACK is also included):

```bash
export ANDROID_NDK=<path-to-android-ndk>
export ANDROID_ABI=arm64-v8a
export QNN_SDK_ROOT=<path-to-qnn-sdk>

rm -rf cmake-android-out && mkdir cmake-android-out

cmake . -DCMAKE_INSTALL_PREFIX=cmake-android-out \
  -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}/build/cmake/android.toolchain.cmake" \
  -DANDROID_ABI="${ANDROID_ABI}" \
  -DEXECUTORCH_BUILD_XNNPACK=ON \
  -DEXECUTORCH_BUILD_QNN=ON \
  -DQNN_SDK_ROOT="${QNN_SDK_ROOT}" \
  -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
  -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
  -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \
  -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
  -Bcmake-android-out

cmake --build cmake-android-out -j16 --target install
```

Similar to the XNNPACK library, with this setup we compile `libexecutorch_jni.so`, but it additionally links in the static library `qnn_executorch_backend`, which wraps the Qualcomm HTP runtime library and registers the Qualcomm HTP backend. This is later exposed to the Java app.

`qnn_executorch_backend` is built when the CMake option `EXECUTORCH_BUILD_QNN` is turned on. It includes the [CMakeLists.txt](https://github.com/pytorch/executorch/blob/main/backends/qualcomm/CMakeLists.txt) from `backends/qualcomm`, where we `add_library(qnn_executorch_backend STATIC)`.

2. Build the Android extension:

```bash
cmake extension/android \
  -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}"/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI="${ANDROID_ABI}" \
  -DCMAKE_INSTALL_PREFIX=cmake-android-out \
  -Bcmake-android-out/extension/android

cmake --build cmake-android-out/extension/android -j16
```

## Deploying on Device via Demo App

### Steps for Deploying Model via XNNPACK

```bash
mkdir -p examples/demo-apps/android/ExecuTorchDemo/app/src/main/jniLibs/arm64-v8a
cp cmake-android-out/extension/android/libexecutorch_jni.so \
   examples/demo-apps/android/ExecuTorchDemo/app/src/main/jniLibs/arm64-v8a/libexecutorch.so
```

This allows the Android app to load the ExecuTorch runtime with the XNNPACK backend as a JNI library. Later, this shared library will be loaded by `NativePeer.java` in the Java code.

### Steps for Deploying Model via Qualcomm's AI Engine Direct

```bash
mkdir -p examples/demo-apps/android/ExecuTorchDemo/app/src/main/jniLibs/arm64-v8a
```

We need to push some additional Qualcomm HTP backend libraries to the app. Please refer to the [Qualcomm docs](build-run-qualcomm-ai-engine-direct-backend.md) for details.

```bash
cp ${QNN_SDK_ROOT}/lib/aarch64-android/libQnnHtp.so \
   ${QNN_SDK_ROOT}/lib/hexagon-v69/unsigned/libQnnHtpV69Skel.so \
   ${QNN_SDK_ROOT}/lib/aarch64-android/libQnnHtpV69Stub.so \
   ${QNN_SDK_ROOT}/lib/aarch64-android/libQnnSystem.so \
   examples/demo-apps/android/ExecuTorchDemo/app/src/main/jniLibs/arm64-v8a
```

Copy the core libraries:

```bash
cp cmake-android-out/extension/android/libexecutorch_jni.so \
   examples/demo-apps/android/ExecuTorchDemo/app/src/main/jniLibs/arm64-v8a/libexecutorch.so
cp cmake-android-out/lib/libqnn_executorch_backend.so \
   examples/demo-apps/android/ExecuTorchDemo/app/src/main/jniLibs/arm64-v8a/libqnn_executorch_backend.so
```
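
After copying, a quick check that every expected library landed in `jniLibs` can save a debugging round trip, since a missing `.so` only shows up as a load failure at app startup. A minimal sketch (the required set depends on which backends you bundle; the list below assumes the QNN deployment above):

```shell
# Report any expected library missing from the given jniLibs directory.
check_jni_libs() {
  dir="$1"; shift
  missing=0
  for lib in "$@"; do
    if [ ! -f "$dir/$lib" ]; then
      echo "missing: $lib" >&2
      missing=1
    fi
  done
  return "$missing"
}

jni_dir=examples/demo-apps/android/ExecuTorchDemo/app/src/main/jniLibs/arm64-v8a
if check_jni_libs "$jni_dir" \
     libexecutorch.so libqnn_executorch_backend.so \
     libQnnHtp.so libQnnHtpV69Skel.so libQnnHtpV69Stub.so libQnnSystem.so; then
  echo "jniLibs look complete"
fi
```

For the XNNPACK-only deployment, only `libexecutorch.so` is required.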

## Running the App

1. Open the project `examples/demo-apps/android/ExecuTorchDemo` with Android Studio.

2. [Run](https://developer.android.com/studio/run) the app (^R).

<img src="_static/img/android_studio.png" alt="Android Studio View" /><br>

On the phone or emulator, you can try running the model:

<img src="_static/img/android_demo_run.png" alt="Android Demo" /><br>

## Takeaways

Through this tutorial we've learned how to build the ExecuTorch runtime library with the XNNPACK (or Qualcomm HTP) backend and expose it through a JNI layer to build an Android app that runs a segmentation model.

## Reporting Issues

If you encounter any bugs or issues while following this tutorial, please file an issue on [GitHub](https://github.com/pytorch/executorch/issues/new).