# Keyword Spotting Example

## Introduction

This is sample code showing keyword spotting using the Arm NN public C++ API. The compiled application can take

* an audio file

as input and produce

* the recognised keyword in the audio file

as output. The application works with the [fully quantised DS CNN Large model](https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/), which is trained to recognise 12 keywords, including an unknown word.

## Dependencies

This example uses the `libsndfile`, `libasound` and `libsamplerate` libraries to capture the raw audio data from file and to re-sample it to the expected sample rate. The top-level inference API is provided by the Arm NN library.
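
For orientation, below is a minimal sketch of what reading and re-sampling with these libraries can look like. This is not the sample's own code (`AudioCapture` wraps the equivalent logic), and the helper name and error-free flow are illustrative assumptions:

```c++
#include <samplerate.h>
#include <sndfile.h>

#include <vector>

// Illustrative sketch: read float samples with libsndfile, then re-sample
// them to targetRate with libsamplerate. Error handling omitted for brevity.
std::vector<float> LoadAndResample(const char* path, double targetRate)
{
    SF_INFO info{};
    SNDFILE* file = sf_open(path, SFM_READ, &info);

    std::vector<float> input(static_cast<size_t>(info.frames) * info.channels);
    sf_readf_float(file, input.data(), info.frames);
    sf_close(file);

    const double ratio = targetRate / info.samplerate;
    std::vector<float> output(
        (static_cast<size_t>(info.frames * ratio) + 1) * info.channels);

    SRC_DATA srcData{};
    srcData.data_in       = input.data();
    srcData.input_frames  = info.frames;
    srcData.data_out      = output.data();
    srcData.output_frames = static_cast<long>(output.size() / info.channels);
    srcData.src_ratio     = ratio;
    src_simple(&srcData, SRC_SINC_BEST_QUALITY, info.channels);

    output.resize(static_cast<size_t>(srcData.output_frames_gen) * info.channels);
    return output;
}
```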

### Arm NN

The keyword spotting example build system does not trigger Arm NN compilation. Thus, before building the application,
please ensure that Arm NN libraries and header files are available on your build platform.
The application executable binary dynamically links with the following Arm NN libraries:

* libarmnn.so
* libarmnnTfLiteParser.so

The build script searches for available Arm NN libraries in the following order:

1. Inside a custom user directory specified by the `ARMNN_LIB_DIR` CMake option.
2. Inside the current Arm NN repository, assuming that Arm NN was built following [these instructions](../../BuildGuideCrossCompilation.md).
3. Inside default locations for system libraries, assuming Arm NN was installed from deb packages.

Arm NN header files are searched for in an `include` directory under the parent directory of the found libraries, i.e. for
libraries found in `/usr/lib` or `/usr/lib64`, header files are expected in `/usr/include` (or `${ARMNN_LIB_DIR}/include`).

Please see [find_armnn.cmake](./cmake/find_armnn.cmake) for implementation details.

## Building

There is one flow for building this application:

* native build on a host platform

### Build Options

* `ARMNN_LIB_DIR` - points to a custom location of the Arm NN libraries and headers.
* `BUILD_UNIT_TESTS` - set to `1` to build tests. In addition to the main application, a `keyword-spotting-example-tests`
unit-test executable will be created (see the example below).
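
For example, to configure a build with unit tests enabled, the option can be passed to cmake in the same way as `ARMNN_LIB_DIR` further below:

```commandline
cmake -DBUILD_UNIT_TESTS=1 ..
make
```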

### Native Build

To build this application on a host platform, first ensure that the required dependencies are installed.
For example, on a Raspberry Pi:

```commandline
sudo apt-get update
sudo apt-get -yq install libsndfile1-dev
sudo apt-get -yq install libasound2-dev
sudo apt-get -yq install libsamplerate-dev
```

To build the demo application, create a build directory:

```commandline
mkdir build
cd build
```

If you have already installed Arm NN and the required libraries, run the cmake and make commands inside the build directory:

```commandline
cmake ..
make
```

This will build the following in the `bin` directory:

* `keyword-spotting-example` - the application executable

If Arm NN is installed in a custom location, use the `ARMNN_LIB_DIR` option:

```commandline
cmake -DARMNN_LIB_DIR=/path/to/armnn ..
make
```

## Executing

Once the application executable is built, it can be executed with the following options:

* `--audio-file-path`: Path to the audio file to run keyword spotting on **[REQUIRED]**
* `--model-file-path`: Path to the Keyword Spotting model to use **[REQUIRED]**

* `--preferred-backends`: Takes the preferred backends in preference order, separated by commas.
  For example: `CpuAcc,GpuAcc,CpuRef`. Accepted options: [`CpuAcc`, `CpuRef`, `GpuAcc`].
  Defaults to `CpuRef` **[OPTIONAL]**

### Keyword Spotting on a supplied audio file

A small selection of suitable wav files containing keywords can be found [here](https://git.mlplatform.org/ml/ethos-u/ml-embedded-evaluation-kit.git/plain/resources/kws/samples/).
To run keyword spotting on a supplied audio file and output the result to the console:

```commandline
./keyword-spotting-example --audio-file-path /path/to/audio/file --model-file-path /path/to/model/file
```
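
The preferred backends can also be supplied explicitly; for example, to try the CPU-accelerated backend first and fall back to the reference backend:

```commandline
./keyword-spotting-example --audio-file-path /path/to/audio/file --model-file-path /path/to/model/file --preferred-backends CpuAcc,CpuRef
```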

# Application Overview

This section provides a walkthrough of the application, explaining in detail the steps:

1. Initialisation
    1. Reading from Audio Source
2. Creating a Network
    1. Creating Parser and Importing Graph
    2. Optimizing Graph for Compute Device
    3. Creating Input and Output Binding Information
3. Keyword spotting pipeline
    1. Pre-processing the Captured Audio
    2. Making Input and Output Tensors
    3. Executing Inference
    4. Postprocessing
    5. Decoding and Processing Inference Output

### Initialisation

##### Reading from Audio Source

After parsing the user arguments, the chosen audio file is loaded into an `AudioCapture` object.
We use [`AudioCapture`](./include/AudioCapture.hpp) in our main function to capture appropriately sized audio blocks from the source using the
`Next()` function.

The `AudioCapture` object also re-samples the audio input to a desired sample rate, and sets the number of channels used to one channel (i.e. `mono`).
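
A minimal sketch of how the main function might drive `AudioCapture` follows. Apart from `Next()`, the names below (`LoadAudioFile`, `InitSlidingWindow`, `HasNext`, the namespace and window variables) are assumptions based on similar Arm NN samples; refer to [`AudioCapture`](./include/AudioCapture.hpp) for the actual interface:

```c++
// Illustrative sketch only - apart from Next(), the names below are assumptions;
// see include/AudioCapture.hpp for the actual interface.
audio::AudioCapture capture;
std::vector<float> audioData = audio::AudioCapture::LoadAudioFile(audioFilePath);

// Window the samples so that each Next() call yields one model-sized block.
capture.InitSlidingWindow(audioData.data(), audioData.size(),
                          samplesPerInference, sampleStride);

while (capture.HasNext())
{
    std::vector<float> audioBlock = capture.Next();
    // ... pre-process audioBlock and run inference
}
```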

### Creating a Network

All operations with Arm NN and networks are encapsulated in the [`ArmnnNetworkExecutor`](./include/ArmnnNetworkExecutor.hpp)
class.

##### Creating Parser and Importing Graph

The first step with the Arm NN SDK is to import a graph from file by using the appropriate parser.

The Arm NN SDK provides parsers for reading graphs from a variety of model formats. In our application we specifically
focus on `.tflite`, `.pb` and `.onnx` models.

Based on the extension of the provided model file, the corresponding parser is created and the network file loaded with
the `CreateNetworkFromBinaryFile()` method. The parser handles the creation of the underlying Arm NN graph.

Currently this example only supports `.tflite` format model files and uses `ITfLiteParser`:

```c++
#include "armnnTfLiteParser/ITfLiteParser.hpp"

armnnTfLiteParser::ITfLiteParserPtr parser = armnnTfLiteParser::ITfLiteParser::Create();
armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(modelPath.c_str());
```

##### Optimizing Graph for Compute Device

Arm NN supports optimized execution on multiple CPU and GPU devices. Prior to executing a graph, we must select the
appropriate device context. We do this by creating a runtime context with default options using `IRuntime::Create()`.

For example:

```c++
#include "armnn/ArmNN.hpp"

auto runtime = armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());
```

We can optimize the imported graph by specifying a list of backends in order of preference and implementing
backend-specific optimizations. Each backend is identified by a string unique to the backend,
for example `CpuAcc, GpuAcc, CpuRef`.

For example:

```c++
std::vector<armnn::BackendId> backends{"CpuAcc", "GpuAcc", "CpuRef"};
```

Internally and transparently, Arm NN splits the graph into subgraphs based on these backends, calls an optimize-subgraphs
function on each of them and, if possible, substitutes the corresponding subgraph in the original graph with
its optimized version.

Using the `Optimize()` function we optimize the graph for inference and load the optimized network onto the compute
device with `LoadNetwork()`. This function creates the backend-specific workloads
for the layers and a backend-specific workload factory which is called to create the workloads.

For example:

```c++
armnn::IOptimizedNetworkPtr optNet = Optimize(*network,
                                              backends,
                                              runtime->GetDeviceSpec(),
                                              armnn::OptimizerOptions());
std::string errorMessage;
armnn::NetworkId networkId;
runtime->LoadNetwork(networkId, std::move(optNet), errorMessage);
std::cerr << errorMessage << std::endl;
```

##### Creating Input and Output Binding Information

Parsers can also be used to extract the input information for the network. By calling `GetSubgraphInputTensorNames()`
we extract all the input names and, with `GetNetworkInputBindingInfo()`, bind the input points of the graph.
For example:

```c++
std::vector<std::string> inputNames = parser->GetSubgraphInputTensorNames(0);
auto inputBindingInfo = parser->GetNetworkInputBindingInfo(0, inputNames[0]);
```

The input binding information contains all the essential information about the input. It is a tuple consisting of
integer identifiers for bindable layers (inputs, outputs) and the tensor info (data type, quantization information,
number of dimensions, total number of elements).

Similarly, we can get the output binding information for an output layer by using the parser to retrieve output
tensor names and calling `GetNetworkOutputBindingInfo()`, as shown below.
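
A minimal sketch, assuming the first output of subgraph `0`:

```c++
// Retrieve the output tensor names for subgraph 0 and bind the first output.
std::vector<std::string> outputNames = parser->GetSubgraphOutputTensorNames(0);
auto outputBindingInfo = parser->GetNetworkOutputBindingInfo(0, outputNames[0]);
```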

### Keyword Spotting pipeline

The keyword spotting pipeline has three steps to perform: data pre-processing, inference execution and decoding of the inference results.

See [`KeywordSpottingPipeline`](include/KeywordSpottingPipeline.hpp) for more details.

#### Pre-processing the Audio Input

Each frame captured from the source is read and stored by the `AudioCapture` object.
Its `Next()` function provides us with the correctly positioned window of data, sized appropriately for the given model, to pre-process before inference.

```c++
std::vector<float> audioBlock = capture.Next();
...
std::vector<int8_t> preprocessedData = kwsPipeline->PreProcessing(audioBlock);
```

The `MFCC` class is then used to extract the Mel-frequency Cepstral Coefficients (MFCCs, [see Wikipedia](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum)) from each stored audio frame in the provided window of audio, to be used as features for the network. MFCCs are the result of computing the dot product of the Discrete Cosine Transform (DCT) matrix and the log of the Mel energy.

After all the MFCCs needed for an inference have been extracted from the audio data, they are concatenated to make the input tensor for the model.
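
To make the dot-product step concrete, here is a sketch of the computation described above; the function name and the row-major matrix layout are illustrative, not the `MFCC` class's actual code:

```c++
#include <cmath>
#include <vector>

// Illustrative sketch of the MFCC core: mfcc = dctMatrix . log(melEnergies).
// dctMatrix is assumed row-major with shape numCoeffs x numMelBins.
std::vector<float> ComputeMfccs(const std::vector<float>& dctMatrix,
                                const std::vector<float>& melEnergies,
                                size_t numCoeffs)
{
    const size_t numMelBins = melEnergies.size();
    std::vector<float> mfccs(numCoeffs, 0.0f);
    for (size_t k = 0; k < numCoeffs; ++k)
    {
        for (size_t m = 0; m < numMelBins; ++m)
        {
            mfccs[k] += dctMatrix[k * numMelBins + m] * std::log(melEnergies[m]);
        }
    }
    return mfccs;
}
```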

#### Executing Inference

```c++
common::InferenceResults results;
...
kwsPipeline->Inference(preprocessedData, results);
```

The inference step calls the `ArmnnNetworkExecutor::Run` method, which prepares the input tensors and executes the inference.
A compute device performs inference for the loaded network using the `EnqueueWorkload()` function of the runtime context.
For example:

```c++
// const void* inputData = ...;
// outputTensors were pre-allocated before

armnn::InputTensors inputTensors = {{ inputBindingInfo.first, armnn::ConstTensor(inputBindingInfo.second, inputData)}};
runtime->EnqueueWorkload(networkId, inputTensors, outputTensors);
```

We allocate memory for the output data once and map it to output tensor objects. After successful inference, we read the data
from the pre-allocated output data buffer (see the sketch below). See [`ArmnnNetworkExecutor::ArmnnNetworkExecutor`](./src/ArmnnNetworkExecutor.cpp)
and [`ArmnnNetworkExecutor::Run`](./src/ArmnnNetworkExecutor.cpp) for more details.
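
A minimal sketch of that mapping; the `int8_t` element type matches the quantised model, and the buffer name is illustrative rather than the class's actual member:

```c++
// Size the reusable buffer from the output tensor info, then wrap it in an
// armnn::Tensor so inference writes straight into the pre-allocated memory.
std::vector<int8_t> outputBuffer(outputBindingInfo.second.GetNumElements());

armnn::OutputTensors outputTensors = {
    { outputBindingInfo.first, armnn::Tensor(outputBindingInfo.second, outputBuffer.data()) }
};
```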

#### Postprocessing

##### Decoding

The output from the inference is decoded to obtain the spotted keyword: the word with the highest probability is output to the console.

```c++
kwsPipeline->PostProcessing(results, labels,
                            [](int index, std::string& label, float prob) -> void {
                                printf("Keyword \"%s\", index %d, probability %f\n",
                                       label.c_str(),
                                       index,
                                       prob);
                            });
```

The produced string is displayed on the console.
284