# Build TensorFlow Lite models

This page provides guidance for building
your TensorFlow models with the intention of converting them to the TensorFlow
Lite model format. The machine learning (ML) models you use with TensorFlow
Lite are originally
built and trained using TensorFlow core libraries and tools. Once you've built
a model with TensorFlow core, you can convert it to a smaller, more
efficient ML model format called a TensorFlow Lite model.

* If you have a model to convert already, see the
  [Convert models overview](../convert/)
  page for guidance on converting your model.

* If you want to modify an existing model instead of starting from scratch,
  see the [Modify models overview](../modify/model_maker) for guidance.

## Building your model

If you are building a custom model for your specific use case,
you should start with developing and training a TensorFlow model or extending
an existing one.

### Model design constraints

Before you start your model development process, you should be aware of the
constraints for TensorFlow Lite models and build your model with these
constraints in mind:

* **Limited compute capabilities** - Compared to fully equipped servers with
  multiple CPUs, high memory capacity, and specialized processors such as GPUs
  and TPUs, mobile and edge devices are much more limited. While they are
  growing in compute power and specialized hardware compatibility, the models
  and data you can effectively process with them are still comparably limited.
* **Size of models** - The overall complexity of a model, including data
  pre-processing logic and the number of layers in the model, increases the
  in-memory size of a model. A large model may run unacceptably slow or simply
  may not fit in the available memory of a mobile or edge device.
* **Size of data** - The size of input data that can be effectively processed
  with a machine learning model is limited on a mobile or edge device. Models
  that use large data libraries such as language libraries, image libraries, or
  video clip libraries may not fit on these devices, and may require
  off-device storage and access solutions.
* **Supported TensorFlow operations** - TensorFlow Lite runtime environments
  support a subset of machine learning model operations compared to
  regular TensorFlow models. As you develop a model for use with TensorFlow
  Lite, you should track the compatibility of your model against the
  capabilities of TensorFlow Lite runtime environments.

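As a concrete illustration of the operator and model-size constraints above, the following sketch converts a toy Keras model (a stand-in for your own model, not a recommended architecture) while restricting conversion to the built-in TensorFlow Lite operator set, so conversion fails early if an unsupported operation is present:

```python
import tensorflow as tf

# A small Keras model used only to illustrate the check;
# substitute your own model here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Restrict conversion to the built-in TensorFlow Lite operator set, so
# conversion raises an error if the model uses an unsupported operation.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()

# The size of the serialized flatbuffer is a rough proxy for the
# on-device footprint of the model.
print(f"Converted model size: {len(tflite_model)} bytes")
```

Running this check early and often during development helps you catch compatibility and size problems before they are expensive to fix.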
For more information about building effective, compatible, high-performance
models for TensorFlow Lite, see
[Performance best practices](../../performance/best_practices).

### Model development

To build a TensorFlow Lite model, you first need to build a model using the
TensorFlow core libraries. TensorFlow core libraries are the lower-level
libraries that provide APIs to build, train and deploy ML models.

![TFLite build workflow](../../images/build/build_workflow_diag.png)

TensorFlow provides two paths for doing this. You can develop
your own custom model code or you can start with a model implementation
available in the TensorFlow
[Model Garden](https://www.tensorflow.org/tfmodels).

#### Model Garden

The TensorFlow Model Garden provides implementations of many state-of-the-art
machine learning (ML) models for vision and natural language processing (NLP).
You'll also find workflow tools to let you quickly configure and run those
models on standard datasets. The machine learning models in the
Model Garden include full code so you can
test, train, or re-train them using your own datasets.

Whether you are looking to benchmark performance for a
well-known model, verify the results of recently released research, or extend
existing models, the Model Garden can help you drive your ML goals.

#### Custom models

If your use case is outside of those supported by the models in Model Garden,
you can use a high level library like
[Keras](https://www.tensorflow.org/guide/keras/sequential_model) to
develop your custom training code. To learn the fundamentals of TensorFlow, see
the [TensorFlow guide](https://www.tensorflow.org/guide/basics). To get started
with examples, see the
[TensorFlow tutorials overview](https://www.tensorflow.org/tutorials), which
contains pointers to beginner through expert level tutorials.

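A minimal custom-model sketch with the Keras Sequential API might look like the following. The data here is hypothetical toy data standing in for your own dataset, and the architecture is only illustrative:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data standing in for your own dataset:
# 100 samples with 4 features each, and binary labels.
x_train = np.random.rand(100, 4).astype("float32")
y_train = np.random.randint(0, 2, size=(100,))

# A small sequential model built with the Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1, verbose=0)

# The trained model maps a batch of inputs to per-class logits.
logits = model.predict(x_train[:1], verbose=0)
```

A model built this way can later be handed directly to the TensorFlow Lite converter.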
### Model evaluation

Once you've developed your model, you should evaluate its performance and test
it on end-user devices.
TensorFlow provides a few ways to do this.

* [TensorBoard](https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras)
  is a tool for providing the measurements and visualizations needed during
  the machine learning workflow. It enables tracking experiment metrics like
  loss and accuracy, visualizing the model graph, projecting embeddings to a
  lower dimensional space, and much more.
* [Benchmarking tools](https://www.tensorflow.org/lite/performance/measurement)
  are available for each supported platform such as the Android benchmark app
  and the iOS benchmark app. Use these tools to measure and calculate statistics
  for important performance metrics.

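As a sketch of the TensorBoard workflow above, you can attach the Keras TensorBoard callback during training so per-epoch metrics are written to a log directory (the model, data, and the `/tmp/tb_logs` path here are illustrative placeholders):

```python
import numpy as np
import tensorflow as tf

# Toy data and model, standing in for your own.
x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Write per-epoch loss and accuracy to a log directory that TensorBoard
# can visualize ("/tmp/tb_logs" is an arbitrary example path).
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="/tmp/tb_logs")
history = model.fit(x, y, epochs=2, callbacks=[tensorboard_cb], verbose=0)
```

You can then inspect the logged metrics with `tensorboard --logdir /tmp/tb_logs`.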
### Model optimization

Given the [constraints](#model_design_constraints) on resources specific to
TensorFlow Lite models, model optimization can help to ensure your
model performs well and uses less compute resources. Machine learning model
performance is usually a balance between size and speed of inference vs
accuracy. TensorFlow Lite currently supports optimization via quantization,
pruning and clustering. See the
[model optimization](https://www.tensorflow.org/lite/performance/model_optimization)
topic for more details on these techniques. TensorFlow also provides a
[Model optimization toolkit](https://www.tensorflow.org/model_optimization/guide)
which provides an API that implements these techniques.

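Of the techniques above, post-training quantization is typically the easiest to try. A minimal sketch, using a toy Keras model and dynamic-range quantization (weights stored as 8-bit integers), compares the converted model sizes with and without optimization:

```python
import tensorflow as tf

# A toy model; quantization applies the same way to a real one.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Baseline conversion with no optimization applied.
baseline_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Post-training dynamic-range quantization: weights are stored as 8-bit
# integers, typically shrinking the model with little accuracy loss.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()

print(f"baseline: {len(baseline_model)} bytes, "
      f"quantized: {len(quantized_model)} bytes")
```

On a model this small the savings are negligible, but for real models with large weight tensors, dynamic-range quantization can reduce size by roughly a factor of four; always re-check accuracy after optimizing.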
## Next steps

* To start building your custom model, see the
  [quick start for beginners](https://www.tensorflow.org/tutorials/quickstart/beginner)
  tutorial in TensorFlow core documentation.
* To convert your custom TensorFlow model, see the
  [Convert models overview](../convert).
* See the
  [operator compatibility](../../guide/ops_compatibility) guide to determine
  if your model is compatible with TensorFlow Lite or if you'll need to take
  additional steps to make it compatible.
* See the
  [performance best practices guide](https://www.tensorflow.org/lite/performance/best_practices)
  for guidance on making your TensorFlow Lite models efficient and performant.
* See the [performance metrics guide](../../performance/measurement) to learn
  how to measure the performance of your model using benchmarking tools.