1{ 2 "nbformat": 4, 3 "nbformat_minor": 0, 4 "metadata": { 5 "colab": { 6 "name": "Jax to TFLite.ipynb", 7 "provenance": [], 8 "collapsed_sections": [], 9 "toc_visible": true 10 }, 11 "kernelspec": { 12 "name": "python3", 13 "display_name": "Python 3" 14 }, 15 "language_info": { 16 "name": "python" 17 } 18 }, 19 "cells": [ 20 { 21 "cell_type": "markdown", 22 "metadata": { 23 "id": "8vD3L4qeREvg" 24 }, 25 "source": [ 26 "##### Copyright 2021 The TensorFlow Authors." 27 ] 28 }, 29 { 30 "cell_type": "code", 31 "metadata": { 32 "id": "qLCxmWRyRMZE" 33 }, 34 "source": [ 35 "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 36 "# you may not use this file except in compliance with the License.\n", 37 "# You may obtain a copy of the License at\n", 38 "#\n", 39 "# https://www.apache.org/licenses/LICENSE-2.0\n", 40 "#\n", 41 "# Unless required by applicable law or agreed to in writing, software\n", 42 "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 43 "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 44 "# See the License for the specific language governing permissions and\n", 45 "# limitations under the License." 46 ], 47 "execution_count": null, 48 "outputs": [] 49 }, 50 { 51 "cell_type": "markdown", 52 "metadata": { 53 "id": "4k5PoHrgJQOU" 54 }, 55 "source": [ 56 "# Jax Model Conversion For TFLite\n", 57 "## Overview\n", 58 "Note: This API is new and only available via pip install tf-nightly. It will be available in TensorFlow version 2.7. Also, the API is still experimental and subject to changes.\n", 59 "\n", 60 "This CodeLab demonstrates how to build a model for MNIST recognition using Jax, and how to convert it to TensorFlow Lite. This codelab will also demonstrate how to optimize the Jax-converted TFLite model with post-training quantiztion." 61 ] 62 }, 63 { 64 "cell_type": "markdown", 65 "metadata": { 66 "id": "i8cfOBcjSByO" 67 }, 68 "source": [ 69 "<table class=\"tfo-notebook-buttons\" align=\"left\">\n", 70 " <td>\n", 71 " <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/examples/jax_conversion/overview\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n", 72 " </td>\n", 73 " <td>\n", 74 " <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/examples/jax_conversion/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n", 75 " </td>\n", 76 " <td>\n", 77 " <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/examples/jax_conversion/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n", 78 " </td>\n", 79 " <td>\n", 80 " <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/examples/jax_conversion/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n", 81 " </td>\n", 82 "</table>" 83 ] 84 }, 85 { 86 "cell_type": "markdown", 87 "metadata": { 88 "id": "lq-T8XZMJ-zv" 89 }, 90 "source": [ 91 "## Prerequisites\n", 92 "It's recommended to try this feature with the newest TensorFlow nightly pip build." 93 ] 94 }, 95 { 96 "cell_type": "code", 97 "metadata": { 98 "id": "EV04hKdrnE4f" 99 }, 100 "source": [ 101 "!pip install tf-nightly --upgrade\n", 102 "!pip install jax --upgrade\n", 103 "!pip install jaxlib --upgrade" 104 ], 105 "execution_count": null, 106 "outputs": [] 107 }, 108 { 109 "cell_type": "markdown", 110 "metadata": { 111 "id": "QAeY43k9KM55" 112 }, 113 "source": [ 114 "## Data Preparation\n", 115 "Download the MNIST data with Keras dataset and pre-process." 116 ] 117 }, 118 { 119 "cell_type": "code", 120 "metadata": { 121 "id": "qSOPSZJn1_Tj" 122 }, 123 "source": [ 124 "import numpy as np\n", 125 "import tensorflow as tf\n", 126 "import functools\n", 127 "\n", 128 "import time\n", 129 "import itertools\n", 130 "\n", 131 "import numpy.random as npr\n", 132 "\n", 133 "import jax.numpy as jnp\n", 134 "from jax import jit, grad, random\n", 135 "from jax.example_libraries import optimizers\n", 136 "from jax.example_libraries import stax\n" 137 ], 138 "execution_count": null, 139 "outputs": [] 140 }, 141 { 142 "cell_type": "code", 143 "metadata": { 144 "id": "hdJIt3Da2Qn1" 145 }, 146 "source": [ 147 "def _one_hot(x, k, dtype=np.float32):\n", 148 " \"\"\"Create a one-hot encoding of x of size k.\"\"\"\n", 149 " return np.array(x[:, None] == np.arange(k), dtype)\n", 150 "\n", 151 "(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()\n", 152 "train_images, test_images = train_images / 255.0, test_images / 255.0\n", 153 "train_images = train_images.astype(np.float32)\n", 154 "test_images = test_images.astype(np.float32)\n", 155 "\n", 156 "train_labels = _one_hot(train_labels, 10)\n", 157 "test_labels = _one_hot(test_labels, 10)" 158 ], 159 "execution_count": null, 160 "outputs": [] 161 }, 162 { 163 "cell_type": "markdown", 164 "metadata": { 165 "id": "0eFhx85YKlEY" 166 }, 167 "source": [ 168 "## Build the MNIST model with Jax" 169 ] 170 }, 171 { 172 "cell_type": "code", 173 "metadata": { 174 "id": "mi3TKB9nnQdK" 175 }, 176 "source": [ 177 "def loss(params, batch):\n", 178 " inputs, targets = batch\n", 179 " preds = predict(params, inputs)\n", 180 " return -jnp.mean(jnp.sum(preds * targets, axis=1))\n", 181 "\n", 182 "def accuracy(params, batch):\n", 183 " inputs, targets = batch\n", 184 " target_class = jnp.argmax(targets, axis=1)\n", 185 " predicted_class = jnp.argmax(predict(params, inputs), axis=1)\n", 186 " return jnp.mean(predicted_class == target_class)\n", 187 "\n", 188 "init_random_params, predict = stax.serial(\n", 189 " stax.Flatten,\n", 190 " stax.Dense(1024), stax.Relu,\n", 191 " stax.Dense(1024), stax.Relu,\n", 192 " stax.Dense(10), stax.LogSoftmax)\n", 193 "\n", 194 "rng = random.PRNGKey(0)" 195 ], 196 "execution_count": null, 197 "outputs": [] 198 }, 199 { 200 "cell_type": "markdown", 201 "metadata": { 202 "id": "bRtnOBdJLd63" 203 }, 204 "source": [ 205 "## Train & Evaluate the model" 206 ] 207 }, 208 { 209 "cell_type": "code", 210 "metadata": { 211 "id": "SWbYRyj7LYZt" 212 }, 213 "source": [ 214 "step_size = 0.001\n", 215 "num_epochs = 10\n", 216 "batch_size = 128\n", 217 "momentum_mass = 0.9\n", 218 "\n", 219 "\n", 220 "num_train = train_images.shape[0]\n", 221 "num_complete_batches, leftover = divmod(num_train, batch_size)\n", 222 "num_batches = num_complete_batches + bool(leftover)\n", 223 "\n", 224 "def data_stream():\n", 225 " rng = npr.RandomState(0)\n", 226 " while True:\n", 227 " perm = rng.permutation(num_train)\n", 228 " for i in range(num_batches):\n", 229 " batch_idx = perm[i * batch_size:(i + 1) * batch_size]\n", 230 " yield train_images[batch_idx], train_labels[batch_idx]\n", 231 "batches = data_stream()\n", 232 "\n", 233 "opt_init, opt_update, get_params = optimizers.momentum(step_size, mass=momentum_mass)\n", 234 "\n", 235 "@jit\n", 236 "def update(i, opt_state, batch):\n", 237 " params = get_params(opt_state)\n", 238 " return opt_update(i, grad(loss)(params, batch), opt_state)\n", 239 "\n", 240 "_, init_params = init_random_params(rng, (-1, 28 * 28))\n", 241 "opt_state = opt_init(init_params)\n", 242 "itercount = itertools.count()\n", 243 "\n", 244 "print(\"\\nStarting training...\")\n", 245 "for epoch in range(num_epochs):\n", 246 " start_time = time.time()\n", 247 " for _ in range(num_batches):\n", 248 " opt_state = update(next(itercount), opt_state, next(batches))\n", 249 " epoch_time = time.time() - start_time\n", 250 "\n", 251 " params = get_params(opt_state)\n", 252 " train_acc = accuracy(params, (train_images, train_labels))\n", 253 " test_acc = accuracy(params, (test_images, test_labels))\n", 254 " print(\"Epoch {} in {:0.2f} sec\".format(epoch, epoch_time))\n", 255 " print(\"Training set accuracy {}\".format(train_acc))\n", 256 " print(\"Test set accuracy {}\".format(test_acc))" 257 ], 258 "execution_count": null, 259 "outputs": [] 260 }, 261 { 262 "cell_type": "markdown", 263 "metadata": { 264 "id": "7Y1OZBhfQhOj" 265 }, 266 "source": [ 267 "## Convert to TFLite model.\n", 268 "Note here, we\n", 269 "1. Inline the params to the Jax `predict` func with `functools.partial`.\n", 270 "2. Build a `jnp.zeros`, this is a \"placeholder\" tensor used for Jax to trace the model.\n", 271 "3. Call `experimental_from_jax`:\n", 272 "> * The `serving_func` is wrapped in a list.\n", 273 "> * The input is associated with a given name and passed in as an array wrapped in a list.\n", 274 "\n", 275 "\n", 276 "\n" 277 ] 278 }, 279 { 280 "cell_type": "code", 281 "metadata": { 282 "id": "6pcqKZqdNTmn" 283 }, 284 "source": [ 285 "serving_func = functools.partial(predict, params)\n", 286 "x_input = jnp.zeros((1, 28, 28))\n", 287 "converter = tf.lite.TFLiteConverter.experimental_from_jax(\n", 288 " [serving_func], [[('input1', x_input)]])\n", 289 "tflite_model = converter.convert()\n", 290 "with open('jax_mnist.tflite', 'wb') as f:\n", 291 " f.write(tflite_model)" 292 ], 293 "execution_count": null, 294 "outputs": [] 295 }, 296 { 297 "cell_type": "markdown", 298 "metadata": { 299 "id": "sqEhzaJPSPS1" 300 }, 301 "source": [ 302 "## Check the Converted TFLite Model\n", 303 "Compare the converted model's results with the Jax model." 304 ] 305 }, 306 { 307 "cell_type": "code", 308 "metadata": { 309 "id": "acj2AYzjSlaY" 310 }, 311 "source": [ 312 "expected = serving_func(train_images[0:1])\n", 313 "\n", 314 "# Run the model with TensorFlow Lite\n", 315 "interpreter = tf.lite.Interpreter(model_content=tflite_model)\n", 316 "interpreter.allocate_tensors()\n", 317 "input_details = interpreter.get_input_details()\n", 318 "output_details = interpreter.get_output_details()\n", 319 "interpreter.set_tensor(input_details[0][\"index\"], train_images[0:1, :, :])\n", 320 "interpreter.invoke()\n", 321 "result = interpreter.get_tensor(output_details[0][\"index\"])\n", 322 "\n", 323 "# Assert if the result of TFLite model is consistent with the JAX model.\n", 324 "np.testing.assert_almost_equal(expected, result, 1e-5)" 325 ], 326 "execution_count": null, 327 "outputs": [] 328 }, 329 { 330 "cell_type": "markdown", 331 "metadata": { 332 "id": "Qy9Gp4H2SjBL" 333 }, 334 "source": [ 335 "## Optimize the Model\n", 336 "We will provide a `representative_dataset` to do post-training quantiztion to optimize the model.\n", 337 "\n", 338 "\n" 339 ] 340 }, 341 { 342 "cell_type": "code", 343 "metadata": { 344 "id": "KI0rLV-Meg-2" 345 }, 346 "source": [ 347 "def representative_dataset():\n", 348 " for i in range(1000):\n", 349 " x = train_images[i:i+1]\n", 350 " yield [x]\n", 351 "\n", 352 "converter = tf.lite.TFLiteConverter.experimental_from_jax(\n", 353 " [serving_func], [[('x', x_input)]])\n", 354 "tflite_model = converter.convert()\n", 355 "converter.optimizations = [tf.lite.Optimize.DEFAULT]\n", 356 "converter.representative_dataset = representative_dataset\n", 357 "converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]\n", 358 "tflite_quant_model = converter.convert()\n", 359 "with open('jax_mnist_quant.tflite', 'wb') as f:\n", 360 " f.write(tflite_quant_model)" 361 ], 362 "execution_count": null, 363 "outputs": [] 364 }, 365 { 366 "cell_type": "markdown", 367 "metadata": { 368 "id": "15xQR3JZS8TV" 369 }, 370 "source": [ 371 "## Evaluate the Optimized Model" 372 ] 373 }, 374 { 375 "cell_type": "code", 376 "metadata": { 377 "id": "X3oOm0OaevD6" 378 }, 379 "source": [ 380 "expected = serving_func(train_images[0:1])\n", 381 "\n", 382 "# Run the model with TensorFlow Lite\n", 383 "interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)\n", 384 "interpreter.allocate_tensors()\n", 385 "input_details = interpreter.get_input_details()\n", 386 "output_details = interpreter.get_output_details()\n", 387 "interpreter.set_tensor(input_details[0][\"index\"], train_images[0:1, :, :])\n", 388 "interpreter.invoke()\n", 389 "result = interpreter.get_tensor(output_details[0][\"index\"])\n", 390 "\n", 391 "# Assert if the result of TFLite model is consistent with the Jax model.\n", 392 "np.testing.assert_almost_equal(expected, result, 1e-5)" 393 ], 394 "execution_count": null, 395 "outputs": [] 396 }, 397 { 398 "cell_type": "markdown", 399 "metadata": { 400 "id": "QqHXCNa3myor" 401 }, 402 "source": [ 403 "## Compare the Quantized Model size\n", 404 "We should be able to see the quantized model is four times smaller than the original model." 405 ] 406 }, 407 { 408 "cell_type": "code", 409 "metadata": { 410 "id": "imFPw007juVG" 411 }, 412 "source": [ 413 "!du -h jax_mnist.tflite\n", 414 "!du -h jax_mnist_quant.tflite" 415 ], 416 "execution_count": null, 417 "outputs": [] 418 } 419 ] 420}