Create a TensorFlow Lite file
TensorFlow Lite is a runtime for executing TensorFlow models on mobile and embedded devices.
In this guide, we use a Guild project to implement a simple workflow for generating an object detector that can be deployed on iPhone and Android devices.
Verify sample object detector project
Follow the steps in Create an object detector to create a Guild AI project containing an object detector.
Verify each of the steps below.
PROJECT environment variable

Confirm that the PROJECT environment variable is set to the sample project location. If PROJECT is not defined, set it to the sample project location:

set PROJECT=<location of sample object detector project>

Replace <location of sample object detector project> with the full path to the sample project from Create an object detector.
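Before continuing, it can help to sanity-check the variable. The following is a minimal POSIX-shell sketch, not part of Guild itself; the guild.yml check is our own assumption about the sample project layout:

```shell
# Sketch: fail fast if PROJECT is unset or does not look like a
# Guild project directory. The guild.yml check is an assumption
# about the sample project layout, not a Guild requirement.
check_project() {
    if [ -z "$PROJECT" ]; then
        echo "PROJECT is not set" >&2
        return 1
    fi
    if [ ! -f "$PROJECT/guild.yml" ]; then
        echo "no guild.yml under $PROJECT" >&2
        return 1
    fi
    echo "PROJECT looks usable: $PROJECT"
}
```

Running check_project after setting PROJECT gives an early warning if the variable points at the wrong place.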
Activate and verify project environment

Change to the project directory:

cd $PROJECT

Activate the environment:

source guild-env

If you see the message Guild environment ./env does not exist, revisit the steps in Initialize a project.

Use guild check to verify the environment:

guild check

Confirm that the value for guild_home is in the project directory.
List available project operations

From the project directory, list operations by running:

guild ops

Guild should display the available operations for the object detector:

./detector:detect             Detect images using a trained detector
./detector:evaluate           Evaluate a trained detector
./detector:export-and-freeze  Export a detection graph with checkpoint weights
./detector:prepare            Prepare images annotated using Pascal VOC format
./detector:train              Train detector from scratch
./detector:transfer-learn     Train detector using transfer learning
If you see a different list of operations, verify that the project Guild file (guild.yml in the project directory) is:

- model: detector
  description: Sample object detector
  extends:
    - gpkg.object-detect/voc-annotated-images-directory-support
    - gpkg.object-detect/ssd-mobilenet-v2
If you receive an error message, verify that the project environment is active (see above) and that gpkg.object-detect is installed. To view the list of installed Guild packages, run:

guild packages

If gpkg.object-detect is not shown in the list, install it by running:

guild install gpkg.object-detect
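This check can also be scripted. The sketch below assumes that guild packages lists one package per line with the package name in the first column, and uses a GUILD variable only so the command can be stubbed out for testing:

```shell
# Sketch: succeed if the named package appears in the installed-package
# listing. GUILD defaults to the real CLI; the first-column layout of
# the guild packages output is an assumption.
GUILD=${GUILD:-guild}
pkg_installed() {
    $GUILD packages | awk '{print $1}' | grep -qx "$1"
}
```

With that helper, pkg_installed gpkg.object-detect || $GUILD install gpkg.object-detect installs the package only when it is missing.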
The modifications we make below require a new Guild package, gpkg.tflite, which provides support for TensorFlow Lite. Verify that the project environment is activated (see above) and install gpkg.tflite by running:

guild install gpkg.tflite
Add TensorFlow Lite support
In this section, we add support to our object detector for generating TensorFlow Lite files. Modify guild.yml to be:

- model: detector
  description: Sample object detector
  extends:
    - gpkg.object-detect/voc-annotated-images-directory-support
    - gpkg.object-detect/ssd-mobilenet-v2
    - gpkg.tflite/tflite-support
This change adds tflite-support to the list of model extensions. By extending tflite-support we inherit a new operation, tflite, which is used to generate a TensorFlow Lite file from a frozen inference graph.

Save your changes to guild.yml.
Verify that the detector now has the tflite operation by running:
guild ops tflite
Guild should show the new operation:
./detector:tflite Generate a TFLite file from a frozen graph
Modify export-and-freeze to support tflite

To generate a tflite file, we need to make a change to the export-and-freeze operation. The exported graph needs additional operations to support TensorFlow Lite.
The export support in gpkg.object-detect handles this by way of a tflite flag, which, when set to yes, causes the exported graph to include the required operations.
Let’s modify our model definition so that this behavior is enabled by default.
Modify guild.yml to be:

- model: detector
  description: Sample object detector
  extends:
    - gpkg.object-detect/voc-annotated-images-directory-support
    - gpkg.object-detect/ssd-mobilenet-v2
    - gpkg.tflite/tflite-support
  operations:
    export-and-freeze:
      flags:
        tflite: yes
This change modifies the default value of the tflite flag to yes. The rest of the configuration for the export-and-freeze operation remains unmodified.

Save your changes to guild.yml.
You can verify the new default value by running:
guild run export-and-freeze --help-op
The --help-op option tells Guild to show operation help without running the operation. You can use this option whenever you have a question about an operation's use and its supported flags.

Note the definition of the tflite flag:

tflite  Whether or not to export graph with support for TensorFlow Lite (yes)

The default value is listed in parentheses as yes.
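The parenthesized default can also be pulled out of the help text mechanically. The following is a small sketch that assumes the "name  description (default)" line format shown above:

```shell
# Sketch: print the parenthesized default value for a flag, reading
# operation help text from stdin. The line format is an assumption
# based on the help output shown above.
flag_default() {
    sed -n "s/^ *$1 .*(\(.*\))\$/\1/p"
}
```

With the configuration above, piping the output of guild run export-and-freeze --help-op into flag_default tflite would print yes.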
Verify a trained model
To generate a tflite file, you must first train a detector. If you have not already run the transfer-learn operation, revisit Train a detector using transfer learning.
Verify that you have a trained model by running:
guild ls -o transfer-learn
If you see No matching runs, train a detector before continuing.
Generate a tflite-compatible graph
Run export-and-freeze to generate a frozen inference graph that supports TensorFlow Lite:
guild run export-and-freeze
Confirm that the tflite flag is set to yes (the new default value) and press Enter to start the operation.
Guild generates a frozen inference graph. You can verify the graph files generated by running:
guild ls -p graph
graph/
graph/frozen_inference_graph.pb
graph/tflite_graph.pb
graph/tflite_graph.pbtxt
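Before moving on, it can be worth confirming that the tflite graph files are actually present. The sketch below assumes the run layout listed above, with RUN_DIR standing in for an actual Guild run directory:

```shell
# Sketch: check that the tflite-compatible graph files exist under a
# run directory ($1). The file names come from the listing above.
graph_ready() {
    for f in graph/tflite_graph.pb graph/tflite_graph.pbtxt; do
        if [ ! -f "$1/$f" ]; then
            echo "missing $f" >&2
            return 1
        fi
    done
    echo "graph is tflite-ready"
}
```

A non-zero exit from graph_ready RUN_DIR means export-and-freeze needs to be rerun before tflite.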
Generate a tflite file
Now that we have a frozen inference graph that supports tflite, we can run the tflite operation:
guild run tflite
Press Enter to confirm.
Guild generates a tflite file.
View the run files:

guild ls

Guild shows the run files, which include the generated model.tflite.
You can deploy the tflite file by copying it from the run directory. To get the full path of the tflite file, use:
guild ls -f -p model.tflite
The -f option tells Guild to show the full path to the file.
In this guide, we modified the object detector project from Create an object detector to support TensorFlow Lite. Specifically, we made the following changes:

- Extended the model definition with gpkg.tflite/tflite-support to inherit the tflite operation, which is run to generate a tflite file.
- Changed the default value of the export-and-freeze tflite flag to yes so that exported graphs include the operations required by TensorFlow Lite.

With these changes, we can generate a tflite file for our object detector by running transfer-learn, export-and-freeze, and tflite in sequence.
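The sequence is transfer-learn, followed by export-and-freeze, followed by tflite. It can be wrapped in one helper; the sketch below assumes Guild's -y option, which skips the confirmation prompt, and uses a GUILD variable only so the commands can be previewed with echo:

```shell
# Sketch: run the three operations from this guide back to back.
# GUILD defaults to the real CLI; point it at echo for a dry run.
# -y answers the confirmation prompt automatically.
GUILD=${GUILD:-guild}
build_tflite() {
    $GUILD run transfer-learn -y &&
        $GUILD run export-and-freeze -y &&
        $GUILD run tflite -y
}
```

Each step runs only if the previous one succeeds, so a failed training run stops the pipeline before export.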