How do I export a TensorFlow model as a .tflite file?


Background information:

I have written a TensorFlow model very similar to the premade iris classification model provided by TensorFlow. The differences are relatively minor:

  • I am classifying football exercises, not iris species.
  • I have 10 features and one label, not 4 features and one label.
  • I have 5 different exercises, as opposed to 3 iris species.
  • My trainData contains around 3500 rows, not only 120.
  • My testData contains around 330 rows, not only 30.
  • I am using a DNN classifier with n_classes=6, not 3.
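Concretely, my setup looks roughly like this (a minimal sketch; the feature names f0 … f9 are placeholders, not my real column names):

    import tensorflow as tf

    # Sketch only: placeholder feature names, not my actual columns.
    FEATURE_KEYS = ['f{}'.format(i) for i in range(10)]  # 10 numeric features
    feature_columns = [tf.feature_column.numeric_column(key=k)
                       for k in FEATURE_KEYS]

    classifier = tf.estimator.DNNClassifier(
        feature_columns=feature_columns,
        hidden_units=[10, 10],
        n_classes=6)  # n_classes=6, as noted above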

I now want to export the model as a .tflite file. But according to the TensorFlow Developer Guide, I first need to export the model to a tf.GraphDef file, then freeze it, and only then can I convert it. However, the tutorial TensorFlow provides for creating a .pb file from a custom model only seems to cover image classification models.
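As far as I can tell, the "export a GraphDef, then freeze it" part of that route looks roughly like the sketch below (a stand-in toy graph, not my model; I have not managed to apply this to the estimator, mainly because I don't know its tensor names):

    import tensorflow as tf

    # Stand-in toy graph just to illustrate the documented route.
    x = tf.placeholder(tf.float32, shape=(1, 10), name="input")
    w = tf.Variable(tf.zeros([10, 5]), name="weights")
    y = tf.identity(tf.matmul(x, w), name="output")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # Step 1: export the (unfrozen) GraphDef as a text file.
        tf.train.write_graph(sess.graph_def, ".", "graph.pbtxt")

        # Step 2: freeze -- fold the variables into constants and write the
        # self-contained frozen graph to a .pb file.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names=["output"])
        with tf.gfile.GFile("frozen_graph.pb", "wb") as f:
            f.write(frozen.SerializeToString())

        # Step 3 would then be feeding frozen_graph.pb to the TFLite converter.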

Question:

So how do I convert a model like the iris classification example model into a .tflite file? Is there an easier, more direct way to do it, without having to export it to a .pb file, then freeze it and so on? An example based on the iris classification code or a link to a more explicit tutorial would be very useful!


Other information:

  • OS: macOS 10.13.4 High Sierra
  • TensorFlow Version: 1.8.0
  • Python Version: 3.6.4
  • Using PyCharm Community 2018.1.3

Code:

The iris classification code can be cloned by entering the following command:

    git clone https://github.com/tensorflow/models

But in case you don't want to download the whole package, here it is:

This is the classifier file called premade_estimator.py:

    #  Copyright 2016 The TensorFlow Authors. All Rights Reserved.
    #
    #  Licensed under the Apache License, Version 2.0 (the "License");
    #  you may not use this file except in compliance with the License.
    #  You may obtain a copy of the License at
    #
    #  http://www.apache.org/licenses/LICENSE-2.0
    #
    #  Unless required by applicable law or agreed to in writing, software
    #  distributed under the License is distributed on an "AS IS" BASIS,
    #  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    #  See the License for the specific language governing permissions and
    #  limitations under the License.
    """An Example of a DNNClassifier for the Iris dataset."""
    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function

    import argparse
    import tensorflow as tf

    import iris_data

    parser = argparse.ArgumentParser()
    parser.add_argument('--batch_size', default=100, type=int, help='batch size')
    parser.add_argument('--train_steps', default=1000, type=int,
                        help='number of training steps')


    def main(argv):
        args = parser.parse_args(argv[1:])

        # Fetch the data
        (train_x, train_y), (test_x, test_y) = iris_data.load_data()

        # Feature columns describe how to use the input.
        my_feature_columns = []
        for key in train_x.keys():
            my_feature_columns.append(tf.feature_column.numeric_column(key=key))

        # Build 2 hidden layer DNN with 10, 10 units respectively.
        classifier = tf.estimator.DNNClassifier(
            feature_columns=my_feature_columns,
            # Two hidden layers of 10 nodes each.
            hidden_units=[10, 10],
            # The model must choose between 3 classes.
            n_classes=3)

        # Train the Model.
        classifier.train(
            input_fn=lambda: iris_data.train_input_fn(train_x, train_y,
                                                      args.batch_size),
            steps=args.train_steps)

        # Evaluate the model.
        eval_result = classifier.evaluate(
            input_fn=lambda: iris_data.eval_input_fn(test_x, test_y,
                                                     args.batch_size))

        print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))

        # Generate predictions from the model
        expected = ['Setosa', 'Versicolor', 'Virginica']
        predict_x = {
            'SepalLength': [5.1, 5.9, 6.9],
            'SepalWidth': [3.3, 3.0, 3.1],
            'PetalLength': [1.7, 4.2, 5.4],
            'PetalWidth': [0.5, 1.5, 2.1],
        }

        predictions = classifier.predict(
            input_fn=lambda: iris_data.eval_input_fn(predict_x,
                                                     labels=None,
                                                     batch_size=args.batch_size))

        template = '\nPrediction is "{}" ({:.1f}%), expected "{}"'

        for pred_dict, expec in zip(predictions, expected):
            class_id = pred_dict['class_ids'][0]
            probability = pred_dict['probabilities'][class_id]

            print(template.format(iris_data.SPECIES[class_id],
                                  100 * probability, expec))


    if __name__ == '__main__':
        # tf.logging.set_verbosity(tf.logging.INFO)
        tf.app.run(main)

And this is the data file called iris_data.py:

    import pandas as pd
    import tensorflow as tf

    TRAIN_URL = "http://download.tensorflow.org/data/iris_training.csv"
    TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"

    CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth',
                        'PetalLength', 'PetalWidth', 'Species']
    SPECIES = ['Setosa', 'Versicolor', 'Virginica']


    def maybe_download():
        train_path = tf.keras.utils.get_file(TRAIN_URL.split('/')[-1], TRAIN_URL)
        test_path = tf.keras.utils.get_file(TEST_URL.split('/')[-1], TEST_URL)

        return train_path, test_path


    def load_data(y_name='Species'):
        """Returns the iris dataset as (train_x, train_y), (test_x, test_y)."""
        train_path, test_path = maybe_download()

        train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
        train_x, train_y = train, train.pop(y_name)

        test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
        test_x, test_y = test, test.pop(y_name)

        return (train_x, train_y), (test_x, test_y)


    def train_input_fn(features, labels, batch_size):
        """An input function for training"""
        # Convert the inputs to a Dataset.
        dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))

        # Shuffle, repeat, and batch the examples.
        dataset = dataset.shuffle(1000).repeat().batch(batch_size)

        # Return the dataset.
        return dataset


    def eval_input_fn(features, labels, batch_size):
        """An input function for evaluation or prediction"""
        features = dict(features)
        if labels is None:
            # No labels, use only features.
            inputs = features
        else:
            inputs = (features, labels)

        # Convert the inputs to a Dataset.
        dataset = tf.data.Dataset.from_tensor_slices(inputs)

        # Batch the examples
        assert batch_size is not None, "batch_size must not be None"
        dataset = dataset.batch(batch_size)

        # Return the dataset.
        return dataset

** UPDATE **

OK, so I have found a seemingly very useful piece of code on this page:

    import tensorflow as tf

    img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
    val = img + tf.constant([1., 2., 3.]) + tf.constant([1., 4., 4.])
    out = tf.identity(val, name="out")

    with tf.Session() as sess:
        tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
        open("test.tflite", "wb").write(tflite_model)

This little guy directly converts a simple model to a TensorFlow Lite Model. Now all I have to do is find a way to adapt this to the iris classification model. Any suggestions?

Answer

Is there an easier, more direct way to do it, without having to export it to a .pb file, then freeze it and so on?

Yes, as you pointed out in your update, it is possible to freeze the graph and use toco_convert from the Python API directly. toco_convert needs the graph to be frozen and the input and output shapes to be determined. In the snippet from your update there is no freeze-graph step, because there are no variables. If you have variables and run toco without converting them to constants first, toco will complain!
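To make that concrete, here is a minimal toy sketch (not taken from your model) of a graph that does contain a variable; it has to go through convert_variables_to_constants before toco_convert will accept it:

    import tensorflow as tf

    # A toy graph with a variable; input/output shapes are fully defined,
    # as toco requires.
    x = tf.placeholder(tf.float32, shape=(1, 4), name="x")
    w = tf.Variable(tf.ones([4, 3]), name="w")
    y = tf.identity(tf.matmul(x, w), name="y")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # Freeze first: the variable ops are replaced by constants.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names=["y"])

        # Passing sess.graph_def here instead of the frozen GraphDef would make
        # toco complain about the unsupported variable ops.
        tflite_model = tf.contrib.lite.toco_convert(frozen, [x], [y])
        open("toy_model.tflite", "wb").write(tflite_model)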

Now all I have to do is find a way to adapt this to the iris classification model. Any suggestions?

This one is slightly trickier and needs more work. Basically, you need to load the graph, figure out the input and output tensor names, freeze the graph, and then call toco_convert. To find the input and output tensor names in this case (where you have not defined the graph yourself), you have to poke around the generated graph and determine them based on the input shapes, names, and so on. Here is the code you can append at the end of your main function in premade_estimator.py to generate the .tflite model in this case.

print("\n====== classifier model_dir, latest_checkpoint ===========")
print(classifier.model_dir)
print(classifier.latest_checkpoint())
debug = Falsewith tf.Session() as sess:# First let's load meta graph and restore weightslatest_checkpoint_path = classifier.latest_checkpoint()saver = tf.train.import_meta_graph(latest_checkpoint_path + '.meta')saver.restore(sess, latest_checkpoint_path)# Get the input and output tensors needed for toco.# These were determined based on the debugging info printed / saved below.input_tensor = sess.graph.get_tensor_by_name("dnn/input_from_feature_columns/input_layer/concat:0")input_tensor.set_shape([1, 4])out_tensor = sess.graph.get_tensor_by_name("dnn/logits/BiasAdd:0")out_tensor.set_shape([1, 3])# Pass the output node name we are interested in.# Based on the debugging info printed / saved below, pulled out the# name of the node for the logits (before the softmax is applied).frozen_graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, output_node_names=["dnn/logits/BiasAdd"])if debug is True:print("\nORIGINAL GRAPH DEF Ops ===========================================")ops = sess.graph.get_operations()for op in ops:if "BiasAdd" in op.name or "input_layer" in op.name:print([op.name, op.values()])# save original graphdef to text filewith open("estimator_graph.pbtxt", "w") as fp:fp.write(str(sess.graph_def))print("\nFROZEN GRAPH DEF Nodes ===========================================")for node in frozen_graph_def.node:print(node.name)# save frozen graph def to text filewith open("estimator_frozen_graph.pbtxt", "w") as fp:fp.write(str(frozen_graph_def))tflite_model = tf.contrib.lite.toco_convert(frozen_graph_def, [input_tensor], [out_tensor])
open("estimator_model.tflite", "wb").write(tflite_model)

Note: I am using the logits from the final layer (before the softmax is applied) as the output, corresponding to the node dnn/logits/BiasAdd. If you want the probabilities, I believe the node is dnn/head/predictions/probabilities.
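If you do want the probabilities, the only changes are the output tensor and node name. A sketch of that variant (same structure as the code above; the output filename is just an example, and please verify the node name against the debug printout, since I have not re-run this version):

    with tf.Session() as sess:
        latest_checkpoint_path = classifier.latest_checkpoint()
        saver = tf.train.import_meta_graph(latest_checkpoint_path + '.meta')
        saver.restore(sess, latest_checkpoint_path)

        input_tensor = sess.graph.get_tensor_by_name(
            "dnn/input_from_feature_columns/input_layer/concat:0")
        input_tensor.set_shape([1, 4])

        # Probabilities node instead of the raw logits.
        prob_tensor = sess.graph.get_tensor_by_name(
            "dnn/head/predictions/probabilities:0")
        prob_tensor.set_shape([1, 3])

        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def,
            output_node_names=["dnn/head/predictions/probabilities"])

        tflite_model = tf.contrib.lite.toco_convert(
            frozen_graph_def, [input_tensor], [prob_tensor])
        open("estimator_model_probs.tflite", "wb").write(tflite_model)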
