@@ -89,12 +89,14 @@ here's how you can translate the latest GoogLeNet model into a version that uses
eight-bit computations:
```sh
- curl http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz -o /tmp/inceptionv3.tgz
- tar xzf /tmp/inceptionv3.tgz -C /tmp/
+ curl -L "https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz" |
+ tar -C tensorflow/examples/label_image/data -xz
bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
- --inputs="Mul" --in_graph=/tmp/classify_image_graph_def.pb \
- --outputs="softmax" --out_graph=/tmp/quantized_graph.pb \
+ --in_graph=tensorflow/examples/label_image/data/inception_v3_2016_08_28_frozen.pb \
+ --out_graph=/tmp/quantized_graph.pb \
+ --inputs=input \
+ --outputs=InceptionV3/Predictions/Reshape_1 \
--transforms='add_default_attributes strip_unused_nodes(type=float, shape="1,299,299,3")
remove_nodes(op=Identity, op=CheckNumerics) fold_constants(ignore_errors=true)
fold_batch_norms fold_old_batch_norms quantize_weights quantize_nodes
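Before running the quantized graph, it can be worth sanity-checking what the transform produced. The same `graph_transforms` folder includes a `summarize_graph` tool that reports a graph's inputs, outputs, and op-type counts; a quick check along these lines, assuming the transform above wrote `/tmp/quantized_graph.pb`:

```sh
# Print the inputs, outputs, and op-type counts of the transformed graph.
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
  --in_graph=/tmp/quantized_graph.pb
```

If quantization succeeded, the op-type counts should list eight-bit kernels such as `QuantizedConv2D` and `QuantizedMatMul` where most of the float `Conv2D` and `MatMul` ops used to be.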
@@ -110,15 +112,7 @@ outputs though, and you should get equivalent results. Here's an example:
```sh
bazel build tensorflow/examples/label_image:label_image
bazel-bin/tensorflow/examples/label_image/label_image \
- --image=<input-image> \
--graph=/tmp/quantized_graph.pb \
- --labels=/tmp/imagenet_synset_to_human_label_map.txt \
- --input_width=299 \
- --input_height=299 \
- --input_mean=128 \
- --input_std=128 \
- --input_layer="Mul:0" \
- --output_layer="softmax:0"
```
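If you want a float baseline to compare against, you can point the same binary at the frozen float graph downloaded earlier; a sketch, assuming that file is still under `tensorflow/examples/label_image/data`:

```sh
# Run the original float graph so its output can be compared with the quantized run.
bazel-bin/tensorflow/examples/label_image/label_image \
  --graph=tensorflow/examples/label_image/data/inception_v3_2016_08_28_frozen.pb
```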
You'll see that the first label_image invocation runs the newly-quantized graph, and outputs a very similar