
Commit da573b9

Merge pull request #11898 from korrawat/quant-tutorial-update

Update the graph used in quantization tutorial

2 parents: b063737 + 21d8dae

File tree: 1 file changed, +6 -12 lines
performance/quantization.md (6 additions & 12 deletions)
````diff
@@ -89,12 +89,14 @@ here's how you can translate the latest GoogLeNet model into a version that uses
 eight-bit computations:
 
 ```sh
-curl http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz -o /tmp/inceptionv3.tgz
-tar xzf /tmp/inceptionv3.tgz -C /tmp/
+curl -L "https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz" |
+tar -C tensorflow/examples/label_image/data -xz
 bazel build tensorflow/tools/graph_transforms:transform_graph
 bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
---inputs="Mul" --in_graph=/tmp/classify_image_graph_def.pb \
---outputs="softmax" --out_graph=/tmp/quantized_graph.pb \
+--in_graph=tensorflow/examples/label_image/data/inception_v3_2016_08_28_frozen.pb \
+--out_graph=/tmp/quantized_graph.pb \
+--inputs=input \
+--outputs=InceptionV3/Predictions/Reshape_1 \
 --transforms='add_default_attributes strip_unused_nodes(type=float, shape="1,299,299,3")
 remove_nodes(op=Identity, op=CheckNumerics) fold_constants(ignore_errors=true)
 fold_batch_norms fold_old_batch_norms quantize_weights quantize_nodes
@@ -110,15 +112,7 @@ outputs though, and you should get equivalent results. Here's an example:
 ```sh
 bazel build tensorflow/examples/label_image:label_image
 bazel-bin/tensorflow/examples/label_image/label_image \
---image=<input-image> \
 --graph=/tmp/quantized_graph.pb \
---labels=/tmp/imagenet_synset_to_human_label_map.txt \
---input_width=299 \
---input_height=299 \
---input_mean=128 \
---input_std=128 \
---input_layer="Mul:0" \
---output_layer="softmax:0"
 ```
 
 You'll see that this runs the newly-quantized graph, and outputs a very similar
````
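For reference, the `transform_graph` invocation after this change can be sketched as a small shell helper that assembles the command line from its flag values (the helper name `quantize_cmd` is ours, not part of the tutorial); printing the assembled command lets you check the new input/output tensor names before a real `bazel` run:

```shell
#!/bin/sh
# Sketch only: assemble the updated transform_graph command from this diff.
# quantize_cmd is a hypothetical helper; the flag values come from the commit.
GRAPH_DIR=tensorflow/examples/label_image/data
IN_GRAPH="$GRAPH_DIR/inception_v3_2016_08_28_frozen.pb"
OUT_GRAPH=/tmp/quantized_graph.pb

quantize_cmd() {
  # The inputs/outputs flags now name the frozen Inception V3 tensors
  # ("input" and "InceptionV3/Predictions/Reshape_1") rather than the old
  # "Mul"/"softmax" names from the 2015 GoogLeNet download.
  printf '%s\n' "bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=$IN_GRAPH \
--out_graph=$OUT_GRAPH \
--inputs=input \
--outputs=InceptionV3/Predictions/Reshape_1"
}

quantize_cmd
```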
