AutoML Vision Edge Container Predict sample (#2117) · blechdom/python-docs-samples@029eb5a · GitHub

Commit 029eb5a

genquan9 authored and dizcology committed

AutoML Vision Edge Container Predict sample (GoogleCloudPlatform#2117)

* AutoML Vision Edge Container Predict sample
* add one blank line after the function definition
* Refine README with clarifications and sample outputs, and rename encoded_string as encoded_image
* change port number and add docker link
* rewrite tests in pytest
* update port number from 8501 to 8505 to be consistent in README
* rename test names and move run/stop dockers to setup/teardown
* add comments to clarify the predictions with multiple images
* add comments to clarify the predictions with multiple images in prediction
* update indents from 2 to 4 spaces
* update files with too long lines
* fix trailing white spaces
* skip tests for now and add a TODO to enable it in future
1 parent 1ed0e86 commit 029eb5a

File tree

4 files changed: +249 −0 lines changed

Lines changed: 71 additions & 0 deletions
@@ -0,0 +1,71 @@
# AutoML Vision Edge Container Prediction

This is an example showing how to predict with AutoML Vision Edge Containers.
The test (automl_vision_edge_container_predict_test.py) shows an automatic way
to run the prediction.

If you want to try the test manually with a sample model, please install
[gsutil tools](https://cloud.google.com/storage/docs/gsutil_install) and
[Docker CE](https://docs.docker.com/install/) first, and then follow the steps
below. All the following commands assume you are in this folder with the
shell variables set as

```bash
$ CONTAINER_NAME=AutomlVisionEdgeContainerPredict
$ PORT=8505
```

+ Step 1. Pull the Docker image.

```bash
# This is a CPU TFServing 1.12.0 with some default settings compiled from
# https://hub.docker.com/r/tensorflow/serving.
$ DOCKER_GCS_DIR=gcr.io/automl-vision-ondevice
$ CPU_DOCKER_GCS_PATH=${DOCKER_GCS_DIR}/gcloud-container-1.12.0:latest
$ sudo docker pull ${CPU_DOCKER_GCS_PATH}
```

+ Step 2. Get a sample saved model.

```bash
$ MODEL_GCS_DIR=gs://cloud-samples-data/vision/edge_container_predict
$ SAMPLE_SAVED_MODEL=${MODEL_GCS_DIR}/saved_model.pb
$ mkdir model_path
$ YOUR_MODEL_PATH=$(realpath model_path)
$ gsutil -m cp ${SAMPLE_SAVED_MODEL} ${YOUR_MODEL_PATH}
```

+ Step 3. Run the Docker container.

```bash
$ sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v \
  ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCS_PATH}
```

+ Step 4. Send a prediction request.

```bash
$ python automl_vision_edge_container_predict.py --image_file_path=./test.jpg \
  --image_key=1 --port_number=${PORT}
```

The output looks like

```
{
    'predictions': [
        {
            'scores': [0.0914393, 0.458942, 0.027604, 0.386767, 0.0352474],
            'labels': ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'],
            'key': '1'
        }
    ]
}
```

+ Step 5. Stop the container.

```bash
$ sudo docker stop ${CONTAINER_NAME}
```
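The request that Step 4 sends is plain JSON over REST, so it can also be composed by hand. A minimal sketch of the payload construction (the helper name `build_predict_request` is hypothetical, not part of the sample; it assumes the same `instances`/`image_bytes`/`b64` request shape the sample uses):

```python
import base64
import json


def build_predict_request(image_bytes, image_key):
    # Hypothetical helper: builds the JSON body the sample posts to
    # TFServing. Images travel as base64 text under 'image_bytes'/'b64';
    # the 'key' is echoed back so responses can be matched to requests.
    encoded_image = base64.b64encode(image_bytes).decode('utf-8')
    return json.dumps({
        'instances': [
            {'image_bytes': {'b64': encoded_image}, 'key': image_key}
        ]
    })


payload = build_predict_request(b'\xff\xd8\xff fake jpeg bytes', '1')
print(json.loads(payload)['instances'][0]['key'])  # -> 1
```

The resulting string is exactly what `automl_vision_edge_container_predict.py` sends in the body of its POST to `http://localhost:${PORT}/v1/models/default:predict`.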
Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
#!/usr/bin/env python

# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

r"""This is an example of calling the REST API of a TFServing docker container.

Examples:
    python automl_vision_edge_container_predict.py \
        --image_file_path=./test.jpg --image_key=1 --port_number=8505
"""

import argparse

# [START automl_vision_edge_container_predict]

import base64
import io
import json
import requests


def container_predict(image_file_path, image_key, port_number=8501):
    """Sends a prediction request to the TFServing docker container REST API.

    Args:
        image_file_path: Path to a local image for the prediction request.
        image_key: Your chosen string key to identify the given image.
        port_number: The port number on your device to accept REST API calls.
    Returns:
        The response of the prediction request.
    """

    with io.open(image_file_path, 'rb') as image_file:
        encoded_image = base64.b64encode(image_file.read()).decode('utf-8')

    # The example here only shows prediction with one image. You can extend
    # it to predict with a batch of images indicated by different keys, so
    # that each response can be matched to the image it belongs to.
    instances = {
        'instances': [
            {'image_bytes': {'b64': str(encoded_image)},
             'key': image_key}
        ]
    }

    # This example sends requests on the same machine where the docker
    # container is running. To send requests to another server, replace
    # localhost with that server's IP address.
    url = 'http://localhost:{}/v1/models/default:predict'.format(port_number)

    response = requests.post(url, data=json.dumps(instances))
    print(response.json())
    # [END automl_vision_edge_container_predict]
    return response.json()


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--image_file_path', type=str)
    parser.add_argument('--image_key', type=str, default='1')
    parser.add_argument('--port_number', type=int, default=8501)
    args = parser.parse_args()

    container_predict(args.image_file_path, args.image_key, args.port_number)


if __name__ == '__main__':
    main()
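The inline comment notes that `container_predict` handles one image but can be extended to a batch. One way to sketch that extension (the helper `build_batch_instances` is hypothetical, not part of the sample; it assumes the same `instances` request shape) is to build one instance per image, each with a distinct key:

```python
import base64


def build_batch_instances(images_by_key):
    """Hypothetical helper: builds a multi-image TFServing request body.

    Args:
        images_by_key: Dict mapping a distinct string key to raw image bytes.
    Returns:
        A dict with one instance per image, ready for json.dumps().
    """
    instances = []
    for key, image_bytes in sorted(images_by_key.items()):
        encoded_image = base64.b64encode(image_bytes).decode('utf-8')
        # Distinct keys let each entry in response['predictions'] be
        # matched back to the image that produced it.
        instances.append(
            {'image_bytes': {'b64': encoded_image}, 'key': key})
    return {'instances': instances}


body = build_batch_instances({'1': b'first image', '2': b'second image'})
print([instance['key'] for instance in body['instances']])  # -> ['1', '2']
```

Posting `json.dumps(body)` to the same `:predict` URL would then return one entry per key under `'predictions'`.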
Lines changed: 97 additions & 0 deletions
@@ -0,0 +1,97 @@
#!/usr/bin/env python

# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Tests for automl_vision_edge_container_predict.

The test will automatically start a container with a sample saved_model.pb,
send a request with one image, verify the response and delete the started
container.

If you want to try the test, please install
[gsutil tools](https://cloud.google.com/storage/docs/gsutil_install) and
[Docker CE](https://docs.docker.com/install/) first.

Examples:
    sudo python -m pytest automl_vision_edge_container_predict_test.py
"""

import os
import subprocess
import time

import automl_vision_edge_container_predict as predict
import pytest


# The absolute path of the current file. This is used to locate model_path
# when running the docker container.
ROOT_DIR = os.path.abspath(os.path.dirname(__file__))
MODEL_PATH = os.path.join(ROOT_DIR, 'model_path')
# The CPU docker gcs path is from 'Edge container tutorial'.
DOCKER_GCS_DIR = 'gcr.io/automl-vision-ondevice/'
CPU_DOCKER_GCS_PATH = DOCKER_GCS_DIR + 'gcloud-container-1.12.0:latest'
# The path of a sample saved model.
MODEL_GCS_DIR = 'gs://cloud-samples-data/vision/edge_container_predict/'
SAMPLE_SAVED_MODEL = MODEL_GCS_DIR + 'saved_model.pb'
# Container Name.
NAME = 'AutomlVisionEdgeContainerPredictTest'
# Port Number.
PORT_NUMBER = 8505


@pytest.fixture
def edge_container_predict_server_port():
    # Set up: pull the CPU docker image.
    subprocess.check_output(['docker', 'pull', CPU_DOCKER_GCS_PATH])

    # Get the sample saved model.
    if not os.path.exists(MODEL_PATH):
        os.mkdir(MODEL_PATH)
    subprocess.check_output(
        ['gsutil', '-m', 'cp', SAMPLE_SAVED_MODEL, MODEL_PATH])

    # Start the CPU docker container.
    subprocess.Popen(['docker', 'run', '--rm', '--name', NAME, '-v',
                      MODEL_PATH + ':/tmp/mounted_model/0001', '-p',
                      str(PORT_NUMBER) + ':8501', '-t',
                      CPU_DOCKER_GCS_PATH])
    # Sleep a few seconds to wait for the container to start.
    time.sleep(10)

    yield PORT_NUMBER

    # Tear down: stop the container.
    subprocess.check_output(['docker', 'stop', NAME])
    # Remove the docker image.
    subprocess.check_output(['docker', 'rmi', CPU_DOCKER_GCS_PATH])


# TODO(dizcology): Enable tests in future.
@pytest.mark.skip(reason='skipping to avoid running docker in docker')
def test_edge_container_predict(capsys, edge_container_predict_server_port):
    image_file_path = 'test.jpg'
    # If you send requests with one image each time, the key value does not
    # matter. If you send requests with multiple images, please use different
    # keys to indicate different images, so that each response can be matched
    # to the image it belongs to.
    image_key = '1'
    # Send a request.
    response = predict.container_predict(
        image_file_path, image_key, PORT_NUMBER)
    # Verify the response.
    assert 'predictions' in response
    assert 'key' in response['predictions'][0]
    assert image_key == response['predictions'][0]['key']
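The fixture above waits a fixed `time.sleep(10)` for the container to come up, which can be flaky on slow machines and wasteful on fast ones. A more robust alternative is to poll until the server answers; a minimal sketch (the helper `wait_for_server` is hypothetical, not part of the sample; the `probe` callable is whatever readiness check fits, e.g. an HTTP GET against the model status endpoint):

```python
import time


def wait_for_server(probe, timeout=30.0, interval=0.5):
    """Hypothetical helper: polls until `probe()` is True or timeout elapses.

    Args:
        probe: Zero-argument callable returning True once the server is up,
            e.g. a function that GETs http://localhost:PORT/v1/models/default
            and checks for a 200 response.
        timeout: Maximum seconds to wait.
        interval: Seconds to sleep between probes.
    Returns:
        True if the server became ready within the timeout, else False.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False
```

In the fixture, `time.sleep(10)` could then be replaced by a call such as `wait_for_server(my_probe)`, failing the setup fast if the container never starts.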
Binary file (64.7 KB) not shown.

0 commit comments