In part 1 we learned how to prepare data and use a pre-trained Keras model to make predictions. In this part, I will show how to modify and re-train the pre-trained model so that it predicts our data labels correctly.
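If you are jumping in at this part, here is a minimal recap sketch of the setup from part 1 (the test image filename is an assumption): load the pre-trained Inception-V3 model and preprocess one sample image.

import numpy as np
import pandas as pd
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.preprocessing import image

# Pre-trained Inception-V3, including its original 1000-class top layer
model = InceptionV3(weights='imagenet')

# One preprocessed sample image, used below to inspect intermediate outputs
img = image.load_img('dog-food-test.jpg', target_size=(299, 299))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)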

Discover the Last Layers of the Pre-trained Model

pd.DataFrame(model.layers).tail()
308	<keras.layers.merge.Concatenate object at 0x11...
309	<keras.layers.core.Activation object at 0x11ec...
310	<keras.layers.merge.Concatenate object at 0x11...
311	<keras.layers.pooling.GlobalAveragePooling2D o...
312	<keras.layers.core.Dense object at 0x11ecbbac8>

The last layer is a fully connected layer that outputs the data label classes (1000 classes). The previous layer is the GlobalAveragePooling2D layer; its output is the input of that last fully connected layer. In order to adapt the model to our labels (2 classes), we need to remove the last layer and replace it with new layers that output 2 classes.

To visualize the output of the GlobalAveragePooling2D layer, we construct a model that outputs this intermediate layer:

from keras.models import Model

intermediate_model = Model(inputs=model.input, outputs=model.layers[311].output)
%matplotlib inline
features = intermediate_model.predict(x)
pd.DataFrame(features.reshape(-1,1)).plot(figsize=(12,3))
<matplotlib.axes._subplots.AxesSubplot at 0x120c27748>

The output of the GlobalAveragePooling2D layer is a 2048-dimensional feature vector. The Inception-V3 model classifies 1000 classes by using a Dense layer at the end of the network, which takes these features as input. Now we want to classify the “Dog Food/Cat Food” classes, so we need to remove the last layer and add another Dense layer.

from keras.layers import Dense

x = intermediate_model.output
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # This now becomes the last layer of our model
transfer_model = Model(inputs=intermediate_model.input, outputs=predictions)
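A quick sanity check confirms the new head produces a two-class output:

print(transfer_model.output_shape)  # (None, 2): one probability per class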

For fine-tuning, we do not want to re-train all layers of the network; we only want to train the newly added Dense layers. We can do this by freezing the layers of the intermediate model:

for layer in transfer_model.layers:
    layer.trainable = False
# Unfreeze the two last layers
transfer_model.layers[312].trainable = True
transfer_model.layers[313].trainable = True
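To verify the freezing worked as intended, we can list the trainable layers before compiling (a quick sanity check, not required):

for layer in transfer_model.layers:
    if layer.trainable:
        print(layer.name)   # expect only the two Dense layers added above
transfer_model.summary()    # "Trainable params" should now be a small fraction of the total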

Compile and re-train the transfer model:

transfer_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
transfer_model.fit(X_train, y_train, 
                   epochs=20, verbose=2, 
                   validation_data=(X_test, y_test))
...
Epoch 18/20
122s - loss: 0.1362 - acc: 0.9750 - val_loss: 0.5042 - val_acc: 0.7250
Epoch 19/20
110s - loss: 0.1278 - acc: 0.9688 - val_loss: 0.4404 - val_acc: 0.7750
Epoch 20/20
113s - loss: 0.1292 - acc: 0.9750 - val_loss: 0.5495 - val_acc: 0.7250
<keras.callbacks.History at 0x1201c3550>
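For reference, the X_train, y_train, X_test, and y_test used above are the preprocessed image arrays and one-hot labels prepared in part 1. A rough sketch of that preparation (the file-path lists and the train/test split are assumptions) could look like this:

from sklearn.model_selection import train_test_split
from keras.utils import to_categorical

def load_images(paths):
    # Load each image at the Inception-V3 input size and apply the same preprocessing
    arrays = [image.img_to_array(image.load_img(p, target_size=(299, 299))) for p in paths]
    return preprocess_input(np.array(arrays))

# dog_paths / cat_paths are assumed lists of image file paths (label 0 = dog food, 1 = cat food)
X = load_images(dog_paths + cat_paths)
y = to_categorical([0] * len(dog_paths) + [1] * len(cat_paths), num_classes=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)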

Evaluate the model

loss, acc = transfer_model.evaluate(X_test, y_test)
print('Loss: {}, Accuracy: {}'.format(loss, acc))
40/40 [==============================] - 24s    
Loss: 0.5231947183609009, Accuracy: 0.725

Clearly we have a variance problem: training accuracy is much higher than validation accuracy, so we would need more training data for our model. For this tutorial we are going to keep using this trained model.

Make a prediction

img_path = 'dog-food-test.jpg'
img = image.load_img(img_path, target_size=(299, 299))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = transfer_model.predict(x)
print(preds)
[[ 0.99878365  0.00121639]]

Our fine-tuned model predicts that the above image is dog food, which is a correct prediction.

Now we have created and trained a model that can predict whether an image is dog food or cat food. Let’s deploy this model to Google Cloud. Deploying the model to Google Cloud helps us scale it up in production to serve many users.

3. Deploy Model to Google Cloud

We can upload our model to Google Cloud, where Google Cloud ML Engine will serve it, accept prediction requests through a REST API, and handle autoscaling as well.

Build a graph that converts images

Since the Keras model accepts only a raw image array as input, we need to build a graph that converts an encoded image into a raw image array.

import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K


sess = K.get_session()


# Make GraphDef of Transfer Model
g_trans = sess.graph
g_trans_def = graph_util.convert_variables_to_constants(sess,
                                                        g_trans.as_graph_def(),
                                                        [transfer_model.output.name.replace(':0','')])

# Image Converter Model
with tf.Graph().as_default() as g_input:
    input_b64 = tf.placeholder(shape=(1,), dtype=tf.string, name='input')
    input_bytes = tf.decode_base64(input_b64[0])
    image = tf.image.decode_image(input_bytes)
    image_f = tf.image.convert_image_dtype(image, dtype=tf.float32)
    input_image = tf.expand_dims(image_f, 0)
    output = tf.identity(input_image, name='input_image')

g_input_def = g_input.as_graph_def()



with tf.Graph().as_default() as g_combined:
    x = tf.placeholder(tf.string, name="input_b64")

    im, = tf.import_graph_def(g_input_def,
                              input_map={'input:0': x},
                              return_elements=["input_image:0"])

    pred, = tf.import_graph_def(g_trans_def,
                                input_map={transfer_model.input.name: im,
                                          'batch_normalization_1/keras_learning_phase:0': False},
                                return_elements=[transfer_model.output.name])

    with tf.Session() as sess2:
        inputs = {"inputs": tf.saved_model.utils.build_tensor_info(x)}
        outputs = {"outputs": tf.saved_model.utils.build_tensor_info(pred)}
        signature = tf.saved_model.signature_def_utils.build_signature_def(
            inputs=inputs,
            outputs=outputs,
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
        )

        # Save as SavedModel
        b = tf.saved_model.builder.SavedModelBuilder('exported_model')
        b.add_meta_graph_and_variables(sess2,
                                       [tf.saved_model.tag_constants.SERVING],
                                       signature_def_map={'serving_default': signature})
        b.save()
INFO:tensorflow:Froze 380 variables.
Converted 380 variables to const ops.
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b'exported_model/saved_model.pb'

We just exported the model to the exported_model folder. This folder contains a saved_model.pb file, which stores the structure of our model in Google’s protobuf format. There is also a variables sub-folder that contains a checkpoint of the trained weights of our network. Now the model is ready to upload to Google Cloud.
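If you want to double-check what was exported before uploading, the saved_model_cli tool that ships with TensorFlow can inspect the saved signatures (optional):

saved_model_cli show --dir exported_model --all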

Configure a Google Cloud Project

Next we need to create a project on Google Cloud Platform to host our model. This can easily be done from the Google Cloud Platform dashboard. Let’s name this project “Deep Learning”. From here you can access all the different Google Cloud services. In order to run our model on Google Cloud we need to enable the Google Cloud Machine Learning service: go to the APIs & Services menu, find the Google Cloud ML service, and enable it. Once the service is enabled, we are good to upload our model.

We will use the Google Cloud SDK to work with Google Cloud. Go to https://cloud.google.com/sdk/ to download it and learn how to use it. Once you have installed the SDK, everything is ready to go.

First we need to run the following command to configure the Google Cloud environment on our computer:

gcloud init
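If you skipped enabling the Google Cloud ML service in the console above, recent versions of the SDK can usually enable it from the command line as well:

gcloud services enable ml.googleapis.com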

Uploading our model

Now we are ready to upload our model to Google Cloud. We use gsutil from the SDK to do so.

First we need to create a bucket to store our model. Let’s create a bucket named datasart-dl-model on Google Cloud:

gsutil mb -l us-central1 gs://datasart-dl-model

Next we need to upload our model to the created bucket:

gsutil cp -R exported_model/* gs://datasart-dl-model/cat_dog_food_v1/

Next we have to tell the Google Cloud ML engine that we want to create a new model:

gcloud ml-engine models create fooddetector --regions us-central1

Finally we can create a running version of the model:

gcloud ml-engine versions create v1 --model=fooddetector --origin=gs://datasart-dl-model/cat_dog_food_v1/ --runtime-version=1.2

This will take several minutes to finish. Once the process is done, we are ready to use the model anywhere.
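Before writing any client code, one quick way to smoke-test the deployed version is the SDK’s predict command, assuming a request.json file that contains one JSON instance per line, e.g. {"inputs": "<base64-encoded image>"}:

gcloud ml-engine predict --model=fooddetector --version=v1 --json-instances=request.json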

Using the model

Now we can call the model from Google Cloud to predict whether an image is dog food or cat food.

We can use the Google API client library to call our model. For Python you can easily install the library with pip.
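For example, this installs the two packages imported below:

pip install --upgrade google-api-python-client oauth2client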

from oauth2client.client import GoogleCredentials
import googleapiclient.discovery

We will need credentials to access our Google Cloud service. Go to the Google Cloud Platform dashboard (APIs & Services -> Credentials), create a service account key for our project, and save it as a JSON file on our local computer.

PROJECT_ID = "deep-learning-180004"
MODEL_NAME = "fooddetector"
CREDENTIALS_FILE = "credentials.json"
# Connect to the Google Cloud ML service
credentials = GoogleCredentials.from_stream(CREDENTIALS_FILE)
service = googleapiclient.discovery.build('ml', 'v1', credentials=credentials)

Let’s classify the image from earlier. The converter graph doesn’t include any resizing, so we need to resize the image to 299x299 ourselves before sending it.
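For example, a small sketch using Pillow to produce the 299x299 file used below (the input filename is the test image from earlier):

from PIL import Image

img = Image.open('dog-food-test.jpg')
img = img.resize((299, 299))
img.save('dog-food-test_299_299.jpg')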

with open('dog-food-test_299_299.jpg', 'rb') as f:
    b64_x = f.read()
import base64
import json
from googleapiclient import errors  # needed for the HttpError handling below

b64_x = base64.urlsafe_b64encode(b64_x)
b64_x = b64_x.decode('utf-8')
input_instance = dict(inputs=b64_x)
input_instance = json.loads(json.dumps(input_instance))
request_body = {"instances": [input_instance]}

# The fully qualified model name expected by the ML Engine predict API
name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME)

request = service.projects().predict(name=name, body=request_body)

try:
    response = request.execute()
except errors.HttpError as err:
    print(err._get_reason())
response
{'predictions': [{'outputs': [0.9978664517402649, 0.0021335373166948557]}]}

This is the response from the deployed model. The “outputs” list represents the confidence for Dog Food and Cat Food respectively. From the prediction we can see that the model predicts the test image is dog food with 99.78% confidence.
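In client code we can turn this response into a label by taking the argmax of the outputs (the order matches Dog Food, Cat Food as described above):

import numpy as np

labels = ['Dog Food', 'Cat Food']
scores = response['predictions'][0]['outputs']
print(labels[int(np.argmax(scores))], max(scores))  # e.g. Dog Food 0.9978...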

4. Summary

In this tutorial series, I have shown a way to create a backend for a Deep Learning application by using the Keras library and Google Cloud ML Engine. Specifically, you’ve learned:

  • How to use a pre-trained model with Keras
  • How to prepare data for a Deep Learning application
  • How to do transfer learning to fine-tune a pre-trained model on a specific dataset
  • How to deploy a trained model to Google Cloud so we can scale it up in production and serve many users