How to extend a Keras Model

Passing through instance keys and features when using a Keras model

Drew Hodun
Towards Data Science


Generally, you only need your Keras model to return prediction values, but there are situations where you want your predictions to retain a portion of the input. A common example is forwarding unique ‘instance keys’ while performing batch predictions. In this blog and corresponding notebook code, I’ll demonstrate how to modify the signature of a trained Keras model to forward features to the output or pass through instance keys.

Sorting through instance keys. Photo by Samantha Lam on Unsplash

How to forward instance keys to the output

Sometimes you'll have a unique instance key associated with each row, and you want that key returned along with the prediction so you know which row the prediction belongs to. You'll need to add keys when executing distributed batch predictions with a service like Cloud AI Platform batch prediction. Keys are also useful if you're performing continuous evaluation on your model and want to log metadata about predictions for later analysis. Lak Lakshmanan shows how to do this with TensorFlow Estimators, but what about Keras?
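As a concrete illustration, the input file for a batch prediction job is typically newline-delimited JSON, one instance per line, with field names matching the serving signature's inputs. Here's a minimal sketch of writing keyed instances for the keyed signature we'll build below; instance_keys and images are hypothetical iterables, and the 'key'/'image' field names assume the inputs used in this post:

import json

# Write one JSON object per line; the field names must match the
# serving signature's input names ('key' and 'image' here).
with open('keyed_instances.jsonl', 'w') as f:
    for key, image in zip(instance_keys, images):
        f.write(json.dumps({'key': key, 'image': image.tolist()}) + '\n')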

Let’s say you have a previously trained model that has been saved with tf.saved_model.save(). Running the following, you can inspect the serving signature of the model and see the expected inputs and outputs:

tf.saved_model.save(model, MODEL_EXPORT_PATH)

!saved_model_cli show --tag_set serve --signature_def serving_default --dir {MODEL_EXPORT_PATH}

The given SavedModel SignatureDef contains the following input(s):
  inputs['image'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 28, 28)
      name: serving_default_image:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['preds'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 10)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict

To pass a unique row key through a previously saved model, load your model, create an alternative serving function, and re-save it as follows:

loaded_model = tf.keras.models.load_model(MODEL_EXPORT_PATH)

@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string),
                              tf.TensorSpec([None, 28, 28], dtype=tf.float32)])
def keyed_prediction(key, image):
    pred = loaded_model(image, training=False)
    return {
        'preds': pred,
        'key': key
    }

# Resave model, but specify new serving signature
KEYED_EXPORT_PATH = './keyed_model/'
loaded_model.save(KEYED_EXPORT_PATH, signatures={'serving_default': keyed_prediction})

Now when we inspect the serving signature of the model, we will see that it has the key as both input and output:

!saved_model_cli show --tag_set serve --signature_def serving_default --dir {KEYED_EXPORT_PATH}

The given SavedModel SignatureDef contains the following input(s):
  inputs['image'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 28, 28)
      name: serving_default_image:0
  inputs['key'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: serving_default_key:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['key'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: StatefulPartitionedCall:0
  outputs['preds'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 10)
      name: StatefulPartitionedCall:1
Method name is: tensorflow/serving/predict

Your model service will now expect both an image tensor and a key in any prediction call, and will return preds and key in its response. An upside to this method is that you don't need access to the code that generated the model, only the serialized SavedModel.
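To sanity-check the new signature locally before deploying, you can load the SavedModel and call the signature directly. A minimal sketch, using random data shaped like the MNIST-style inputs above:

import tensorflow as tf

keyed_model = tf.saved_model.load(KEYED_EXPORT_PATH)
keyed_fn = keyed_model.signatures['serving_default']

# Call with keyword arguments matching the signature's input names
result = keyed_fn(key=tf.constant(['id-001', 'id-002']),
                  image=tf.random.uniform([2, 28, 28], dtype=tf.float32))
print(result['key'])           # b'id-001', b'id-002'
print(result['preds'].shape)   # (2, 10)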

How to leverage multiple serving signatures

Sometimes it's handy to save the model with both serving signatures, either for compatibility reasons (i.e., the default signature stays unkeyed) or so that a single serving infrastructure can handle both keyed and unkeyed predictions and the user decides which to perform. You'll need to pull the serving function off the loaded model and pass it as one of the serving signatures when saving again:

inference_function = loaded_model.signatures['serving_default']

loaded_model.save(DUAL_SIGNATURE_EXPORT_PATH,
                  signatures={'serving_default': keyed_prediction,
                              'unkeyed_signature': inference_function})
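With both signatures in the SavedModel, a caller can pick whichever one it needs per request. A quick sketch of exercising both locally:

dual_model = tf.saved_model.load(DUAL_SIGNATURE_EXPORT_PATH)

keyed_fn = dual_model.signatures['serving_default']
unkeyed_fn = dual_model.signatures['unkeyed_signature']

images = tf.random.uniform([2, 28, 28], dtype=tf.float32)
keyed_out = keyed_fn(key=tf.constant(['a', 'b']), image=images)   # returns 'preds' and 'key'
unkeyed_out = unkeyed_fn(image=images)                            # returns 'preds' only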

How to forward input features to the output

You may also want to forward certain input features for model debugging purposes, or to compute evaluation metrics on specific slices of data (e.g., compute the RMSE of predicted baby weight separately for pre-term and full-term babies).

This example presumes you’re using multiple named Inputs, something you would do if you wanted to take advantage of TensorFlow feature columns as described here. Your first option would be to train the model as usual and take advantage of the Keras Functional API to create a slightly different model signature while maintaining the same weights:

from tensorflow.keras import Input, Model

tax_rate = Input(shape=(1,), dtype=tf.float32, name="tax_rate")
rooms = Input(shape=(1,), dtype=tf.float32, name="rooms")
x = tf.keras.layers.Concatenate()([tax_rate, rooms])
x = tf.keras.layers.Dense(64, activation='relu')(x)
price = tf.keras.layers.Dense(1, activation=None, name="price")(x)
# Functional API model instead of Sequential
model = Model(inputs=[tax_rate, rooms], outputs=[price])
# Compile, train, etc...
forward_model = Model(inputs=[tax_rate, rooms], outputs=[price, tax_rate])
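Since forward_model reuses the trained layers, no retraining is needed; predict() now returns both the price and the forwarded tax_rate. A small sketch with made-up values (the export path is hypothetical):

import numpy as np

prices, tax_rates = forward_model.predict({'tax_rate': np.array([[0.02]]),
                                           'rooms': np.array([[3.0]])})

# Save the forwarding model like any other Keras model
forward_model.save('./forward_model/')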

The other alternative, particularly useful if you don't have the code that generated the model, is to modify the serving signature as you did with the keyed prediction model:

@tf.function(input_signature=[tf.TensorSpec([None, 1], dtype=tf.float32),
                              tf.TensorSpec([None, 1], dtype=tf.float32)])
def feature_forward_prediction(tax_rate, rooms):
    pred = model([tax_rate, rooms], training=False)
    return {
        'price': pred,
        'tax_rate': tax_rate
    }

model.save(FORWARD_EXPORT_PATH, signatures={'serving_default': feature_forward_prediction})
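As before, you can verify the re-signed model by loading it and calling the new signature; the input names come from the function's argument names. A sketch:

reloaded = tf.saved_model.load(FORWARD_EXPORT_PATH)
forward_fn = reloaded.signatures['serving_default']

out = forward_fn(tax_rate=tf.constant([[0.02]]), rooms=tf.constant([[3.0]]))
print(out['price'], out['tax_rate'])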

Enjoy!

Thanks to Lak Lakshmanan for helping me update his original Estimator post to Keras.
