In this post, we'll learn how to save and load models with TensorFlow.js and Node.js: saving a model to disk, loading it back, and using the loaded model to make predictions.
The first step is to save our model. We can do this with the tf.Model.save method. In Node.js (with the @tensorflow/tfjs-node package), this method takes one argument:

url: A string describing where to save the model, beginning with a scheme. To save to disk in Node.js, use the file:// scheme followed by the target directory.

The method is asynchronous and returns a Promise that resolves once the files have been written. Here's an example of saving a model:
const tf = require('@tensorflow/tfjs-node');

const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
await model.save('file:///tmp/model/1');
In this example, we've saved our model to the /tmp/model/1 directory. TensorFlow.js writes a model.json file containing the model topology and a weight manifest, plus one or more binary weight files. If files from a previous save already exist at that path, they are overwritten.
Now that we've saved our model, let's learn how to load it. We can do this with the tf.loadModel method (renamed tf.loadLayersModel in TensorFlow.js 1.x). This method takes one argument:

url: The URL of the saved model.json file, again using the file:// scheme in Node.js.

Like save, this method is asynchronous, so we await the Promise it returns. Here's an example of loading a model:
const model = await tf.loadModel('file:///tmp/model/1/model.json');
In this example, we've loaded our model from the /tmp/model/1
directory.
Once we've loaded our model, we can use it to make predictions. We can do this with the model.predict method. This method takes one argument:

inputs: The input data to make predictions on. This can be a single tf.Tensor or an Array of tf.Tensors; the first dimension is the batch dimension.

Here's an example of using a model to make predictions:
const input = tf.tensor2d([[1.0]]);
const output = model.predict(input);
output.print();
In this example, we've created a tf.Tensor with the value [[1.0]], i.e. a batch of one example with a single feature. We've passed this tensor to the model.predict method, which returns the prediction as a new tf.Tensor, and then printed it to the console with output.print().