PyTorch Sample¶
Requirements¶
- Authenticated to gcloud (gcloud auth application-default login)
This notebook demonstrates how to create and deploy a PyTorch model to Merlin. It uses an Iris classifier model as an example.
[ ]:
!pip install --upgrade -r requirements.txt > /dev/null
[ ]:
import merlin
import warnings
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from merlin.model import ModelType
from sklearn.datasets import load_iris
warnings.filterwarnings('ignore')
1. Initialize¶
1.1 Set Merlin Server¶
[ ]:
merlin.set_url("localhost:3000/api/merlin")
1.2 Set Active Project¶
project represents a project in real life. You may have multiple models within a project.
merlin.set_project(<project_name>) sets the active project to the name matched by the argument. You can only set it to an existing project. If you would like to create a new project, please do so from the MLP console at http://localhost:3000/projects/create.
[ ]:
merlin.set_project("sample")
1.3 Set Active Model¶
model represents an abstract ML model. Conceptually, a model in Merlin is similar to a class in a programming language. To instantiate a model, you'll have to create a model_version.
Each model has a type. The model types currently supported by Merlin are: sklearn, xgboost, tensorflow, pytorch, and user-defined models (i.e. pyfunc models).
model_version represents a snapshot of a particular model iteration. You'll be able to attach information such as metrics and tags to a given model_version, as well as deploy it as a model service.
merlin.set_model(<model_name>, <model_type>) sets the active model to the given name; if no model with that name is found, a new model will be created.
[ ]:
merlin.set_model("pytorch-sample", ModelType.PYTORCH)
2. Train Model¶
2.1 Prepare training data¶
[ ]:
iris = load_iris()
y = iris['target']
X = iris['data']
# Variable is deprecated in modern PyTorch; plain tensors are sufficient
train_X = torch.tensor(X, dtype=torch.float32)
train_y = torch.tensor(y, dtype=torch.long)
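As a quick sanity check (a sketch using the same load_iris data as above), the features should be 150 rows of 4 measurements and the labels a vector of 150 class ids:

```python
import torch
from sklearn.datasets import load_iris

# Load the Iris dataset and convert it to tensors, mirroring the cell above
iris = load_iris()
X = torch.tensor(iris["data"], dtype=torch.float32)
y = torch.tensor(iris["target"], dtype=torch.long)

print(X.shape, y.shape)  # torch.Size([150, 4]) torch.Size([150])
```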
2.2 Create PyTorch Model¶
[ ]:
class PyTorchModel(nn.Module):
    # Define the network layers
    def __init__(self):
        super(PyTorchModel, self).__init__()
        self.fc1 = nn.Linear(4, 100)
        self.fc2 = nn.Linear(100, 100)
        self.fc3 = nn.Linear(100, 3)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, X):
        X = F.relu(self.fc1(X))
        X = self.fc2(X)
        X = self.fc3(X)
        X = self.softmax(X)
        return X
2.3 Train and Check Prediction¶
[ ]:
net = PyTorchModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

for epoch in range(10):
    optimizer.zero_grad()
    out = net(train_X)
    loss = criterion(out, train_y)
    loss.backward()
    optimizer.step()

predict_out = net(train_X)
# torch.max returns (values, indices); the indices are the predicted classes
_, predict_y = torch.max(predict_out, 1)
predict_y
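To check how well the trained network fits the data, you can compare the predicted class ids against the labels. A minimal, self-contained sketch with hypothetical probabilities (not the notebook's actual outputs) showing how `torch.max` yields the predicted classes and how to compute accuracy from them:

```python
import torch

# Hypothetical class probabilities for 4 samples over 3 classes
probs = torch.tensor([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [0.6, 0.3, 0.1],
])
labels = torch.tensor([0, 1, 2, 1])

# torch.max along dim=1 returns (values, indices);
# the indices are the predicted class ids
_, predicted = torch.max(probs, 1)

# Accuracy = fraction of predictions that match the labels
accuracy = (predicted == labels).float().mean().item()
print(accuracy)  # 0.75
```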
3. Deploy Model¶
3.1 Serialize Model¶
[ ]:
model_dir = "pytorch-model"
os.makedirs(model_dir, exist_ok=True)
model_path = os.path.join(model_dir, "model.pt")
torch.save(net.state_dict(), model_path)
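Note that saving a state_dict stores only the weights, not the model class, so loading it back requires re-instantiating the class first. A minimal sketch (using a hypothetical TinyNet rather than the notebook's PyTorchModel) verifying that a round trip through torch.save / load_state_dict preserves the weights:

```python
import os
import tempfile

import torch
import torch.nn as nn


# Hypothetical small model for illustration
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 3)

    def forward(self, x):
        return self.fc(x)


model_dir = tempfile.mkdtemp()
model_path = os.path.join(model_dir, "model.pt")

# Save only the weights (the state_dict)
net = TinyNet()
torch.save(net.state_dict(), model_path)

# Reload into a fresh instance and verify the weights match
net2 = TinyNet()
net2.load_state_dict(torch.load(model_path))
same = torch.equal(net.fc.weight, net2.fc.weight)
print(same)  # True
```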
3.2 Create Model Version and Upload¶
merlin.new_model_version()
is a convenient method to create a model version and start its development process. It is equivalent to the following code:
v = model.new_model_version()
v.start()
v.log_pytorch_model(model_dir=model_dir)
v.finish()
[ ]:
# Create a new version of the model
with merlin.new_model_version() as v:
    # Upload the serialized model to Merlin
    merlin.log_pytorch_model(model_dir=model_dir)
3.3 Deploy Model¶
Each deployed model version will have its own generated URL.
[ ]:
endpoint = merlin.deploy(v)
3.4 Send Test Request¶
[ ]:
%%bash -s "$endpoint.url"
curl -v -X POST $1 -d '{
"instances": [
[2.8, 1.0, 6.8, 0.4],
[3.1, 1.4, 4.5, 1.6]
]
}'
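The same test request can be sent from Python instead of curl. A sketch that builds the JSON payload; the endpoint.url attribute comes from the deployment step above, and the commented-out requests call is an assumption that the requests package is available:

```python
import json

# Same instances as the curl example above
payload = {
    "instances": [
        [2.8, 1.0, 6.8, 0.4],
        [3.1, 1.4, 4.5, 1.6],
    ]
}
body = json.dumps(payload)
print(body)

# Uncomment to send the request (assumes the `requests` package
# and a live endpoint from the deployment step):
# import requests
# resp = requests.post(endpoint.url, data=body)
# print(resp.json())
```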
3.5 Delete Deployment¶
[ ]:
merlin.undeploy(v)