Announcing a New Partnership Between Modelbit and Neptune

By Michael Butler, ML Community Lead

We are excited to announce that Neptune and Modelbit have partnered to release an integration to enable better ML model deployment and experiment tracking. Data scientists and machine learning engineers can use the integration to train and deploy machine learning models in Modelbit while logging and visualizing training progress in Neptune.

If you are not already familiar, Neptune is a lightweight experiment tracker for MLOps. It offers a single place to track, compare, store, and collaborate on experiments and models.

Neptune.ai dashboard.

Modelbit is a machine learning platform that makes deploying custom ML models to REST Endpoints as simple as calling “modelbit.deploy()” in any data science notebook or Python editor.

Automatically generated API endpoint in Modelbit.

In this post we will cover the following topics:

  • Setting up the integration between Modelbit and Neptune
  • Creating a training job in Modelbit that:
      • Logs the model’s hyperparameters and accuracy to Neptune
      • Deploys the model to a REST endpoint

In case you want to jump right into setting up the integration, you can follow the instructions in Modelbit’s documentation.

Setting Up the Integration

To get started, you’ll need free accounts with both Modelbit and Neptune.

Modelbit integrates with Neptune using your Neptune API token so you can log training metadata and model performance to your Neptune projects.

To add your Neptune API token to Modelbit, go to the Integrations tab of Settings in your Modelbit account, click the “Neptune” tile, and add your “NEPTUNE_API_TOKEN”. This token will be available in your training jobs' environments as an environment variable so you can automatically authenticate with Neptune.

Creating a Modelbit training job that uses Neptune

We'll make a training job that trains a model to predict flower types, using the scikit-learn Iris dataset. We'll log the model's hyperparameters and accuracy to Neptune and then deploy the model to a REST endpoint.

Our model is very simple and relies on two features to predict the flower type.

Setup

First, import “modelbit” and “neptune” and authenticate your notebook with Modelbit:


import modelbit, neptune
mb = modelbit.login()

If your “NEPTUNE_API_TOKEN” isn't already in your notebook's environment, add it:


import os
os.environ["NEPTUNE_API_TOKEN"] = mb.get_secret("NEPTUNE_API_TOKEN")

Creating the training job

We'll create a function to encapsulate our training logic. At the top of the function we call “run = neptune.init_run(...)” to start a run and record our hyperparameters with “run[...]”. Be sure to change the “project=” parameter in “neptune.init_run” to your own workspace and project.

Then we create and fit the model, logging the model's accuracy to Neptune and saving the model with “mb.add_model”.


from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
import random

def train_flower_classifier():
    # Pick our hyperparameters
    random_state = random.randint(1, 10_000)
    n_estimators = random.randint(2, 10)
    max_depth = random.randint(2, 5)

    # Init Neptune and log hyperparameters to Neptune
    run = neptune.init_run(project="your-workspace/your-project")
    run["random_state"] = random_state
    run["n_estimators"] = n_estimators
    run["max_depth"] = max_depth

    # Prepare our dataset
    X, y = datasets.load_iris(return_X_y=True, as_frame=True)
    X = X[["sepal length (cm)", "sepal width (cm)"]]  # only use two features
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=random_state)

    model = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth)
    model.fit(X_train, y_train)

    # Log accuracy to Neptune
    predictions = model.predict(X_test)
    run["accuracy"] = metrics.accuracy_score(y_test, predictions)
    run.stop()  # Stop Neptune session

    # Save model to the registry
    mb.add_model("flower_classifier", model)
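If you want to sanity-check the core training logic before wiring in Neptune and Modelbit, the scikit-learn portion runs standalone. This sketch mirrors the function above, with the randomized hyperparameters replaced by fixed values for reproducibility:

```python
from sklearn import datasets, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fixed hyperparameters (the training job above randomizes these per run)
random_state = 42
n_estimators = 8
max_depth = 4

# Same two-feature Iris setup as the training job
X, y = datasets.load_iris(return_X_y=True, as_frame=True)
X = X[["sepal length (cm)", "sepal width (cm)"]]  # only use two features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=random_state
)

model = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth)
model.fit(X_train, y_train)
accuracy = metrics.accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {accuracy:.3f}")
```

Because the model only sees sepal length and width, expect accuracy well below what the full four-feature dataset would give.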

Deploy and run the training job

We can now deploy our training function to Modelbit with “mb.add_job”:


mb.add_job(train_flower_classifier, deployment_name="predict_flower")

Click the “View in Modelbit” button then click “Run Now”. Once the job completes, head over to your Neptune project to see that the job logged a new run!

ML model training jobs in Modelbit.

Create a REST Endpoint

Finally, we'll deploy our flower predictor model to a REST endpoint. We'll make an inference function that accepts two input features and calls the model we trained, returning the predicted flower type:


flower_names = ["setosa", "versicolor", "virginica"]

def predict_flower(sepal_len: float, sepal_width: float) -> str:
    model = mb.get_model("flower_classifier")
    predicted_class = model.predict([[sepal_len, sepal_width]])[0]
    return flower_names[predicted_class]

Deploy the inference function to create a REST endpoint:


mb.deploy(predict_flower)

Our flower predicting model is live as a REST endpoint, and every time we retrain it the hyperparameters and accuracy are logged to Neptune for careful tracking.
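Calling the endpoint from any client is a plain HTTP POST. As a sketch: the workspace name in the URL below is a placeholder (copy the real endpoint URL from your Modelbit dashboard), and the single-request “data” body shape is an assumption about Modelbit's REST conventions:

```python
import json

# Placeholder URL -- copy the real endpoint from your Modelbit dashboard
ENDPOINT = "https://your-workspace.app.modelbit.com/v1/predict_flower/latest"

def build_payload(sepal_len: float, sepal_width: float) -> str:
    # The "data" field carries the inference function's positional arguments
    return json.dumps({"data": [sepal_len, sepal_width]})

# With a live deployment and the `requests` package installed:
# import requests
# resp = requests.post(ENDPOINT, data=build_payload(5.1, 3.5))
# print(resp.json())
```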

Neptune project showing different ML model runs.

Next Steps

Both Neptune and Modelbit share a vision of empowering ML teams to confidently ship impactful ML models into production. With this integration, machine learning engineers and data scientists can train and deploy machine learning models in Modelbit while logging and visualizing training progress in Neptune.

As a reminder, both Neptune and Modelbit have options to get started for free. Give the new integration a try and let us know what you think!
