Modelbit and Arize AI partner to enable rapid ML model deployment and monitoring

By Michael Butler, ML Community Lead

Seemingly every day a new open source model is announced with the potential to outperform any of the models that ML teams have already spent months setting up in production.

Websites like Hugging Face have made it easy to pull a new model down from the hub and fine-tune it in a Jupyter notebook with training data. The challenge isn’t learning about these new model technologies or coming up with hypotheses about the impact they could have on your product.

The challenge is the amount of work that goes into building the infrastructure to deploy these models and monitor their performance in production.

Many machine learning teams have spent an immense amount of time building custom pipelines. These home-grown pipelines work well enough for a few models, but any attempt to deploy and monitor newer models would require another herculean effort to rebuild a custom pipeline.

Rapid Deployment & Monitoring

Modelbit and Arize’s new integration enables teams to rapidly deploy ML models into production with one line of code and begin monitoring and fine-tuning instantly. Below, we’ll walk through how to rapidly deploy models into production with Modelbit, and immediately monitor their performance with Arize.

Using Modelbit and Arize Together

If you aren’t already a customer, you’ll need to create free accounts with both Arize and Modelbit. Once your accounts are set up, the integration takes just a few short steps.

Here are the steps we’ll cover:

  • Configuring your notebook environment
  • Deploying an ML model to a REST endpoint with Modelbit
  • Sending your model’s inferences from Modelbit to Arize

Step 1: Adding Arize keys to Modelbit

To add your Arize Space and API keys to Modelbit:

1. In your Arize account, locate your Space Key and API Key on the Space Settings page.

2. Next, in Modelbit, open Settings, click the Arize integration, and add your keys.

Modelbit's Integration Page
Inputting Your Arize API Keys

Step 2: Setting up your notebook environment

Now it’s time to set up the notebook environment. To make development easier in your notebook environment, set the environment variables "ARIZE_SPACE_KEY" and "ARIZE_API_KEY" to your Arize Space and API keys.
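For example, you can export both variables in your terminal before launching Jupyter. The values below are placeholders; substitute the keys from your Arize Space Settings page:

```shell
# Placeholder values -- replace with the keys from your Arize Space Settings page
export ARIZE_SPACE_KEY="your-space-key"
export ARIZE_API_KEY="your-api-key"
```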

Alternatively, you can use Modelbit to set those environment variables:

{%CODE python%}
import os
import modelbit

# Authenticate with Modelbit; "mb" is used throughout the examples below
mb = modelbit.login()

os.environ["ARIZE_SPACE_KEY"] = mb.get_secret("ARIZE_SPACE_KEY")
os.environ["ARIZE_API_KEY"] = mb.get_secret("ARIZE_API_KEY")
{%/CODE%}

Step 3: Logging inferences to Arize

To start logging inferences to Arize, define a function that sends your inference results to Arize:

{%CODE python%}
import os

from arize.api import Client
from arize.utils.types import ModelTypes, Environments

# The client reads the Space and API keys set in the previous step
arize_client = Client(
    space_key=os.environ["ARIZE_SPACE_KEY"],
    api_key=os.environ["ARIZE_API_KEY"],
)

def log_to_arize(features, prediction):
    # Client.log returns a future; .result() waits for the HTTP response
    arize_resp = arize_client.log(
        model_id='sample-model-1',
        model_type=ModelTypes.SCORE_CATEGORICAL,
        environment=Environments.PRODUCTION,
        features=features,
        prediction_label=prediction,
    ).result()
    if arize_resp.status_code != 200:
        print(f'Arize logging failed: {arize_resp.text}')
{%/CODE%}

Next, define an inference function that logs its results to Arize by calling "log_to_arize":

{%CODE python%}
def example_arize(features):
    # First, calculate your inference
    prediction = ('Fraud', 0.4)  # This might be "model.predict(features...)" in your code

    # Then log the inference to Arize
    log_to_arize(features, prediction)

    # After logging is complete, return the inference
    return prediction
{%/CODE%}

Finally, deploy your inference function to Modelbit. The call to "mb.deploy" will automatically include your "log_to_arize" function:

{%CODE python%}
mb.deploy(example_arize)
{%/CODE%}
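Once deployed, the model is callable over REST. As a rough sketch, the snippet below builds a request body for a Modelbit endpoint; the workspace name in the URL is a hypothetical placeholder, and your actual endpoint URL is shown in the Modelbit UI after "mb.deploy" completes:

```python
import json

# Hypothetical workspace name -- replace with the endpoint URL
# that Modelbit displays for your deployment
ENDPOINT = "https://your-workspace.app.modelbit.com/v1/example_arize/latest"

# Each inner list is one inference request: [request_id, *arguments]
payload = {"data": [[1, {"amount": 250.0, "country": "US"}]]}
body = json.dumps(payload)

# To call the live endpoint (requires the "requests" package):
# import requests
# resp = requests.post(ENDPOINT, data=body)
# print(resp.json())
```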

That’s all there is to it. Now, whenever your Modelbit deployment produces an inference, that inference is logged to Arize. With inferences flowing from Modelbit to Arize, you can easily monitor, troubleshoot, and fine-tune your models running in production. When you set up monitoring in Arize, you can define custom thresholds and receive alerts over email, Slack, and other channels when those thresholds are crossed. Arize even offers features such as automated model retraining and the ability to export data back to Jupyter notebooks.

What makes the integration between Modelbit and Arize even more powerful is the ability to detect issues with your models in Arize, diagnose and fix them, and then easily redeploy to production with Modelbit.

Arize Dashboard

Try it for free today

Arize and Modelbit are both on a mission to help machine learning teams move faster and increase their ability to make an impact. Both Arize and Modelbit have free trials, so give it a try and let us know what you think.

Deploy Custom ML Models to Production with Modelbit

Join other world class machine learning teams deploying customized machine learning models to REST Endpoints.
Get Started for Free