Announcing Modelbit’s New Integration with Tecton

By Michael Butler, ML Community Lead

We are excited to announce that Tecton and Modelbit have partnered to release an integration that streamlines the ML model deployment and feature management workflow.

With this new integration, machine learning teams can use Modelbit to rapidly train and deploy ML models that retrieve and utilize features built in their Tecton feature store. 

New to Modelbit or Tecton?

Modelbit is a tool that makes it easy for ML teams to train and deploy machine learning models. With a few lines of code in any Python environment, ML models can be deployed to isolated containers behind REST APIs running on GPUs.
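For example, turning a Python function into a REST endpoint can look roughly like this (a minimal sketch; the function and its logic are placeholders for a real trained model):

import modelbit

mb = modelbit.login()  # authenticate this notebook with your Modelbit workspace

def example_inference(amount: float) -> float:
    # Placeholder logic; in practice this would call a trained model's predict method
    return amount * 0.5

mb.deploy(example_inference)  # creates an isolated container and REST API for example_inference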

Tecton is a feature platform for machine learning, providing a centralized way to transform, store, and serve features for batch and real-time models.

How the Tecton & Modelbit Integration Helps ML Teams

With Tecton and Modelbit, any team can turn their data into a real-time ML application. Tecton serves the data, while Modelbit serves the model and runs the inference pipeline for real-time decisions. Best of all, everything can be developed and tested directly in a notebook and only requires Python.

Streamlined Machine Learning Pipeline: The Tecton and Modelbit integration provides a more streamlined pipeline for developing and deploying machine learning models. Machine learning engineers and data scientists can focus on model development and experimentation without worrying about the intricacies of feature management and model deployment.

Real-Time Feature Access: For models that rely on up-to-date data (such as those used in dynamic environments like fraud detection and recommendation systems), this integration ensures that the models have access to the most current features. This real-time feature access is crucial for maintaining high model accuracy and performance.

Reduced Operational Complexity: By integrating feature management directly into model deployment workflows, teams can spend less time configuring data pipelines and more time on refining models and extracting valuable insights.

Rapid Prototyping and Deployment: With this integration, the time from model conception to deployment can be significantly reduced. ML engineers and data scientists can quickly prototype models using a wide range of features from Tecton and deploy them efficiently using Modelbit.

How the Tecton & Modelbit Integration Works

No matter what type of ML model you want to build, Tecton’s feature platform can work with Modelbit.

To get started you’ll need:

  1. A Modelbit Free Trial Account
  2. A Tecton Account

Once you’re set up with an account for both tools, you’ll connect Modelbit to Tecton using a Tecton API key. Then, you’ll be able to create functions for your ML models that make calls to Tecton, and deploy those functions using Modelbit. All of this will be done in whichever Python environment you normally work in. Here’s an example:
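Before writing any model code, we would import the libraries used below and point them at our Tecton account (a sketch; the account and workspace names are placeholders, and we assume the Tecton API key has already been stored as a Modelbit secret named TECTON_API_KEY):

import json
import requests
import pandas as pd
import modelbit

mb = modelbit.login()  # authenticate this notebook with Modelbit

# Placeholder identifiers for your Tecton deployment
TECTON_ACCOUNT = "your-tecton-account"
TECTON_WORKSPACE = "prod"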

First, we would create a function that retrieves features from Tecton:


def get_tecton_feature_data(user_id: str):
    # Request the latest online feature values for this user from Tecton's feature service API
    online_feature_data = requests.post(
        headers={"Authorization": "Tecton-key " + mb.get_secret("TECTON_API_KEY")},
        url=f"https://{TECTON_ACCOUNT}.tecton.ai/api/v1/feature-service/get-features",
        data=json.dumps({
            "params": {
                "feature_service_name": "fraud_detection_feature_service",
                "join_key_map": {"user_id": user_id},
                "metadata_options": {"include_names": True},
                "workspace_name": TECTON_WORKSPACE
            }
        }),
    )
    return online_feature_data.json()

Then, we would use that function and our model together in a function that returns an inference:


def predict_fraud(user_id: str) -> float:
    # Fetch the user's current feature values from Tecton
    feature_data = get_tecton_feature_data(user_id)
    # Build a single-row DataFrame whose columns match the names the model was trained on
    columns = [f["name"].replace(".", "__") for f in feature_data["metadata"]["features"]]
    data = [feature_data["result"]["features"]]
    features = pd.DataFrame(data, columns=columns)
    # Return the model's fraud score for this user
    return model.predict(features)[0]
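Before deploying, we could sanity-check the whole path locally in the notebook (assuming the model object was already trained earlier in the session; the user ID below is just an example value):

# Fetches live features from Tecton and scores them with the locally trained model
predict_fraud("user_12345")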

Finally, we can deploy our model that retrieves features from Tecton by passing our inference function to mb.deploy():


mb.deploy(predict_fraud)

When we do so, Modelbit deploys the model to a fully isolated container behind a REST API.
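Once deployed, the endpoint can be called over HTTP from any service. The exact URL and request snippet are shown in the Modelbit dashboard for the deployment; the call below is a sketch assuming a workspace named your-workspace:

import requests

# Call the deployed predict_fraud endpoint with a single user_id argument
response = requests.post(
    "https://your-workspace.app.modelbit.com/v1/predict_fraud/latest",
    json={"data": "user_12345"},
)
print(response.json())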

Here is a full step-by-step tutorial for building an ML model that retrieves features from Tecton and deploying it with Modelbit.

Real-World Example: Building Fraud Detection Models

ML teams at modern digital banks and financial technology companies develop fraud detection models at scale, and those models are a core pillar of the business's success.

The easier it is for these ML teams to constantly train and deploy more performant models into production, the better the outcomes for the business. 

Here are a few areas where the Modelbit and Tecton integration adds value for such teams:

Feature Management: Fraud detection models require a dynamic set of features like transaction history, user behavior, and geographic data. Tecton manages these features, ensuring they are up-to-date and accurately reflect recent patterns.

Model Deployment: Tecton brings the features online, while Modelbit deploys an inference pipeline that retrieves features from Tecton and runs model inference to predict fraud. The result is a single low-latency fraud detection API powered by real-time data.

Real-Time Updates: In fraud detection, real-time response is crucial. This integration allows the deployed model to react quickly to new data or patterns as Tecton updates the feature store, enhancing the model's effectiveness.

Ready to see it in action?

The Modelbit and Tecton integration offers a more streamlined and efficient workflow for machine learning projects, particularly those requiring dynamic and consistent feature sets like fraud detection models. This integration simplifies the process from feature management to model deployment, making it a valuable tool for any team working in the machine learning space.

Get started by creating a free trial of Modelbit or by booking a demo.

Deploy Custom ML Models to Production with Modelbit

Join other world-class machine learning teams deploying custom ML models to REST endpoints.
Get Started for Free