Announcing MLOps for Snowpark - Deploy ML Models Into Snowpark and Manage Them In Production

By Harry Glaser, Co-Founder & CEO

Modelbit is excited to announce new functionality that makes it easier than ever for data science and machine learning teams to deploy and manage ML models directly in Snowpark.

Starting today, Snowflake customers can use Modelbit to deploy machine learning models into Snowflake warehouses with Snowpark Python. When you run “modelbit.deploy()” from any Python notebook (such as Hex, Colab, or Jupyter), your ML model is instantly deployed as a Snowpark Python UDF that runs inferences directly within your Snowflake warehouse.
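
Here’s a minimal sketch of that workflow from a notebook. The toy model, feature names, and function name are purely illustrative:

```python
import modelbit
import numpy as np
from sklearn.linear_model import LogisticRegression

# Authenticate this notebook session with Modelbit
mb = modelbit.login()

# Toy training data, purely for illustration
X_train = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
y_train = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X_train, y_train)

def predict_score(feature_one: float, feature_two: float) -> float:
    # This function body is what runs inside the Snowpark Python UDF
    return float(model.predict_proba([[feature_one, feature_two]])[0][1])

# Packages the function, and the pickled model it references,
# into a Snowpark Python UDF in your Snowflake warehouse
mb.deploy(predict_score)
```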

Machine learning models deployed this way come with all the MLOps features you need to manage ML models in production, including:

  • Custom Python environments
  • Git integration
  • Logging
  • Model monitoring
  • Load balancing
  • and more. 

This deeper Snowflake integration also enhances security: when you use Modelbit with Snowpark, your data never leaves your Snowflake warehouse. In addition, you can now spend your Snowflake credits on ML inference compute. You can read more in our docs here.

Modelbit & Snowpark Diagram

MLOps for Snowpark

Custom Python Environments Even In Snowpark

Even with inferences running in Snowflake, Modelbit’s Python environment auto-detection is more important than ever. Modelbit replicates the Python environment found in the model training notebook inside the Python UDF. In cases where packages needed by your model aren’t available in Snowpark’s Anaconda channel, Modelbit can “vendor” the packages and deliver them directly into your Snowpark UDF so they are still available for inference! Each model still runs in the specific Python environment that it was trained in and deployed from.
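
As a sketch, auto-detection needs no extra arguments, but you can also pin the environment explicitly at deploy time with the python_packages and python_version keyword arguments from Modelbit’s Python client. The specific pins below are illustrative, and predict_score is the function from the earlier sketch:

```python
import modelbit

mb = modelbit.login()

# Modelbit auto-detects the notebook's environment by default.
# You can also pin it explicitly; if a pinned package isn't in
# Snowpark's Anaconda channel, Modelbit vendors it into the UDF.
# The pins below are illustrative, and predict_score is the
# function defined in the earlier sketch.
mb.deploy(
    predict_score,
    python_packages=["scikit-learn==1.3.2", "xgboost==2.0.3"],
    python_version="3.10",
)
```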

Monitoring, Logging, and Experiment Tracking

Modelbit offers a suite of MLOps tools designed to enhance and streamline the management of your ML models in production environments. With features such as advanced monitoring, detailed logging, and comprehensive experiment tracking, Modelbit ensures your machine learning workflows are efficient, transparent, and scalable. 

The monitoring capabilities allow you to keep a close eye on your model's performance and health in real time, ensuring any deviations are quickly identified and addressed.

Logging features provide deep insights into model behavior and system interactions, facilitating debugging and historical analysis. 

Meanwhile, experiment tracking offers a structured approach to managing numerous model iterations, enabling data scientists to compare, contrast, and choose the best-performing models with ease. 

Together, these features not only optimize the operational aspects of machine learning but also significantly reduce the time and effort required for model management, leading to quicker iterations and more robust ML deployments.

Model Usage Charts in Modelbit

Git Integration and CI/CD

Modelbit’s Git integration also becomes more important than ever when you deploy to Snowpark. All model code and artifacts are synced to your GitHub repo, where you can run your normal CI/CD and version control processes.

With Modelbit, you can use your favorite Python notebook as your development environment, Snowflake Snowpark as your production runtime, and GitHub as your version control and CI/CD platform, with all three kept in sync by Modelbit. This lets professional ML teams take advantage of Snowflake Snowpark at enterprise scale.

Deploying ML Models Directly to Snowpark

Data and Compute In Your Snowflake Warehouse

Customers with sensitive private data and customers in highly regulated industries often need their data to remain in their Snowflake warehouse even while ML inferences are performed. With Modelbit’s Snowpark Python integration, the machine learning code is delivered into the warehouse rather than the data being taken out of it.

Inferences are performed using the code you wrote in your Python notebook, but execution happens inside a Snowflake warehouse that your IT and security teams have already validated.
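
For example, once deployed, the model is invoked with SQL that runs entirely inside Snowflake. The sketch below uses the snowflake-connector-python package; the connection values, table, and UDF name are hypothetical (Modelbit shows the exact function name for each deployment):

```python
import snowflake.connector

# Connect with your existing Snowflake credentials (illustrative values)
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="ANALYTICS_WH",
)

# The schema, table, and UDF name are hypothetical; the feature data
# is scored in place and never leaves the warehouse.
cur = conn.cursor()
cur.execute("""
    SELECT customer_id,
           PREDICT_SCORE_LATEST(feature_one, feature_two) AS score
    FROM analytics.customer_features
""")
for row in cur.fetchall():
    print(row)
```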

ML model APIs in Modelbit

Use Your Snowflake Credits on ML Inferences

An additional benefit of performing inferences directly in the warehouse is that prepaid Snowflake credits are used to pay for the compute that the inferences require. 

Using external compute for inferences carries a separate cost on top of the warehouse itself. With Modelbit’s Snowpark Python integration, the compute stays in the warehouse, so your already-purchased Snowflake credits cover inference compute as well.

What’s next?

Modelbit will soon release functionality to deploy models directly to Snowpark Container Services (SPCS), where they can run on GPUs. We’re also building a Snowflake Native App that will let you launch Modelbit directly from your Snowflake console.

Try Modelbit and Snowpark today

With Snowflake’s increasingly powerful and flexible compute options and Modelbit’s enterprise-scale MLOps platform, large ML teams finally have the tools they need to run large inference jobs with thousands of models in their own custom Python environments!

Contact us for a custom demo of the new functionality or try it yourself in a free trial.
