Jupyter notebooks and their derivatives, including Jupyter Lab, Google Colab, Hex, and Deepnote, are great for developing and training machine learning models.
Discover how to effectively manage and track model versions for rapid experimentation, deployment, and rollback. This comprehensive guide shows you how to implement model versioning and ensure seamless management and deployment of ML models.
Startups are hypotheses: Every startup is a bet that the world can be better in one highly specific, but massively impactful way. At Modelbit, our hypothesis is that machine learning practitioners will change the world with their models. They just need it to be a little easier to deploy those models to production.
Learn how to deploy the OpenAI Whisper-Large-v2 model for speech recognition and transcription using Modelbit, and learn how to integrate speech recognition models into your applications.
In this post we’ll explain the customizations we’ve added to Git that make using Git for both code and models a great experience.
Explore future trends and predictions in the evolving landscape of machine learning (ML) deployment, from serverless to multi-cloud, edge, and production pipelines. Check out our ML deployment predictions, including GPU inference, a canonical ML stack, and more!
This tutorial guides you through deploying a pre-trained BERT model as a real-time REST API endpoint for efficient and scalable text classification in production using Modelbit.
In this tutorial, we'll walk through the steps to deploy a ResNet-50 image classification model to a REST API endpoint.
Innovation in ML frameworks and ML models is only accelerating. The best teams commit themselves to building ML platforms that allow them to rapidly experiment with and deploy new ML model types.
Many modern model technologies require GPUs for training and inference. By using Modelbit alongside Hex, we can leverage Modelbit’s scalable compute with on-demand GPUs to do the model training. We can orchestrate the model training and deployment in our Hex project. And finally, we can deploy the model to a production container behind a REST API using Modelbit.
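As a hedged sketch of that workflow, the snippet below trains a small scikit-learn model in a notebook and hands it to Modelbit for hosting; the classifier and synthetic data are stand-ins for whatever GPU-backed training the post actually covers.

```python
# Minimal sketch: train in a notebook, deploy behind a REST API with Modelbit.
# The model and data are illustrative stand-ins, not the post's actual workload.
import modelbit
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

mb = modelbit.login()  # authenticate this notebook session with Modelbit

X_train, y_train = make_classification(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

def predict_score(features: list) -> float:
    # This function runs inside the production container behind the REST endpoint
    return float(model.predict_proba([features])[0][1])

mb.deploy(predict_score)  # packages the function and the model it closes over
```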
TAPAS is a BERT-based model from Google that can answer questions about a table with natural language. In this post we show how you can deploy a TAPAS model to a REST API in minutes.
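For a sense of what the deployed inference function wraps, here is a hedged local sketch of TAPAS table question answering using the Hugging Face transformers pipeline; the checkpoint below is one common fine-tuned variant, not necessarily the one used in the post.

```python
# Hedged sketch of TAPAS table question answering via transformers.
# "google/tapas-base-finetuned-wtq" is one common checkpoint; the post may use another.
import pandas as pd
from transformers import pipeline

tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")

table = pd.DataFrame({
    "City": ["Paris", "London", "Berlin"],
    "Population": ["2.1M", "8.9M", "3.6M"],  # TAPAS expects string-valued cells
})

result = tqa(table=table, query="Which city has the largest population?")
print(result["answer"])
```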
We are excited to announce that Neptune and Modelbit have partnered to release an integration to enable better ML model deployment and experiment tracking.
While SageMaker was once thought of as the default platform to develop and deploy ML models into production, it is increasingly becoming a burden on ML teams who are looking to iterate quickly in a world where the pace of ML model innovation is accelerating.
In this article, you will learn how to deploy the Grounding DINO Model as a REST API endpoint for object detection using Modelbit.
OWL-ViT is a new object detection model from the team at Google Research. In this post we walk through how to deploy an OWL-ViT model to a REST API.
In this blog post we take a look at what a machine learning model deployment strategy is, why it is important to have one, and the different types of ML model deployment strategies you should consider.
Modelbit and Arize’s new integration enables teams to rapidly deploy ML models into production with one line of code and begin monitoring and fine-tuning instantly.
In this article we walk through how we built a Docker environment build time predictor as a key feature in our product and deployed it into production using Modelbit.
Can you go from idea to inference in minutes? That’s what we set out to answer when we tested using Deepnote AI and Modelbit together. In this article we walk through the process of deciding on a model, building and training it using AI, and deploying it to production with one line of code via Modelbit.
Announcing the Eppo & Modelbit Partnership! Learn how to A/B test your machine learning models using the two premier MLOps platforms.
In their paper, Facebook's researchers show that the new Segment-Anything model delivers impressive image recognition performance, beating even some models that know what type of image they’re looking for. In this tutorial we'll walk through the steps to deploy a Segment-Anything model to a REST endpoint.
In our ML Spotlight Series, we highlight companies building ML into their product to disrupt industries and change the world. Veriff is using machine learning and AI to make identity verification more accurate.
ML model deployment can seem like an onerous process, especially for teams with limited engineering resources. We’ve spoken to hundreds of data science teams to lay out the 9 key questions you need to ask when you’re ready to deploy your ML model into production. And yes, all 9 are super important.
With modern data science and machine learning, it’s easier than ever to predict whether a customer is going to churn. With the right training data and modeling libraries, we can quickly train a model that scores a customer’s likelihood of churning.
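As a hedged illustration of how quick that can be, the sketch below trains a churn scorer on synthetic tabular data with scikit-learn; real feature engineering and training data are, of course, where the actual work lives.

```python
# Hedged sketch: a quick churn scorer on synthetic data.
# Column meanings are made up for illustration (e.g. logins, tenure, tickets).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))                          # synthetic customer features
y = (X[:, 0] + rng.normal(size=1000) < 0).astype(int)   # synthetic churn labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

churn_scores = model.predict_proba(X_test)[:, 1]        # likelihood of churn per customer
print(churn_scores[:5])
```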
You might not know where the time has gone in 2023 so far, but you can know which Data Science and ML conferences you can still attend. We promise: these are events where you'll have an opportunity to truly learn.
Modelbit is proud to announce $5M of seed funding led by Leo Polovets of Susa Ventures, with participation from Snowflake and other funds and angel investors.
Five tricks we've learned the hard way to make working with Pandas DataFrames easier for data scientists everywhere.
How to call Lambda functions efficiently in batch from Amazon Redshift. A helpful guide for those struggling with slow calls to Lambda from Redshift when doing batch inference for machine learning.
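To make the batching idea concrete, here is a hedged sketch of a Lambda handler shaped the way Redshift Lambda UDFs invoke it: rows arrive in batches under an "arguments" key and a same-length "results" list goes back. Treat the field names as assumptions and confirm them against the current AWS docs.

```python
# Hedged sketch of a Lambda handler for a Redshift Lambda UDF.
# Field names ("arguments", "results", "success") are assumptions to verify in AWS docs.
import json

def score(row):
    # Placeholder model call; in practice, load the real model once outside the handler
    return sum(float(v) for v in row if v is not None)

def handler(event, context):
    rows = event["arguments"]                 # one batch of rows per invocation
    results = [score(row) for row in rows]    # score the whole batch at once
    return json.dumps({"success": True, "results": results})
```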
A simple and elegant way to develop machine learning models in Hex, and then deploy them to the cloud with Modelbit.
How to build a lead scoring model for a B2B business and deploy it to production so it can be used for both online inference via a REST API and offline batch scoring via a SQL function.
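For the online half, the call is just an HTTPS POST. The URL and payload shape in this hedged sketch are illustrative placeholders, not exact Modelbit specifics; check your deployment's API page for the real values.

```python
# Hedged sketch: scoring leads against a deployed REST endpoint.
# The URL and payload layout are illustrative placeholders.
import requests

URL = "https://<your-workspace>.modelbit.com/v1/lead_score/latest"  # placeholder URL

payload = {"data": [[1, 120, 4], [2, 15, 0]]}  # [lead_id, *features] per row
resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())  # expected shape: {"data": [[lead_id, score], ...]}
```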
A step-by-step guide to building ML models in Deepnote and deploying them to Snowflake using Modelbit.
Our first impressions of Snowpark Python, Snowflake's new arbitrary compute environment for Python!
Your ML model is in a Lambda function in the data science AWS account. The Redshift cluster is in the engineering AWS account. You want to call your model to make predictions in AWS Redshift. What to do?! A guide to cross-account calls from Redshift to Lambda.
For years we've struggled to get our ML models out of our Jupyter notebooks and into production cloud environments. Finally we built a solution. Here's how it works.