How Inventa Uses Machine Learning and Modelbit to Reduce Predicted Shipping Times by 66%

By Daniel McAuley, Head of Data, and Yuri Quintanilha, Data Scientist

The problem: merchants want supplies faster!

At Inventa, our online platform matches retailers to wholesale suppliers. In the checkout flow, we show merchants how long it’ll take to get their orders. This is a more challenging task in Brazil than in the US because there are hundreds of transport companies and most of them don’t provide APIs or other data on shipping estimates.

Before using machine learning, we simply accepted the supplier’s guesstimated shipping times, which could be quite long! Even when an item actually arrived much sooner than the estimate, those long predicted shipping times were a top merchant complaint.

We knew that by deploying a machine learning model to production, we could give merchants a more accurate prediction, which would make them much happier. Late orders are an obvious problem, but early deliveries also cause issues for inventory management. Finally, the model benefits Inventa directly: shorter predicted shipping times increase order conversion, and on-time deliveries drive repeat buying.

Building and training the model

We chose a Scikit-Learn Gradient Boosting Regressor because it allowed us to optimize a custom loss function and is well-suited to delivery time prediction. We have a business requirement to keep the share of shipments that arrive on or before the predicted date – our alpha target – above 90%.
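To make that constraint concrete, here is a minimal sketch using Scikit-Learn’s built-in quantile loss and its alpha parameter; our actual custom loss differs in detail, and the file name, feature names, and hyperparameters below are illustrative rather than our production setup.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Illustrative training table: one row per delivered historical shipment.
# File name and column names are hypothetical.
df = pd.read_parquet("historical_shipments.parquet")
feature_cols = [
    "supplier_avg_shipping_days",
    "origin_state",           # assumed already label-encoded upstream
    "destination_state",      # assumed already label-encoded upstream
    "supplier_main_carrier",  # assumed already label-encoded upstream
]
X, y = df[feature_cols], df["actual_shipping_days"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Quantile loss at alpha=0.9 deliberately over-predicts a little, so roughly
# 90% of shipments should arrive on or before the predicted date.
model = GradientBoostingRegressor(
    loss="quantile", alpha=0.9, n_estimators=300, max_depth=4, learning_rate=0.05
)
model.fit(X_train, y_train)

# The business metric: share of shipments arriving on or before the prediction.
preds = model.predict(X_test)
print(f"On-or-before-prediction rate: {(y_test <= preds).mean():.1%}")
```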

We chose features like the supplier’s average shipping time, the states being shipped from and to, and the supplier’s usual transport company. This data is in our Snowflake warehouse, in data models built with dbt. We built and trained the model in Hex, accessing the data via its Snowflake connector.
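For context, the feature pull might look roughly like the query below. The credentials, schema, and column names are placeholders; inside Hex, the same SQL runs through the notebook’s built-in Snowflake connector rather than a manual connection.

```python
import pandas as pd
import snowflake.connector

# Hypothetical credentials and object names.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ANALYTICS_WH", database="ANALYTICS",
)

training_df = pd.read_sql(
    """
    select
        supplier_avg_shipping_days,
        origin_state,
        destination_state,
        supplier_main_carrier,
        actual_shipping_days
    from dbt.shipping_training_features  -- a dbt-built model; name illustrative
    """,
    conn,
)
training_df.columns = training_df.columns.str.lower()  # Snowflake returns uppercase names
```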

Deploying the model

At Inventa, we use Modelbit to deploy our models because of its ease of use, its tight integration with Snowflake and dbt, and its support for our Git-based workflows. Specifically, we like to deploy to branches and use GitHub pull requests to merge deployments to main.

To deploy our trained model, we simply called modelbit.deploy from our notebook and it went straight to production! We handed the model’s REST API URL to our engineering team so they could call it from the checkout flow. We made sure to use Modelbit’s dynamic “latest” URL so that we can retrain and redeploy the model without waiting on engineering.
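In practice, the deployment boils down to wrapping the model in an inference function and handing it to Modelbit. The function signature and feature encoding below are illustrative, not our exact production code.

```python
import modelbit

mb = modelbit.login()

# Inference function; Modelbit captures the trained `model` object along with it.
def predict_shipping_days(supplier_avg_shipping_days: float,
                          origin_state: int,
                          destination_state: int,
                          supplier_main_carrier: int) -> float:
    features = [[supplier_avg_shipping_days, origin_state,
                 destination_state, supplier_main_carrier]]
    return float(model.predict(features)[0])

# One call packages the function, its dependencies, and the model into a REST endpoint.
mb.deploy(predict_shipping_days)
```

The checkout service then calls the versionless “latest” URL, so a retrain-and-redeploy is picked up automatically. The workspace name and payload below are made up for illustration:

```python
import requests

# "latest" always routes to the newest deployed version.
resp = requests.post(
    "https://inventa.app.modelbit.com/v1/predict_shipping_days/latest",
    json={"data": [[1, 6.2, 25, 12, 3]]},  # [request_id, *features]
)
print(resp.json())
```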

In production, some features, like the shipment destination, come from the REST call. Others, like supplier metadata, come from our Snowflake warehouse. For these, we use a Modelbit dataset to make the metadata highly available in production. We update it on a schedule using a webhook. This approach has the added benefit that the Modelbit cache provides extra redundancy against issues upstream in Snowflake or dbt.
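Inside the deployment, that lookup could look roughly like the sketch below; the dataset name, columns, and join logic are placeholders for our supplier metadata table.

```python
import modelbit

# Hypothetical dataset name; Modelbit keeps a cached copy of this table next to
# the deployment, so lookups keep working if Snowflake or dbt has an upstream issue.
supplier_metadata = modelbit.get_dataset("supplier_metadata")

def predict_shipping_days(supplier_id: int, destination_state: int) -> float:
    # The destination comes from the REST call; the remaining features come from
    # the cached supplier metadata.
    row = supplier_metadata.loc[supplier_metadata["supplier_id"] == supplier_id].iloc[0]
    features = [[row["supplier_avg_shipping_days"], row["origin_state"],
                 destination_state, row["supplier_main_carrier"]]]
    return float(model.predict(features)[0])
```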

Monitoring the model

We sync the model’s production logs from Modelbit into Snowflake. From there, we’re able to keep an eye on model performance and drift. We’ve built a Hex dashboard that shows our model’s standard deviation of predictions, root mean square error, recent raw log lines, and other information to make sure the model and its input data are behaving as expected. In the future we’ll implement monitoring to alert us as soon as anomalies are detected.
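A simplified version of the checks behind that dashboard might look like the following, against a hypothetical table that joins the synced inference logs to actual delivery outcomes:

```python
import numpy as np
import pandas as pd
import snowflake.connector

# Hypothetical credentials and table name.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")
logs = pd.read_sql(
    """
    select predicted_days, actual_days
    from analytics.shipping_prediction_logs
    where delivered_at >= dateadd(day, -30, current_date)
    """,
    conn,
)
logs.columns = logs.columns.str.lower()

rmse = np.sqrt(((logs["predicted_days"] - logs["actual_days"]) ** 2).mean())
pred_std = logs["predicted_days"].std()
on_time_rate = (logs["actual_days"] <= logs["predicted_days"]).mean()
print(f"RMSE: {rmse:.2f} days | prediction std: {pred_std:.2f} | on-time rate: {on_time_rate:.1%}")
```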

The bottom line: happier merchants

Before this model, the average shipping time prediction was 15 days. Now that we’ve attacked the problem with machine learning in production, the average is 4-5 days – and our on-time rate is still above 90%! We coupled the improvement in prediction accuracy with a product tweak to show a narrow range of delivery dates. This sets expectations with retailers and was not possible before we had the model. Complaints from merchants about long predicted shipping times are way down. And suppliers are getting more repeat orders! Win win win.
