How we made Git work great for machine learning

By Tom O'Neill, Co-Founder & CTO

Git is a great tool for versioning and collaborating on ML code and notebooks. However, Git is typically a poor solution for large binary files like the predictive and generative models created by ML frameworks. These binary models are just as important as the code that produced them, but they fit poorly in Git: what is an ML team to do?

In this post we’ll explain the customizations we’ve added to Git that make using it for both code and models a great experience.

What’s the goal?

Before we get into the solution, let’s get clear about what we want. We want to track all ML model development, deployment, and versioning with Git so we can use merge requests, branching, and rollbacks. Of course this includes the binary model files, not just the code.

We want Git to work as well for Data Scientists and ML Engineers as it does for software engineers. So why doesn’t it?

Where does Git fall short?

Git was designed for text files, specifically source code files with short lines of text. ML projects have a lot of non-code assets, like pickled models and large checkpoint files, and these aren’t Git-friendly.

While Git can store binary files, they stick around forever in the repository’s local history. If you’ve ever tried to get a giant file deleted from Git’s history after Jimmy “accidentally” checked it in, you know the pain.

Large files in Git mean every future "git clone" or "git pull" can take hours and will fill up your hard drive with old versions of models you’ll probably never need. Binary files are also useless in code reviews, which mostly defeats the purpose of tracking them in Git in the first place.

GitOps workflows are also challenging for ML projects. A typical ML project is likely to have historical versions of deployed models running for weeks, maybe months, to A/B test. Software engineers using Git, on the other hand, typically migrate from one version of a service to another quite rapidly. The idea that "HEAD" is the singular “current” version makes much more sense in software projects than it does in ML projects.

When it comes to ML models, standard Git gets slow and bloated, disappoints in code reviews, and isn’t designed for the kind of concurrent versioning that ML projects expect. That’s why we made Git better.

Our solution

We use Git for tracking code, models, and other ML assets. To make this work smoothly we don’t store the binary files in Git. Instead, the binary files are automatically uploaded to S3 during "git add" and a “pointer file” is stored in the Git repository in their place.

The Git-LFS project does this, but we decided to write our own because Git-LFS is quite limited. Git-LFS doesn’t work on 2GB+ files on GitHub, which is a non-starter for ML projects since many PyTorch and LLM checkpoints are larger. It also doesn’t encrypt the files before storing them in S3, which is a compliance issue for many businesses. Nor does it compress files before uploading them, which slows down Git pushes and pulls of large files.

We took inspiration from Git-LFS and made something better for Data Scientists and ML Engineers.

Automatic encryption and S3 upload during "git add"

The first part of the improvement comes when we transform binary files on their way into, or out of, the Git repository.

Using Git’s clean and smudge filters, we store binary files in S3 with encryption and compression, and without size limitations. These “filters” let us act on files during Git push and pull, transparently to the user. When you run "git add", our “clean” filter encrypts and uploads the file to S3 and stores a pointer file in Git. In reverse, when you run "git pull", our “smudge” filter sees the pointer file and replaces it with the file downloaded and decrypted from S3. These filters only run on the current commit, so historical versions don’t slow down your "git pull"s or take up space on your hard drive.

For example, when you check a pickled model file into your Git repository with "git add", the “clean” filter transforms it: the binary content is stored in S3 and a small pointer file is committed to Git.
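Here’s a minimal sketch of what such a clean filter could look like. It’s an illustration only, not Modelbit’s actual implementation: the script name, bucket, and encryption settings are assumptions, and error handling is omitted.

# clean_filter.py -- an illustrative Git "clean" filter (not Modelbit's actual code).
# Wire it up with Git's documented filter mechanism, for example:
#   git config filter.mlmodels.clean  "python clean_filter.py"
#   git config filter.mlmodels.smudge "python smudge_filter.py"
#   echo "*.pkl filter=mlmodels" >> .gitattributes
# Git pipes the file's bytes to stdin and commits whatever this prints to stdout.
import datetime
import gzip
import hashlib
import json
import sys

import boto3  # assumes AWS credentials and the (hypothetical) bucket below exist

BUCKET = "example-ml-artifacts"

def main():
    raw = sys.stdin.buffer.read()
    digest = hashlib.sha1(raw).hexdigest()

    # Compress before upload, and let S3 encrypt the object at rest.
    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key=f"objects/sha1/{digest}.gz",
        Body=gzip.compress(raw),
        ServerSideEncryption="aws:kms",
    )

    # The small pointer file, not the binary, is what Git actually stores.
    pointer = {
        "contentHash": f"sha1:{digest}",
        "metadata": {"size": len(raw)},
        "createdAt": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    sys.stdout.write(json.dumps(pointer, indent=4))

if __name__ == "__main__":
    main()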

This flow lets us use Git for storing text files and S3 for storing binary files, combining the strengths of both into one polished experience.

The pointer files we store in Git aren’t simply S3 URLs or file hashes. They’re much better.

Describing binary objects for better merge requests

Uploading binary files to S3 solves part of the problem by keeping large binary files out of Git. But we still need to make Git useful for these binary files, especially during code reviews.

Of course, a pointer file containing only a hash of a binary file is about as useless as the binary file itself when it comes to code reviews. That’s why we use a variety of techniques to figure out what’s inside the binary file and add that description to the pointer file in Git. Here’s an example pointer file showing that it points to an XGBoost model:


{
    "contentHash": "sha1:3d817cb8eed0b98fd271fdd578a4830de17bbca0",
    "metadata": {
        "size": 244862,
        "description": {
            "object": {
                "class": "XGBClassifier",
                "module": "xgboost.sklearn",
                "package": "xgboost==1.7.6",
                "description": "XGBClassifier(base_score=None, booster=None, callbacks=None,colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, early_stopping_rounds=None, enable_categorical=False, eval_metric='mlogloss', feature_types=None, gamma=None, gpu_id=None, grow_policy=None, importance_type=None, interaction_constraints=None, learning_rate=None, max_bin=None, max_cat_threshold=None, max_cat_to_onehot=None, max_delta_step=None, max_depth=None, max_leaves=None, min_child_weight=None, missing=nan, monotone_constraints=None, n_estimators=100, n_jobs=None, num_parallel_tree=None, objective='multi:softprob', predictor=None, ...)"
            }
        }
    },
    "createdAt": "2023-11-07T23:31:44.981Z"
}

That’s a lot more useful in a code review than the original binary file!
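How might a description like that be produced? One simple approach, sketched below under the assumption that the model is a standard pickled Python object whose class is importable locally, is to unpickle it and record its class, module, package version, and repr. This is illustrative only, not Modelbit’s exact technique:

# describe_model.py -- illustrative sketch of generating pointer-file metadata
# for a pickled model. Assumes the pickled object's class is importable locally.
import hashlib
import importlib.metadata
import json
import pickle
import sys

def describe_pickled_model(path: str) -> dict:
    with open(path, "rb") as f:
        data = f.read()
    obj = pickle.loads(data)
    cls = type(obj)
    package = cls.__module__.split(".")[0]
    try:
        package_spec = f"{package}=={importlib.metadata.version(package)}"
    except importlib.metadata.PackageNotFoundError:
        package_spec = package
    return {
        "contentHash": "sha1:" + hashlib.sha1(data).hexdigest(),
        "metadata": {
            "size": len(data),
            "description": {
                "object": {
                    "class": cls.__name__,
                    "module": cls.__module__,
                    "package": package_spec,
                    # scikit-learn-style estimators print their constructor args
                    "description": repr(obj),
                }
            },
        },
    }

if __name__ == "__main__":
    print(json.dumps(describe_pickled_model(sys.argv[1]), indent=4))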

These two changes make Git a great experience for ML models and other binary files, so we kept going with other ML resources, like our model registry.

Making a branch-aware model registry

Most model registries are stored in a SQL database. It’s a natural place for models and their metadata if you think of the registry as external to the model development workflow. But is the registry really independent from the Git repository storing the models and the code that makes them?

What if your model registry was backed by Git instead of SQL? Then you could have branch-aware registries and test changes to your model registry in your staging environment. A Git-backed registry also makes it easy to travel back in time and see which models were running last quarter. Protected branches mean Jimmy cannot break your registry by accident again. Best of all, you could review model registry changes in the same merge request that contains changes to the inference code using the models!

Since inference code and models can be successfully stored and versioned in Git, that’s where we chose to build our model registry as well. We believe the registry belongs in Git just as much as the models and the code that made them.

Our model registries store content like the above pointer file. This way, when working on a branch you can add/edit/remove models in the registry and present the entire change at once in a merge request.
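As a rough sketch of what “branch-aware” means in practice: because the registry is just files in the repository, reading an entry from a specific branch is a plain Git operation. The registry/ layout and model names below are hypothetical, not Modelbit’s actual format:

# registry_lookup.py -- illustrative sketch of a branch-aware registry read.
# Assumes registry entries live at registry/<model>.json in the repository.
import json
import subprocess

def registry_entry(model_name: str, branch: str = "main") -> dict:
    # "git show <ref>:<path>" returns a file's contents as of that ref
    blob = subprocess.run(
        ["git", "show", f"{branch}:registry/{model_name}.json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(blob)

# Compare what's registered on main vs. on a feature branch under review:
#   prod      = registry_entry("churn-classifier", "main")
#   candidate = registry_entry("churn-classifier", "feature/retrain-2024")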

Once you’ve tried a complete Git workflow, from model development to registry changes in shared merge requests, it’ll be hard to go back.

Going operational

With our Git repository ready for ML development, the last step is using it for deployment.

Our Git repositories trigger events whenever files change. Just like a CI/CD pipeline, we listen for those events and create new versions of deployments, update model registries, or change DNS records to alias one version to another.

And instead of treating the latest commit in Git as the only “current” version, we treat every version as immutable and independently addressable. For example, any change to an ML model’s endpoint source code creates a new version of that endpoint, with its own URL. This makes it natural to keep many concurrent versions of a deployed model around for A/B tests.
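As a hypothetical illustration of that idea (the URL scheme and file paths below are assumptions, not Modelbit’s API), each commit that touches an endpoint’s source can mint a new version with its own URL, while earlier versions keep serving traffic:

# versioned_endpoints.py -- illustrative sketch of per-commit endpoint versions.
def endpoint_url(model: str, commit_sha: str) -> str:
    # A short commit prefix gives each version a stable, unique URL.
    return f"https://models.example.com/{model}/v/{commit_sha[:8]}/predict"

def on_push(changed_paths: list[str], commit_sha: str) -> list[str]:
    # Publish a new version for every changed endpoint source file; versions
    # already deployed keep their URLs, so A/B tests can span many versions.
    return [
        endpoint_url(path.removeprefix("endpoints/").removesuffix(".py"), commit_sha)
        for path in changed_paths
        if path.startswith("endpoints/") and path.endswith(".py")
    ]

# Example:
#   on_push(["endpoints/churn.py"], "3d817cb8eed0...")
#   -> ["https://models.example.com/churn/v/3d817cb8/predict"]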

By storing everything in Git we can make complex infrastructure changes safe, reviewable, and undoable.

Git is great for ML

With the changes outlined above, Git can be a great tool for Data Scientists and ML Engineers. We’ve built all of this (and more) into Modelbit. Sign up for Modelbit and see what Git can be like once it’s optimized for machine learning.
