A swift guide to experiment tracking with MLFlow
No Data Science professional is unfamiliar with the arduous process of trial and error. Day to day, we encounter many cases where we need to keep track of branching and experimentation in ML pipelines. This quickly gets out of hand once a project becomes large enough, and it is nearly impossible to remember the experiments run on an old project.
Our objective is to provide reproducible results with optimized metrics for a solution. As the number of experiments on the data and model side grows, Data Scientists tend to forget all the small changes they made, and reproducibility suffers greatly. This is where experiment tracking comes into play.
Experiment tracking is the process of keeping a record of all the relevant information from a Machine Learning experiment: source code, environment changes, data and model modifications, and more. When experimenting and iterating through data and model modifications, things quickly get out of hand and Data Scientists tend to forget what exactly was used for a specific run. Experiment tracking greatly improves reproducibility, organization and optimization. You might think that tools such as hyperparameter optimization already take care of this, but they only tune the model; they do not keep a record of everything else that went into a run. The experiment tracking solution that first comes to mind is a spreadsheet, but it is error prone: the user has to log everything manually, there is no standard sheet format, and it captures no knowledge of the data and preprocessing steps used. This is where MLFlow comes into play. It is an open-source platform for the machine learning lifecycle, addressing the whole process of building and maintaining models. MLFlow is composed of four main modules: Tracking, Projects, Models and the Model Registry.
We’ll be focusing mainly on Tracking, as it is the module of most interest to Data Scientists; Models and the Model Registry will also be touched upon. The MLFlow Tracking module allows you to organize your experiments into units referred to as ‘runs’. With each run you can track parameters, metrics, artifacts (such as models and data files), and metadata.
MLFlow also automatically logs extra information such as the source code, the author, the git version of the code, and the execution time and date. The Tracking component provides APIs for Python, R and Java, as well as a REST API.
We will be exploring MLFlow in Python, and for that we need to install MLFlow itself and a backend; an SQLAlchemy-compatible database (SQLite in this tutorial) will be used as the backend to store runs.
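For example, everything needed for this tutorial can be installed with pip; scikit-learn is included here because we will use it for the example model later, and SQLAlchemy is installed as a dependency of MLFlow itself:

```bash
pip install mlflow scikit-learn
```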
After the installation, you can run the UI server locally with an SQLite backend for model registry:
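A typical invocation looks like the following; the database file name mlflow.db is an arbitrary choice, and MLFlow will create the file if it does not exist:

```bash
mlflow ui --backend-store-uri sqlite:///mlflow.db
```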
This tells MLFlow where we want to store all the artifacts and metadata for the experiments. In this case it’s an SQL database, but it can also be a remote server, Databricks workspace, or any other database.
By following the link generated, we are greeted with the home screen:
With the server up and running, we can move on to experimenting. In order to link our code to the MLFlow server, we’ll need the experiment name and the tracking URI (Uniform Resource Identifier). The tracking URI is the URI of our backend, which is sqlite:///mlflow.db in our case. The experiment name is the name of the task under which all the different models and experiments will be grouped. To connect your code to the backend, initialize the connection at the top of your script.
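Here is a minimal sketch; the experiment name diabetes-experiment is just an illustrative choice for this tutorial:

```python
import mlflow

# Point the client at the same SQLite backend the UI server uses
mlflow.set_tracking_uri("sqlite:///mlflow.db")

# Group all subsequent runs under one experiment
mlflow.set_experiment("diabetes-experiment")
```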
If the experiment does not exist yet, MLFlow will automatically create it under the given name.
With a connected MLFlow backend, we can start tracking. The dataset we’ll use is a diabetes dataset, where the target is the progression of the disease.
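We will assume the scikit-learn version of this dataset, which ships with the library:

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

# The target is a quantitative measure of disease progression
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```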
We’ll create a RandomForestRegressor as our model. To start tracking, we can either use a context manager or start and stop runs manually. The context manager fits our case best, as it handles all of MLFlow’s opening and closing logic for us.
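Here is a minimal sketch of the run setup; the hyperparameters below are illustrative values, not tuned ones:

```python
from sklearn.ensemble import RandomForestRegressor

params = {"n_estimators": 100, "max_depth": 6, "random_state": 42}

# Everything logged inside the `with` block is attached to this run,
# and the run is closed automatically when the block exits
with mlflow.start_run():
    rf = RandomForestRegressor(**params)
    rf.fit(X_train, y_train)
    predictions = rf.predict(X_test)
```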
The next step is to log all the necessary information. For instance, we can log the parameters, set tags and metadata, and store the model and the data used.
As we might be training different models within a single experiment, it’s good practice to log the model name as a tag, so we can easily search for it later in the MLFlow backend.
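Inside the active run, one call is enough; the tag key and value here are simply our own naming choice:

```python
mlflow.set_tag("model", "RandomForestRegressor")
```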
This tag will be present in the MLFlow logs as:
Next, we would like to log some metrics to evaluate the experiment, along with the parameters used for the model. To do so, we can use mlflow.log_metric and mlflow.log_param, or mlflow.log_params to log a whole dict at once.
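Placed inside the run context opened earlier, this could look like the following sketch; RMSE is used here purely as an illustrative metric:

```python
from sklearn.metrics import mean_squared_error

# Log all model hyperparameters at once
mlflow.log_params(params)

# Log an evaluation metric for this run
rmse = mean_squared_error(y_test, predictions) ** 0.5
mlflow.log_metric("rmse", rmse)
```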
Now, after running the experiment, we can see all the metrics and parameters logged for the run in the UI:
where we can see all the model parameters:
as well as all the metrics and tags:
Now that we have a way of logging parameters, metrics and metadata, we also want to log the data and the model to stay consistent and allow reproducibility. These two fall under the artifacts section. MLFlow has a nice way of logging and saving models for PyTorch, Scikit-Learn, XGBoost and many others.
To save the scikit-learn model and the training data, we log both as artifacts from within the run.
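A sketch, assuming the model and data splits from earlier are still in scope; the CSV file name is an arbitrary choice for this example:

```python
# Save the trained model under the run's artifacts
mlflow.sklearn.log_model(rf, artifact_path="model")

# Save the training data alongside it
X_train.assign(target=y_train).to_csv("diabetes_train.csv", index=False)
mlflow.log_artifact("diabetes_train.csv", artifact_path="data")
```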
Artifacts are stored by first saving the files somewhere and then logging their path to MLFlow. We can see all the artifacts and models in the lower sections of the run page in the web UI:
MLFlow also provides a snippet of code for running any model that’s been tracked, be it a PyTorch neural network or a Scikit-Learn model, through a simple API. All that’s needed is the run id; with it we can fetch the model and use it within a few lines of code. MLFlow also gathers all the requirements used and places them into a python_env.yaml, which lets others install and use your environment easily.
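For example, loading the model back through the generic pyfunc interface might look like this; the run id below is a placeholder you would copy from the run page:

```python
import mlflow.pyfunc

# The run id is a placeholder here; copy it from the run page in the UI
run_id = "<your-run-id>"
loaded_model = mlflow.pyfunc.load_model(f"runs:/{run_id}/model")
loaded_model.predict(X_test)
```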
We are also presented with the option to register the model if we are satisfied with the results. A registered model can then be moved from development to staging and eventually to production.
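Registration can be done with one click in the UI, or programmatically; the model name below is an arbitrary choice, and the run id is reused from the previous snippet:

```python
# Register the logged model under a name in the Model Registry
mlflow.register_model(f"runs:/{run_id}/model", "diabetes-random-forest")
```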
With this we can easily share the model with other teams and move it forward, or compare different models.
MLFlow also provides automatic logging, which logs nearly everything there is in a run. This is not always desirable, as it produces a lot of noise, but it’s a fast way to start tracking your experiments.
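For scikit-learn, a single call before training enables it; this sketch assumes the same data and parameters as before:

```python
# A single call enables automatic logging for scikit-learn estimators;
# parameters, metrics and the fitted model are captured without explicit calls
mlflow.sklearn.autolog()

with mlflow.start_run():
    rf = RandomForestRegressor(**params)
    rf.fit(X_train, y_train)
```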
MLFlow might seem like the perfect fit, but it comes with a list of limitations as well. One of them concerns users and authentication: MLFlow has no notion of teams, users or authentication, so you have to look for workarounds if you want to use it in a team.
The lack of authentication also raises security concerns, so MLFlow must be used carefully and, if it is linked to a deployment worker, accessed through a VPN. A workaround is the paid version of Databricks, which includes an ML platform with MLFlow integrated and provides a notion of users, teams and authentication.
The second limiting factor is data versioning: if you want full reproducibility, you have to rely on external data versioning tools.
Finally, there is no model & data monitoring system built in. MLFlow is only focused on experiment tracking and model management. NannyML is a great solution for post-deployment model monitoring and performance estimation.
Some of the biggest alternatives to MLFlow are Neptune, Comet and Weights & Biases. Each has its ups and downs and is best suited to specific use cases, but MLFlow stands out as a free, open-source solution with a rich experiment tracking interface and an active community that keeps it updated and patched.
I hope this short tutorial and brief introduction to MLFlow has helped you get started with modern tools for experiment tracking, rather than logging everything on a piece of paper or in Excel. I recommend checking out the official MLFlow documentation, which is friendly and easy to follow. Happy experimenting, and be sure to follow us to keep up with advances in AI, tutorials on the tools data scientists use every day, and lots of other fun and educational content.
Thank you for reading!