This article is about MLflow – an open-source MLOps tool. If you’ve never heard of it, here’s a tutorial.
I am focusing on MLflow Tracking, the functionality that allows logging and viewing parameters, metrics, and artifacts (files) for each of your models/experiments.
When you log the models you experiment with, you can then summarize and analyze your runs within the MLflow UI (and beyond). You can understand which of them performed best, troubleshoot, and select the best candidates for deployment. I use the tool daily and have discovered many features that made my life much easier. Enjoy!

1. Interactive Artifacts – HTML, GeoJSON, and Others
The artifact viewer is a great feature for drilling down into the models you log. You can save files in any format and they will be available for download, but only some file types are previewed or rendered in the artifact viewer window. As of mid-2020, they are (source: FileUtils.ts):
- Text: .txt, .log, .py, .js, .yaml, .yml, .json, .csv, .tsv, .md, .rst, .jsonnet
- Image: .jpg, .bmp, .jpeg, .png, .gif, .svg
- Interactive: .html, .geojson
Here’s a sample snippet saving some of these files:
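A minimal sketch of the idea, with illustrative file names and contents:

```python
import json

import mlflow

with mlflow.start_run():
    # Text: previewed as plain text in the artifact viewer
    with open("notes.txt", "w") as f:
        f.write("Experiment notes go here.")
    mlflow.log_artifact("notes.txt")

    # Interactive: HTML is rendered in the viewer
    with open("report.html", "w") as f:
        f.write("<h1>Report</h1><p>Rendered by the artifact viewer.</p>")
    mlflow.log_artifact("report.html")

    # Interactive: GeoJSON is displayed on a map
    point = {"type": "Point", "coordinates": [21.01, 52.23]}
    with open("location.geojson", "w") as f:
        json.dump(point, f)
    mlflow.log_artifact("location.geojson")
```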
Here’s what the UI looks like for this run, rendering very nicely in the Artifact Viewer Window:

On top of that, because the viewer can render iframes in HTML, you can actually embed a website (e.g. a dashboard) in a run, as in the sketch below. This can be leveraged very nicely if your dashboards are parameterized through the URL.
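A hedged sketch of the idea; the dashboard URL is hypothetical:

```python
import mlflow

# Hypothetical parameterized dashboard URL
dashboard_url = "https://dashboards.example.com/report?run=42"
html = f'<iframe src="{dashboard_url}" width="100%" height="800"></iframe>'

with mlflow.start_run():
    with open("dashboard.html", "w") as f:
        f.write(html)
    # When previewed, the rendered HTML loads the embedded site
    mlflow.log_artifact("dashboard.html")
```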

Limitations: if your embedded website requires outside context or some authentication, it might not work. You can read more about it in this issue.
This use case really pushes the tool to its limits and feels a bit hacky, but it works. I can’t imagine many users embedding websites like that.
2. Artifacts Organised with Folders
When you have many artifacts in a run, you might want to organize them into folders. Here’s how:
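A minimal sketch using the artifact_path argument of log_artifact; the file names are illustrative:

```python
import mlflow

# Stand-in files; in practice these are your real outputs
with open("summary.txt", "w") as f:
    f.write("model summary")
with open("errors.csv", "w") as f:
    f.write("id,error\n1,0.3\n")

with mlflow.start_run():
    # artifact_path places the file inside a folder in the artifact store
    mlflow.log_artifact("summary.txt", artifact_path="reports")
    mlflow.log_artifact("errors.csv", artifact_path="reports/tables")
```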
Results in:

3. Runs Organised in Sections Using nested=True
Sometimes a run has many sections, each with its own set of parameters and metrics. You can separate them using nested runs, as in the sketch below.
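A minimal sketch of a parent run with nested children; the stage names and values are made up:

```python
import mlflow

with mlflow.start_run(run_name="pipeline"):
    mlflow.log_param("dataset", "v1")
    for stage in ["preprocessing", "training", "evaluation"]:
        # nested=True attaches the child run to the currently active run
        with mlflow.start_run(run_name=stage, nested=True):
            mlflow.log_param("stage", stage)
            mlflow.log_metric("duration_s", 1.0)
```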
Look at this nicely separated run tree:

Limitations: As you can see above, the UI shows the tree structure only for the first level of nesting; however, the Parent Run property is correct for any nested run. I hope the UI reflects the full nested tree in future versions of MLflow.
4. Querying Runs Programmatically with pandas
So you ran your experiments, did your batch analysis, and MLflow’s automatic plots are great, but you’d like something more. You can export your runs and experiments into a pandas DataFrame, together with all parameters, metrics, and artifact URIs!
```python
mlflow.search_runs(experiment_ids=["your_exprmnt_id"])
```
This will get you a pandas DataFrame with all the information you need!
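From there, standard pandas operations apply. For example, you could sort by a metric to shortlist your best runs; the metric and parameter column names below are hypothetical and depend on what you logged:

```python
import mlflow

runs = mlflow.search_runs(experiment_ids=["your_exprmnt_id"])

# Parameters appear as "params.<name>" columns, metrics as "metrics.<name>"
best = runs.sort_values("metrics.rmse").head(5)  # assumed metric name
print(best[["run_id", "params.model_type", "metrics.rmse"]])
```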

5. Correcting Runs
Imagine you ran your experiment some time ago, but later found an annoying bug in your code. For example, you forgot to divide seconds by 3600 to get hours. Here’s how to correct it:
You can correct, add to, or delete any MLflow run at any time after it is complete. Get the run_id either from the UI or via the search_runs API explained above.
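A hedged sketch using MlflowClient; the run ID and the metric name are placeholders:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
run_id = "your_run_id"  # copy it from the UI or from search_runs

# Assumed scenario: "duration" was logged in seconds instead of hours
seconds = client.get_run(run_id).data.metrics["duration"]
client.log_metric(run_id, "duration", seconds / 3600)
```

Note that this appends a new value to the metric’s history rather than overwriting it; the run page displays the most recent value.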
6. MLflow is Not Only for ML
(More of an observation than a tip)
All of us programmers run experiments: tweaking input parameters to optimize output metrics.
I found MLflow Tracking and its UI very useful in many non-ML experiments, e.g. profiling algorithms or more general AI tasks. The easy-to-use API and the simple but functional UI let it serve its purpose well beyond machine learning: the logging capabilities are almost unlimited, and the automated plots are simple but informative!