Disclaimer: I am co-creator of Aim.

AI researchers are taking on increasingly ambitious problems, and as a result the demand for AI compute and data multiplies month over month.
With more compute and more data available, AI engineers run not only longer experiments but also far more of them than they used to.
Usually, AI research starts with setting up the data pipeline (or several versions of it), followed by an initial group of experiments to test the pipeline and architecture and to catch basic bugs. Once that’s established, the rounds of experiments begin!
Then folks play with the different moving pieces (datasets, pipeline, hyperparameters, architecture, etc.) either by hand or using techniques like grid search…
Many end up with a messy pile of experiments spanning many dimensions.
Now, two weeks before your product launch or conference submission deadline, you need to go through it all, find patterns and decide on the next set of experiments.
How fast is the transition from one round of experiments to another?
After several rounds, the experiments accumulate and you are left alone with several dozen runs to decipher: discover trends and identify the next set of experiments. Such a mess, and the problem keeps growing.

All you are doing is trying to answer questions as basic as: which max_k values have I used, and how do they correlate with the validation loss and bleu metrics? Which group of experiments performed best compared to the others, and why?
These are relatively easy questions to answer for 10 experiments, but what if you have 200?
It takes hours and lots of patience!
Losing time on repetitive drudgery is the worst…
Not anymore!
Aim

Now we can compare hundreds of experiments in minutes with Aim.
How?
It handles experiment metric manipulations (grouping, aggregation, etc.) and has an advanced search to make comparison and analysis quick and efficient. It’s easy to integrate with the training code and use straight away.
This means you can focus on your research, run as many experiments as you like and spend dramatically less time between rounds of experiments. Aim will help you iterate much faster!
You need to know only two methods to get these superpowers: aim.set_params and aim.track. After you have integrated these two functions into your training code, start your training and then run aim up in your terminal.
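Here is a minimal sketch of what that integration might look like. The hyperparameter dict, the metric names, and the dummy loop values are illustrative, and the exact keyword arguments of aim.set_params and aim.track may differ between Aim versions:

```python
import random
import aim

# Log the hyperparameters once, before training starts
# (the dict contents and the 'hparams' namespace are illustrative)
aim.set_params({'max_k': 16, 'learning_rate': 0.001}, name='hparams')

for epoch in range(10):
    # Stand-ins for a real training and evaluation loop
    train_loss = 1.0 / (epoch + 1) + random.random() * 0.05
    val_loss = 1.2 / (epoch + 1) + random.random() * 0.05
    bleu = 20 + epoch + random.random()

    # Track each metric per epoch so Aim can plot, group and compare runs
    aim.track(train_loss, name='train_loss', epoch=epoch)
    aim.track(val_loss, name='val_loss', epoch=epoch)
    aim.track(bleu, name='bleu', epoch=epoch)
```

With the runs logged, aim up serves the UI where the three features below come into play.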
Three features will help you compare faster: search, grouping and aggregation of your metrics/experiments.
Search
To search your experiments you only need to know Python (the query language in Aim is super-pythonic!) and the parameters you have passed to Aim via aim.set_params(...).
Select a set of metrics in the SELECT input and provide filter criteria in the IF input.
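For illustration, a search might look roughly like this; the hparams.* field names simply mirror the dict logged via aim.set_params, and the exact prefix and query syntax are assumptions that may vary between Aim versions:

```
SELECT:  val_loss, bleu
IF:      hparams.max_k in (8, 16) and hparams.learning_rate < 0.01
```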

Group
For your searched/selected experiment metrics, group them by different parameters to see their effects. You are one click away from revealing how a parameter, say max_k, is affecting your training runs.

Aggregate
Aggregate the grouped metrics to see trends per parameter (or set of parameters). This makes hundreds of experiments look like one and gives a clear idea of how effective your ideas are.

These are just a few examples of the metric manipulations Aim offers to help you quickly analyze training runs and move on.
All of this took only a couple lines of code and a couple clicks!
Now you can iterate faster between rounds of experiments and focus your time on the tasks that matter most for research, instead of squinting at giant tooltips for hours and writing regular expressions against long experiment names.