OpenML: Machine Learning as a community

Neeratyoy Mallik
Towards Data Science
8 min read · Oct 26, 2019


OpenML is an online Machine Learning (ML) experiments database that everyone can access for free. The core idea is to have a single repository of datasets and of the results of ML experiments on them. Although ML has gained a lot of popularity in recent years and a plethora of tools is now available, most ML experimentation still happens in silos and not as one whole, shared community.

In this post, we shall try to get a brief glimpse of what OpenML offers and how it can fit our current Machine Learning practices.

Let us jump straight in and get our hands dirty by building a simple machine learning model. If it is simplicity we are looking for, it has to be the Iris dataset that we work with. In the example script below, we load the Iris dataset that ships with scikit-learn and use 10-fold cross-validation to evaluate a Random Forest of 10 trees. It sounds trivial enough, and it is indeed less than 10 lines of code.
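A minimal sketch of such a script (the hyperparameters shown match the values referenced later in this post):

from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Load the Iris dataset bundled with scikit-learn
X, y = datasets.load_iris(return_X_y=True)

# Evaluate a Random Forest of 10 trees with 10-fold cross-validation
clf = RandomForestClassifier(n_estimators=10, max_depth=2)
scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')
print("Mean accuracy: {:.2f}%".format(scores.mean() * 100))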

A simple script, and we achieve a mean accuracy of 95.33%. That was easy. It is really amazing how far we have come with ML tools that make it easy to get started. As a result, we have hundreds of thousands of people working with these tools every day. That inevitably leads to reinvention of the wheel. The tasks that individual ML practitioners perform often overlap significantly and could be avoided by reusing what someone in the community has already done. At the end of the day, we didn’t build a Random Forest model from scratch; we gladly reused code written by generous folks in the community. The special attribute of our species is the ability to work as a collective, where our combined intellect becomes greater than the sum of its parts. Why not do the same for ML? That is, can I see what other ML practitioners have done to get better scores on the Iris dataset?

Answering this is one of the goals of this post. We shall subsequently explore whether this can be done with the help of OpenML. First, however, let us briefly familiarize ourselves with a few terms and see how the earlier example can be split into modular components.

OpenML Components

(Diagram of the OpenML components. Image source: https://medium.com/open-machine-learning/openml-1e0d43f0ae13)

Dataset: OpenML houses over 2,000 active datasets for regression, classification, clustering, survival analysis, stream processing tasks and more. Any user can upload a dataset. Once uploaded, the server computes certain meta-features on the dataset, such as the number of classes, number of missing values, and number of features. With respect to our earlier example, the following line is the equivalent of fetching a dataset from OpenML.

X, y = datasets.load_iris(return_X_y=True)

Task: A task is linked to a specific dataset and defines what the target/dependent variable is. It also specifies evaluation measures, such as accuracy, precision, or area under the curve, and the kind of estimation procedure to be used, such as 10-fold cross-validation or an n% holdout set. With respect to our earlier example, the parameters of the following function call capture the idea of a task.

scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')

Flow: Describes the kind of modelling to be performed. It could be a single model or a series of steps, i.e., a scikit-learn pipeline. For now, our simple Random Forest model is the flow component here.

clf = RandomForestClassifier(n_estimators=10, max_depth=2)

Run: Pairing a flow with a task results in a run. The run produces predictions, which the server turns into evaluations. This is effectively captured by the execution of the line:

scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')

Now, this may appear a little convoluted, given that we are compartmentalizing a simple 10-line script that works just fine. However, if we take a few seconds to go through the four components explained above, we can see that they turn our training of a Random Forest on Iris into a series of modular tasks. Modules are a fundamental concept in Computer Science; they are like Lego blocks. Once we have modules, we can plug and play at ease. The code snippet below rewrites the earlier example using the OpenML components described above, to give a glimpse of what we can potentially gain during experimentation.
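A sketch of what such a rewrite might look like; the helper functions below are purely illustrative and not part of any OpenML API:

from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def dataset():
    # Dataset component: where the data comes from
    return datasets.load_iris(return_X_y=True)

def task(X, y):
    # Task component: data + estimation procedure + evaluation measure
    return {'X': X, 'y': y, 'cv': 10, 'scoring': 'accuracy'}

def flow():
    # Flow component: the modelling to be performed
    return RandomForestClassifier(n_estimators=10, max_depth=2)

def run(flow, task):
    # Run component: pair a flow with a task and evaluate it
    return cross_val_score(flow, task['X'], task['y'],
                           cv=task['cv'], scoring=task['scoring'])

X, y = dataset()
scores = run(flow(), task(X, y))
print("Mean accuracy: {:.2f}%".format(scores.mean() * 100))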

We can therefore compose various tasks and flows as independent operations. A run can then pair any such task and flow to construct an ML workflow and return the evaluated scores. This approach lets us define such components once and extend them to any combination of dataset and model, and to any number of evaluations, in the future. Imagine if the entire ML community defined such tasks, along with the simple to complicated flows they use in their daily practice. We could build custom working ML pipelines and even compare the performance of our techniques on the same task with others! OpenML aims exactly for that. In the next section of this post, we shall scratch the surface of OpenML to see if it actually delivers on this promise.

Using OpenML

OpenML-Python can be installed using pip or by cloning the git repo and installing the current development version. So shall we then install OpenML? ;) It will be beneficial to try out the code snippets as you read this post. A consolidated Jupyter notebook with all the code can be found here.
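For instance, the released version can be installed with pip:

pip install openml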

Now that we have OpenML, let us jump straight into figuring out how we can get the Iris dataset from there. We can always browse the OpenML website and search for Iris. That is the easy route. Let us get familiar with the programmatic approach and learn how to fish instead. The OpenML-Python API can be found here.

Retrieving Iris from OpenML

In the example below, we list all the datasets available on OpenML. We can choose the output format; I’ll go with ‘dataframe’ so that we obtain a pandas DataFrame and get a neat tabular representation that we can search and sort.
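Something along these lines (the output_format parameter follows the OpenML-Python listing API):

import openml

# List all datasets on OpenML as a pandas DataFrame
datalist = openml.datasets.list_datasets(output_format='dataframe')
print(datalist.shape)
print(datalist.columns)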

The column names indicate that they contain meta-information about each dataset, and at this moment we have access to 2958 datasets, as indicated by the shape of the dataframe. We shall search for ‘iris’ in the name column and use the version column to sort the results.
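A sketch of that search, assuming the listing contains ‘name’ and ‘version’ columns:

# Find datasets whose name contains 'iris' and sort them by version
iris_datasets = datalist[datalist['name'].str.contains('iris', case=False)]
print(iris_datasets.sort_values(by='version').head())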

Okay, so version 1 of the iris dataset has an ID of 61. For verification, we can check the website for dataset ID 61. We can see that it is the original Iris dataset we are interested in: 3 classes of 50 instances each, with 4 numeric features. However, as promised, we shall retrieve the same information programmatically.
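A sketch of fetching the dataset by its ID:

# Download dataset 61 and peek at its name and description
iris_data = openml.datasets.get_dataset(61)
print(iris_data.name)
print(iris_data.description[:300])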

With the appropriate dataset available, let us briefly go back to the terminologies we discussed earlier. We have only used the dataset component so far. The dataset component is closely tied with the task component. To reiterate, the task would describe how the dataset will be used.

Retrieving relevant tasks from OpenML

We shall first list all available tasks that work with the Iris dataset. However, since we are treating Iris as a supervised classification problem only, we will filter accordingly and then collect just the task IDs of the tasks relevant to us.
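A sketch of that listing and filtering step; the data_id filter and the column names are assumptions based on the OpenML-Python task-listing output:

# List all tasks defined on dataset 61 (Iris)
tasks = openml.tasks.list_tasks(data_id=61, output_format='dataframe')

# Keep only supervised classification tasks and collect their IDs
tasks = tasks[tasks['task_type'] == 'Supervised Classification']
task_ids = list(tasks['tid'])
print(len(task_ids))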

That settles the task component too. Notice how, for one dataset (61), we obtain 11 task IDs that are of interest to us. This illustrates the one-to-many relationship that the dataset and task components can have. We have two more components to explore: flows and runs. We could list all possible flows and filter out the ones we want, i.e., Random Forest. However, let us instead fetch all the evaluations made on the Iris dataset using the 11 tasks collected above.
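A sketch of fetching those evaluations; the parameter names follow the current OpenML-Python evaluations API:

# Fetch all accuracy evaluations uploaded for the collected Iris tasks
evals = openml.evaluations.list_evaluations(function='predictive_accuracy',
                                            tasks=task_ids,
                                            output_format='dataframe')
print(evals.shape)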

We shall subsequently work with the scikit-learn based task that has been uploaded/used the most. We shall then further filter the list of evaluations from the selected task (task_id=59 in this case) down to those where a Random Forest was used.
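That filtering could look roughly like this, assuming ‘task_id’ and ‘flow_name’ columns in the evaluations listing:

# Keep only evaluations on the most-used task and Random Forest flows
task59_evals = evals[evals['task_id'] == 59]
rf_evals = task59_evals[task59_evals['flow_name'].str.contains('RandomForest')]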

Retrieving top-performing models from OpenML

Since we are an ambitious bunch of ML practitioners who settle for nothing but the best, and since most results are not considered worth the effort unless they match or beat the state of the art, we shall aim for the best scores. We’ll sort the filtered results based on the score, or ‘value’, and then extract the components of that run: its task and flow.
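A sketch, assuming the score column is named ‘value’:

# Sort by score and fetch the components behind the best run
best = rf_evals.sort_values(by='value', ascending=False).iloc[0]
best_run = openml.runs.get_run(best['run_id'])
best_task = openml.tasks.get_task(best['task_id'])
best_flow = openml.flows.get_flow(best['flow_id'])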

Okay, let’s take a pause and re-assess. Across all the users around the globe who have uploaded runs of a Random Forest on Iris to OpenML, the best score seen so far is 96.67%. That is certainly better than the 95.33% of the naive model we built at the beginning, where we used basic 10-fold cross-validation to evaluate a Random Forest of 10 trees with a max depth of 2. Let us see what the best run uses and whether it differs from our approach.
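We can inspect the retrieved task and flow along these lines; note that flow.parameters holds the flow’s default hyperparameter values, and the exact setup of the run may differ:

# Estimation procedure of the task, and the flow with its defaults
print(best_task.estimation_procedure)
print(best_flow.name)
print(best_flow.parameters)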

As is evident, our initial approach differs on two fronts: we didn’t explicitly use stratified sampling for our cross-validation, and the Random Forest hyperparameters are slightly different too (max_depth=None). That definitely sounds like a to-do; however, there is no reason to restrict ourselves to Random Forests. Remember, we are aiming big here. Given the number of OpenML users, somebody must have obtained a better score on Iris with some other model. Let us retrieve that information. Programmatically, of course.

In summary, we are now going to sort the performance of all scikit-learn based models on the Iris dataset, as per the task definition with task_id=59.
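Roughly, under the same column-name assumptions as before:

# Rank every uploaded flow on task 59 by its accuracy
task59_evals = evals[evals['task_id'] == 59]
top = task59_evals.sort_values(by='value', ascending=False)
print(top[['flow_name', 'value']].head())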

The highest score among the uploaded results is 98.67%, obtained using a variant of SVM. However, if we check the corresponding flow description, we see that it used an old scikit-learn version (0.18.1), so we may not be able to replicate the exact results. Still, to improve on our score of 95.33%, we should try running a NuSVC on the same problem and see where we stand. Let’s go for it. Via OpenML, of course.

Running the best performing flow on the required task
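A sketch of that run, using run_model_on_task from OpenML-Python; the NuSVC hyperparameters are left at their scikit-learn defaults here:

import sklearn.metrics
from sklearn.svm import NuSVC

# Fetch the task definition; OpenML then trains and evaluates the
# model according to the task's own estimation procedure
task = openml.tasks.get_task(59)
clf = NuSVC()
r = openml.runs.run_model_on_task(clf, task)

# Compute the accuracy locally from the run's predictions
scores = r.get_metric_fn(sklearn.metrics.accuracy_score)
print("Mean accuracy: {:.2f}%".format(scores.mean() * 100))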

Lo and behold! We hit the magic number. I personally would have never tried out NuSVC and would have stuck around tweaking hyperparameters of the Random Forest. This is a new discovery of sorts for sure. I wonder though if anybody has tried XGBoost on Iris?

In any case, we can now upload the results of this run to OpenML using:

r.publish()

One would need to sign in to https://www.openml.org/ and generate their API key. The results will then be available for everyone to view, and who knows, you might have your name against the best-ever performance measured on the Iris dataset!
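The key is set via the library’s configuration, for example:

# Set your personal API key (a placeholder here) before publishing
openml.config.apikey = 'YOUR_API_KEY'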

This post was in no way intended to be a be-all and end-all guide to OpenML. The primary goal was to build acquaintance with the OpenML terminology, introduce the API, establish connections to general ML practice, and give a sneak peek into the potential benefits of working together as a community. For a better understanding of OpenML, please explore the documentation. To continue from the examples given in this post and explore further, kindly refer to the API.

OpenML-Python is an open-source project, and contributions from everyone in the form of issues and pull requests are most welcome. Contributing to the OpenML community is in fact not limited to code: every single user can make the community richer by sharing data, experiments, and results through OpenML.

As ML practitioners, we may depend on tools for our tasks. As a collective, however, we can squeeze far more potential out of them. Let us, together, make ML more transparent and more democratic!

Special thanks to Heidi, Bilge, Sahithya, Matthias, Ashwin for the ideas, feedback, and support.
