Classify Song Genres From Audio Data

These recommendations are so on point! How does this playlist know me so well?

Akash Dubey
Towards Data Science


Photo by bruce mars on Unsplash

Introduction

Over the past few years, streaming services with huge catalogs have become the primary means through which most people listen to their favorite music. At the same time, the sheer amount of music on offer can leave users feeling overwhelmed when trying to find new music that suits their tastes.

For this reason, streaming services have looked into means of categorizing music to allow for personalized recommendations. One method involves analyzing the raw audio information in a given song and scoring it on a variety of metrics. In this article, I’ll be examining data compiled by a research group known as The Echo Nest. Our goal is to look through this dataset and classify songs as either ‘Hip-Hop’ or ‘Rock’, all without listening to a single one ourselves. In doing so, we will learn how to clean our data, do some exploratory data visualization, and use feature reduction to prepare our data for some simple machine learning algorithms, such as decision trees and logistic regression.

So, let’s get started!

1. Loading and preparing the dataset

To begin with, let’s load the metadata about our tracks alongside the track metrics compiled by The Echo Nest. A song is about more than its title, artist, and the number of listens. We have another dataset that contains musical features of each track, such as danceability and acousticness, on a scale from -1 to 1. These exist in two files with different formats: CSV and JSON. While CSV is a popular file format for tabular data, JSON is another common format in which databases often return the results of a given query.

Also, let’s check what our data frame looks like.
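Below is a minimal sketch of this step. The file paths (`datasets/fma-rock-vs-hiphop.csv`, `datasets/echonest-metrics.json`) and the column names `track_id` and `genre_top` are assumptions for illustration; adjust them to match your copy of the data.

```python
import pandas as pd

# Hypothetical file paths; adjust to match your copy of the data
tracks = pd.read_csv('datasets/fma-rock-vs-hiphop.csv')           # track metadata (CSV)
echonest_metrics = pd.read_json('datasets/echonest-metrics.json')  # audio features (JSON)

# Keep only the genre label from the metadata, joined on the shared track id
echo_tracks = echonest_metrics.merge(tracks[['track_id', 'genre_top']], on='track_id')

# Inspect the merged data frame
print(echo_tracks.info())
print(echo_tracks.head())
```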

2. Pairwise relationships between continuous variables

We typically want to avoid using variables that have strong correlations with each other, and thereby avoid feature redundancy, for a few reasons:

  • To keep the model simple and improve interpretability (with many features, we run the risk of overfitting).
  • When our datasets are very large, using fewer features can drastically speed up our computation time.

To get a sense of whether there are any strongly correlated features in our data, we will use built-in functions in the pandas package.
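A sketch of that check, assuming the merged `echo_tracks` data frame and the column names from the loading step:

```python
# Drop the non-feature columns, then use pandas' built-in corr() for the
# pairwise Pearson correlations (column names as assumed above)
corr_metrics = echo_tracks.drop(columns=['genre_top', 'track_id']).corr()

# In a notebook, a colour gradient makes strong correlations easy to spot
corr_metrics.style.background_gradient()
```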

From the above plot, we can see that there are no particularly strong correlations between any of the features, so we do not need to remove any features from our data.

3. Normalizing the feature data

As mentioned earlier, it can be particularly useful to simplify our models and use as few features as necessary to achieve the best result. Since we didn’t find any particular strong correlations between our features, we can instead use a common approach to reduce the number of features called principal component analysis (PCA).

It is possible that the variance between genres can be explained by just a few features in the dataset. PCA rotates the data along the axis of highest variance, thus allowing us to determine the relative contribution of each feature of our data towards the variance between classes.

However, since PCA uses the absolute variance of a feature to rotate the data, a feature with a broader range of values will overpower and bias the algorithm relative to the other features. To avoid this, we must first normalize our data. There are a few methods to do this, but a common way is through standardization, such that all features have a mean = 0 and standard deviation = 1 (the resulting values are z-scores).

Let’s check what our data frame looks like after standardization.
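One way to do this is with scikit-learn’s StandardScaler, again assuming the `echo_tracks` data frame and column names from the earlier sketch:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Separate the features from the genre labels
features = echo_tracks.drop(columns=['genre_top', 'track_id'])
labels = echo_tracks['genre_top']

# Standardize so every feature has mean 0 and standard deviation 1 (z-scores)
scaler = StandardScaler()
scaled_train_features = scaler.fit_transform(features)

# Peek at the standardized values
print(pd.DataFrame(scaled_train_features, columns=features.columns).head())
```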

4. Principal component analysis on our scaled data

Now that we have preprocessed our data, we are ready to use PCA to determine by how much we can reduce the dimensionality of our data. We can use scree plots and cumulative explained variance plots to find the number of components to use in further analyses.

Scree-plots display the number of components against the variance explained by each component, sorted in descending order of variance. Scree-plots help us get a better sense of which components explain a sufficient amount of variance in our data. When using scree plots, an ‘elbow’ (a steep drop from one data point to the next) in the plot is typically used to decide on an appropriate cutoff.
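A sketch of fitting PCA and drawing the scree plot, assuming the `scaled_train_features` array from the standardization step:

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Fit PCA on the standardized features from the previous step
pca = PCA()
pca.fit(scaled_train_features)
exp_variance = pca.explained_variance_ratio_

# Scree plot: explained variance ratio of each principal component
fig, ax = plt.subplots()
ax.bar(range(1, len(exp_variance) + 1), exp_variance)
ax.set_xlabel('Principal component')
ax.set_ylabel('Explained variance ratio')
plt.show()
```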

Unfortunately, there does not appear to be a clear elbow in this scree plot, which means it is not straightforward to find the number of intrinsic dimensions using this method.

5. Further visualization of PCA

But all is not lost! Instead, we can also look at the cumulative explained variance plot to determine how many features are required to explain, say, about 90% of the variance (cutoffs are somewhat arbitrary here, and usually decided upon by ‘rules of thumb’). Once we determine the appropriate number of components, we can perform PCA with that many components, ideally reducing the dimensionality of our data.
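A sketch of both steps, reusing the explained variance ratios computed above; the choice of six components is purely illustrative, not a value taken from the article:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Cumulative explained variance, with a reference line at the (arbitrary) 90% cutoff
cum_exp_variance = np.cumsum(exp_variance)

fig, ax = plt.subplots()
ax.plot(range(1, len(cum_exp_variance) + 1), cum_exp_variance)
ax.axhline(y=0.9, linestyle='--')
ax.set_xlabel('Number of components')
ax.set_ylabel('Cumulative explained variance')
plt.show()

# Re-fit PCA with however many components cross the 90% line;
# n_components=6 here is an illustrative assumption
pca = PCA(n_components=6, random_state=10)
pca_projection = pca.fit_transform(scaled_train_features)
```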

Now we can use the lower dimensional PCA projection of the data to classify songs into genres. To do that, we first need to split our dataset into ‘train’ and ‘test’ subsets, where the ‘train’ subset will be used to train our model while the ‘test’ dataset allows for model performance validation.
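For example, using scikit-learn’s train_test_split (the default 75/25 split and the random seed are assumptions):

```python
from sklearn.model_selection import train_test_split

# Split the PCA projection and the genre labels into train and test subsets
train_features, test_features, train_labels, test_labels = train_test_split(
    pca_projection, labels, random_state=10)
```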

6. Train a decision tree to classify the genre

In this article, we will be using a simple algorithm known as a decision tree. Decision trees are rule-based classifiers that take in features and follow a ‘tree structure’ of binary decisions to ultimately classify a data point into one of two or more categories. In addition to being easy to both use and interpret, decision trees allow us to visualize the ‘logic flowchart’ that the model generates from the training data.
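A minimal example with scikit-learn’s DecisionTreeClassifier, trained on the PCA-projected features from the previous step:

```python
from sklearn.tree import DecisionTreeClassifier

# Fit a decision tree on the PCA-projected training data
tree = DecisionTreeClassifier(random_state=10)
tree.fit(train_features, train_labels)

# Predict genres for the held-out test set
pred_labels_tree = tree.predict(test_features)
```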

Although our tree’s performance is decent, it’s a bad idea to immediately assume that it’s, therefore, the perfect tool for this job — there’s always the possibility of other models that will perform even better! It’s always a worthwhile idea to at least test a few other algorithms and find the one that’s best for our data.

7. Compare our decision tree model to a logistic regression

Sometimes simplest is best, and so we will start by applying logistic regression. Logistic regression makes use of what’s called the logistic function to calculate the odds that a given data point belongs to a given class. Once we have both models, we can compare them on a few performance metrics, such as false positive and false negative rate (or how many points are inaccurately classified).
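A sketch of fitting the logistic regression and comparing both models with a classification report, continuing from the split and decision tree above:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Fit a logistic regression on the same training data
logreg = LogisticRegression(random_state=10)
logreg.fit(train_features, train_labels)
pred_labels_logit = logreg.predict(test_features)

# Compare the two models on per-class precision, recall and F1
print("Decision Tree:\n", classification_report(test_labels, pred_labels_tree))
print("Logistic Regression:\n", classification_report(test_labels, pred_labels_logit))
```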

Both our models do similarly well, boasting an average precision of 87% each. However, looking at our classification report, we can see that rock songs are fairly well classified, but hip-hop songs are disproportionately misclassified as rock songs.

Why might this be the case?

Well, just by looking at the number of data points we have for each class, we see that we have far more data points for the rock classification than for hip-hop, potentially skewing our model’s ability to distinguish between classes. This also tells us that most of our model’s accuracy is driven by its ability to classify just rock songs, which is less than ideal.

8. Balance our data for greater performance

To account for this, we can weight the value of a correct classification in each class inversely to the occurrence of data points for each class. Since a correct classification for “Rock” is not more important than a correct classification for “Hip-Hop” (and vice versa), we only need to account for the difference in sample size between our classes, not for any difference in their relative importance. The simplest way to do that here is to subsample the rock songs so that both classes contain the same number of data points, as sketched below.
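A sketch of that balancing step, assuming the genre labels are the strings 'Hip-Hop' and 'Rock' in the merged `echo_tracks` data frame:

```python
import pandas as pd

# Split the merged data frame by genre (label strings assumed)
hop_only = echo_tracks.loc[echo_tracks['genre_top'] == 'Hip-Hop']
rock_only = echo_tracks.loc[echo_tracks['genre_top'] == 'Rock']

# Randomly sample as many rock tracks as there are hip-hop tracks
rock_only = rock_only.sample(n=len(hop_only), random_state=10)

# Concatenate back into a single, balanced data frame
rock_hop_bal = pd.concat([rock_only, hop_only])
print(rock_hop_bal['genre_top'].value_counts())
```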

We’ve now balanced our dataset, but in doing so, we’ve removed a lot of data points that might have been crucial to training our models. Let’s test whether balancing our data reduces the bias towards the “Rock” classification while retaining overall classification performance.

9. Does balancing our dataset reduce model bias?

Note that we have already reduced the size of our dataset and will go forward without applying any dimensionality reduction. In practice, we would consider dimensionality reduction more rigorously when dealing with vastly large datasets and when computation times become prohibitively large.
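A sketch of retraining both models on the balanced data, scaled as before but (per the note above) without PCA; it assumes the `rock_hop_bal` data frame from the previous sketch:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Re-derive features and labels from the balanced data frame,
# scaling as before but skipping dimensionality reduction this time
features = rock_hop_bal.drop(columns=['genre_top', 'track_id'])
labels = rock_hop_bal['genre_top']
scaled_features = StandardScaler().fit_transform(features)

train_features, test_features, train_labels, test_labels = train_test_split(
    scaled_features, labels, random_state=10)

# Retrain both models and inspect the per-class metrics again
tree = DecisionTreeClassifier(random_state=10).fit(train_features, train_labels)
logreg = LogisticRegression(random_state=10).fit(train_features, train_labels)

print("Decision Tree:\n", classification_report(test_labels, tree.predict(test_features)))
print("Logistic Regression:\n", classification_report(test_labels, logreg.predict(test_features)))
```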

Success! Balancing our data has removed the bias towards the more prevalent class. To get a good sense of how well our models are actually performing, we can apply what’s called cross-validation (CV). This step allows us to compare models in a more rigorous fashion.

10. Using cross-validation to evaluate our models

Since the way our data is split into train and test sets can impact model performance, CV attempts to split the data multiple ways and test the model on each of the splits. Although there are many different CV methods, all with their own advantages and disadvantages, we will use what’s known as K-fold cross-validation here. K-fold first splits the data into K different, equally sized subsets. Then, it iteratively uses each subset as a test set while using the remainder of the data as train sets. Finally, we can then aggregate the results from each fold for a final model performance score.
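A sketch using scikit-learn’s KFold and cross_val_score on the balanced, scaled data from the previous sketch; the choice of 10 folds is an assumption:

```python
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# 10-fold cross-validation on the balanced, scaled data
kf = KFold(n_splits=10, shuffle=True, random_state=10)

tree_scores = cross_val_score(DecisionTreeClassifier(random_state=10),
                              scaled_features, labels, cv=kf)
logit_scores = cross_val_score(LogisticRegression(random_state=10),
                               scaled_features, labels, cv=kf)

# Aggregate the per-fold accuracies into a single score per model
print("Decision Tree:", tree_scores.mean())
print("Logistic Regression:", logit_scores.mean())
```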

Now that we have performed k-fold cross-validation on our dataset, we can be reasonably confident that our models will correctly classify roughly 72% of future unseen data points.

Source: https://www.datacamp.com/projects/449
