Feature Engineering Examples: Binning Categorical Features

How to use NumPy or Pandas to quickly bin categorical features

Max Steele (they/them)
Towards Data Science


Working with categorical data for machine learning (ML) purposes can sometimes present tricky issues. Ultimately these features need to be numerically encoded in some way so that an ML algorithm can actually work with them.

You’ll also want to consider additional methods for getting your categorical features ready for modeling. For example, your model performance may benefit from binning categorical features. This essentially means lumping multiple categories together into a single category. By applying domain knowledge, you may be able to engineer new categories and features that better represent the structure of your data.

In this post, we’ll briefly cover why binning categorical features can be beneficial. Then we’ll walk through three different methods for binning categorical features with specific examples using NumPy and Pandas.

Photo by Efe Kurnaz on Unsplash

Why Bin Categories?

With categorical features, you may encounter problems with rare labels, categories/groups that are extremely uncommon within your dataset. This issue is often related to features having high cardinality — in other words, many different categories.

Having too many categories, and especially rare categories, leads to a noisy dataset. It can be difficult for an ML algorithm to cut through this noise and learn from the more meaningful signals in the data.

High cardinality can also exacerbate the curse of dimensionality if you choose to one-hot encode your categorical features. If the original variable has 50 different categories, you’re basically adding 49 columns to your dataset.

Having too many categories can also lead to issues when training and testing your model. It’s completely possible that a category will show up in the test set, but not in the training set. Your model would have no idea how to handle that category because it has never “seen” it before.

One way to address these problems is by engineering new features that have fewer categories. This can be accomplished through binning (grouping) multiple categories into a single category.

In the following examples, we’ll be exploring and engineering features from a dataset with information about voter demographics and participation. I’ve selected 3 categorical variables to work with:

  1. party_cd: a registered voter’s political party affiliation
  2. voting_method: how a registered voter cast their ballot in the election
  3. birth_state: the U.S. state or territory where a registered voter was born
Screenshot of first 5 rows of DataFrame — Image by author

If you want to start applying these methods to your own projects, you’ll just need to make sure you have both NumPy and Pandas installed, then import both.
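The conventional aliases for both imports look like this:

```python
# Standard aliases used throughout the examples in this post
import numpy as np
import pandas as pd
```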

Using np.where() to Bin Categories

First, let’s check out why I chose party_cd. The image below shows how many individual voters belong to each political party.

Seaborn countplot showing distribution of voters by political party — Image by author

There are so few registered Libertarians, Constitutionalists, and members of the Green Party that we can barely see them on the graph. These would be good examples of rare labels. For the purposes of this post, we’ll define rare labels as those that make up less than 5% of observations. This is a common threshold for defining rare labels, but ultimately that’s up to your discretion.
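One quick way to surface rare labels against a 5% threshold is Pandas’ value_counts() with normalize=True. This sketch uses a made-up stand-in for the voter DataFrame, so the counts are illustrative only:

```python
import pandas as pd

# Hypothetical stand-in for the voter data used in this post
df = pd.DataFrame(
    {"party_cd": ["DEM"] * 60 + ["REP"] * 35 + ["LIB"] * 3 + ["GRE"] * 2}
)

# Share of observations in each category
freqs = df["party_cd"].value_counts(normalize=True)

# Categories below the 5% threshold count as rare labels
rare_labels = freqs[freqs < 0.05].index.tolist()
print(rare_labels)  # ['LIB', 'GRE']
```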

Let’s look at a breakdown of the actual numbers:

Raw count and percentage of registered voters belonging to each party — Image by author

Those three categories each make up far less than 5% of the population. Even if we lumped them all together into a single category, that new category would still represent less than 1% of voters.

“REP” and “DEM” represent the two major political parties, whereas “UNA” represents voters that registered as unaffiliated with a political party. So here, it could make sense to lump our three rare labels into that unaffiliated group, leaving us with three categories: one for each of the two major parties, and a third representing individuals who chose not to align with either major party.

This can be accomplished very easily with np.where() which takes 3 arguments:

  1. a condition
  2. what to return if the condition is met
  3. what to return if the condition is not met

The following code creates a new feature, party_grp, from the original party_cd variable using np.where():
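The original code isn’t embedded here, but a minimal reconstruction (assuming the DataFrame is named df) could look like this:

```python
import numpy as np
import pandas as pd

# Small hypothetical sample of the original party codes
df = pd.DataFrame({"party_cd": ["DEM", "REP", "UNA", "LIB", "GRE", "CST"]})

# Keep the two major parties (title-cased); bin everything else as "Other"
df["party_grp"] = np.where(
    df["party_cd"].isin(["REP", "DEM"]),  # the condition
    df["party_cd"].str.title(),           # returned if condition is met
    "Other",                              # returned if it is not
)
print(df["party_grp"].tolist())
# ['Dem', 'Rep', 'Other', 'Other', 'Other', 'Other']
```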

The condition it checks is whether or not the original value is in the list ['REP', 'DEM']. If it is, then np.where() simply returns the original party code (though I’ve converted it to title case because I personally hate looking at things written in all caps). If the original party code is not in that list, np.where() returns “Other”. Our newly engineered party_grp feature is now much more balanced without any rare labels:

Raw count and percentage of registered voters belonging to each party — Image by author

Mapping Categories into New Groups with map()

Next up, let’s take a look at the distribution of voting_method:

Seaborn countplot showing distribution of voters by voting method — Image by author

Not the prettiest of graphs, but we get the picture. We have 8 different categories of voting method. I would hazard a guess that half of them meet our definition of rare labels.

Raw count and percentage of registered voters casting a ballot by each method — Image by author

Yup! Four of our categories are rare labels. Now we could just group them all into an “Other” category and call it a day, but this may not be the most appropriate method.

Based on research I did into how these methods are coded, I know that “Absentee” means someone voted early. So we could group any “Absentee” method into an “Early” category, group “In-Person” and “Curbside” into an “Election Day” category, leave “No Vote” as its own category, and group “Provisional” and “Transfer” into an “Other” category.

The following code accomplishes this by first defining a dictionary using the original voting_method categories as keys. The value for each key is the new category we actually want.
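Here is a reconstruction of that dictionary-and-map pattern. The category strings below are made-up stand-ins; the real dataset’s eight voting_method codes may be spelled differently:

```python
import pandas as pd

# Hypothetical sample of the original voting_method labels
df = pd.DataFrame({"voting_method": [
    "ABSENTEE ONESTOP", "ABSENTEE BY MAIL", "IN-PERSON",
    "CURBSIDE", "NO VOTE", "PROVISIONAL", "TRANSFER",
]})

# Keys are the original categories; values are the new, binned groups
vote_method_map = {
    "ABSENTEE ONESTOP": "Early",
    "ABSENTEE BY MAIL": "Early",
    "IN-PERSON": "Election Day",
    "CURBSIDE": "Election Day",
    "NO VOTE": "No Vote",
    "PROVISIONAL": "Other",
    "TRANSFER": "Other",
}

# Translate each original value to its new group
df["vote_method_cat"] = df["voting_method"].map(vote_method_map)
```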

That last line creates a new column, vote_method_cat, based on the original values in the voting_method column. It does so by applying Pandas’ map() method to the original column, and feeding in our vote_method_map to translate from key to corresponding value.

Raw count and percentage of registered voters casting a ballot by each method — Image by author

Now we’ve gotten rid of all but one of our rare labels. Ultimately I chose to drop those 733 “Other” votes. Voting method was actually the target variable I was trying to predict and what I was really interested in was how people chose to vote. Provisional and transfer ballots are more reflective of the process and regulations surrounding voting, but my question was specifically about a voter’s active choice.

So beyond engineering predictive features to better represent the underlying structure of your data, you can also consider how best to represent your target variable relative to your specific question.

Applying a Custom Function with apply()

Finally, we’re going to work on binning birth_state. This variable has 57 categories: one for each state, one for missing information, one for each U.S. territory, and a final category for individuals born outside the United States.

So the graph looks comically terrible:

Seaborn countplot showing distribution of voters by where they were born — Image by author

If you ever see a graph like this while exploring categorical features, that’s a good indication you should consider binning that variable if you intend to use it as a feature in your model.

Below is the breakdown of the 15 most common categories of birth_state:

Raw count and percentage of registered voters by where they were born — Image by author

North Carolina is the most common state, which makes sense since this data is for voters in a specific county in NC. Then we see lots of missing values. New Yorkers and people born outside the U.S. also make up a decent portion of the population. The remaining 53 categories are rare labels based on our definition and will introduce a lot of noise into our modeling efforts.

Let’s group states by U.S. Census region (Northeast, South, Midwest, West). We’ll also group people born in U.S. territories or outside the country into an “Other” group, and leave “Missing” as its own category.

We’ll do this by defining our own custom function to translate from state to region, then apply that function to our original variable to get our new feature. Here’s one way you could write a function to check each state and return the desired region/category:
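A sketch of such a function, using the standard U.S. Census region groupings (the "Missing" sentinel value is an assumption about how this dataset codes missing birth states):

```python
# U.S. Census regions by two-letter state code (DC grouped with the South)
NORTHEAST = {"CT", "ME", "MA", "NH", "NJ", "NY", "PA", "RI", "VT"}
MIDWEST = {"IL", "IN", "IA", "KS", "MI", "MN", "MO", "NE",
           "ND", "OH", "SD", "WI"}
SOUTH = {"AL", "AR", "DC", "DE", "FL", "GA", "KY", "LA", "MD", "MS",
         "NC", "OK", "SC", "TN", "TX", "VA", "WV"}
WEST = {"AK", "AZ", "CA", "CO", "HI", "ID", "MT", "NV", "NM",
        "OR", "UT", "WA", "WY"}

def state_to_region(state):
    """Translate a birth_state value into a Census region or catch-all."""
    if state in NORTHEAST:
        return "Northeast"
    if state in MIDWEST:
        return "Midwest"
    if state in SOUTH:
        return "South"
    if state in WEST:
        return "West"
    if state == "Missing":  # assumed sentinel for missing values
        return "Missing"
    return "Other"  # U.S. territories and born outside the country
```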

And now to use Pandas’ apply() method to create our new feature:
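That apply() call might look like the following. To keep this sketch self-contained, it uses a trimmed stand-in for the full state-to-region function, and "OC" is a hypothetical out-of-country code:

```python
import pandas as pd

# Trimmed stand-in for the full state_to_region function
def state_to_region(state):
    regions = {"NC": "South", "NY": "Northeast", "CA": "West", "OH": "Midwest"}
    if state == "Missing":
        return "Missing"
    return regions.get(state, "Other")

df = pd.DataFrame({"birth_state": ["NC", "NY", "Missing", "OC", "CA"]})

# apply() runs the function on every value in the original column
df["birth_region"] = df["birth_state"].apply(state_to_region)
print(df["birth_region"].tolist())
# ['South', 'Northeast', 'Missing', 'Other', 'West']
```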

Raw count and percentage of registered voters by where they were born — Image by author

Much better! We’ve gone from 57 total categories with 53 rare labels to only 6 categories that still hold a lot of meaning and only one of them meets our definition of a rare label. We could consider additional grouping, but you get the point.

To Recap

We covered:

  • What it means to bin categorical features
  • Why and when you might want to bin categorical features
  • 3 methods for binning categorical features (np.where(), Pandas map(), custom function with Pandas apply())

I hope you found this informative and are able to apply something you learned to your own work. Thanks for reading!
