
Data Leakage in Machine Learning

How to detect and avoid data leakage

Photo by Drew Beamer on Unsplash

Data leakage occurs when the data used in the training process contains information about what the model is trying to predict. It sounds like "cheating", but since we are usually not aware of it, it is better to call it "leakage". Data leakage is a serious and widespread problem in data mining and machine learning that needs to be handled carefully to obtain a robust and generalized predictive model.

There are different causes of data leakage. Some of them are very obvious, but some are harder to spot at first glance. In this post, I will explain the causes of data leakage, how it misleads us, and the ways to detect and avoid it.

You probably know them already, but I just want to mention two terms that I will use often in this post:

  • Target variable: What the model is trying to predict
  • Features: The data used by the model to predict the target variable

Data Leakage Examples

Obvious cases

The most obvious cause of data leakage is including the target variable as a feature, which completely defeats the purpose of "prediction". This usually happens by mistake, so make sure the target variable is kept separate from the features.

Another common cause of data leakage is mixing test data into the training data. It is very important to test models on new, previously unseen data, and including test data in the training process defeats this purpose.

These two cases are not very likely to occur because they are easy to spot. The more dangerous causes are the ones that sneak in unnoticed.

Giveaway features

Giveaway features are features that expose information about the target variable but would not be available at prediction time, after the model is deployed.

  • Example: Suppose we are building a model to predict a certain medical condition. A feature indicating whether a patient had surgery related to that condition causes data leakage and should never be included in the training data. Having had such a surgery is highly predictive of the condition and would probably not be available for new cases. If we already know that a patient had surgery related to a medical condition, we may not even need a predictive model in the first place.
  • Example: Consider a model that predicts whether a user will stay on a website. Including features that expose information about future visits will cause data leakage. We should only use features about the current session, because information about future sessions is not normally available when the model is deployed.

Leakage during preprocessing

There are many preprocessing steps used to explore or clean the data, for example:

  • Finding parameters for normalizing or rescaling
  • Finding the min/max values of a feature
  • Using the distribution of a feature to estimate missing values
  • Removing outliers

These steps should be done using only the training set. If we use the entire dataset to perform these operations, data leakage may occur: the model learns not only from the training set but also from the test set, and the test set should remain new, previously unseen data.
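As a minimal sketch of the difference (assuming scikit-learn and some placeholder data), the scaler's parameters should be computed on the training split only and then applied to both splits:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1000, 5)         # placeholder feature matrix
y = np.random.randint(0, 2, 1000)   # placeholder binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Leaky: the scaler sees the test rows, so test statistics leak into training
# scaler = StandardScaler().fit(X)

# Correct: fit the scaler on the training split only, then transform both splits
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```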

When dealing with time series data, we should pay even more attention to data leakage. For example, if we somehow use data from the future when computing current features or making predictions, we will very likely end up with a leaky model.


How to Detect and Avoid Data Leakage

As a general rule, if the model seems too good to be true, we should be suspicious. The model might be memorizing the feature-target relations instead of learning and generalizing.

During exploratory data analysis, we may detect features that are very highly correlated with the target variable. Of course, some features are more correlated than others, but a surprisingly high correlation needs to be checked and handled carefully. We should pay close attention to those features.
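One quick way to surface such features is to check their correlation with the target. A small sketch with pandas, using a hypothetical dataset in which one feature is almost a copy of the target:

```python
import numpy as np
import pandas as pd

# Hypothetical dataset: 'leaky_feature' is almost a copy of the target
rng = np.random.default_rng(42)
df = pd.DataFrame({"feature_a": rng.normal(size=500),
                   "target": rng.integers(0, 2, size=500)})
df["leaky_feature"] = df["target"] + rng.normal(scale=0.01, size=500)

# Absolute correlation of every feature with the target
correlations = df.corr()["target"].drop("target").abs()
print(correlations.sort_values(ascending=False))
# A near-perfect correlation (here, leaky_feature) deserves a closer look
```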

After the model is trained, features with very high weights or importances also deserve close attention. They might be leaky features.
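For example, we can rank feature importances after training and inspect anything that dominates. A sketch using a scikit-learn random forest on placeholder data:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Placeholder data; in practice use your own training set and feature names
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(random_state=42).fit(X, y)

importances = pd.Series(model.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False))
# If a single feature carries most of the importance, check whether it would
# really be available at prediction time
```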

In order to minimize or avoid leakage, we should try to set aside a validation set in addition to the training and test sets if possible. The validation set can be used as a final check that mimics the real-life scenario.
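A simple way to do this with scikit-learn is to split twice; the ratios below are arbitrary and the data is a placeholder:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)  # placeholder data

# First split off the test set, then split the remainder into train/validation
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=42)
# Result: 60% train, 20% validation, 20% test
```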

When working with time series data, a cutoff value on time can be very useful because it prevents us from using any information from after the time of prediction.
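A sketch of such a cutoff with pandas, using a hypothetical timestamp column and cutoff date:

```python
import numpy as np
import pandas as pd

# Hypothetical time-indexed data
df = pd.DataFrame({"timestamp": pd.date_range("2022-01-01", periods=365, freq="D"),
                   "value": np.arange(365)})

cutoff = pd.Timestamp("2022-10-01")  # hypothetical prediction time
train = df[df["timestamp"] < cutoff]
test = df[df["timestamp"] >= cutoff]
# Any feature engineering for the training rows should use only data
# observed before the cutoff
```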

It is common to use cross-validation in the training process, especially when data is limited. Cross-validation splits the data into k folds and iterates over the dataset k times, each time using k-1 folds for training and 1 fold for testing. Its advantage is that the entire dataset is used for both training and testing. However, if you suspect data leakage, it is better to compute scaling/normalization parameters on each fold separately.
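One common way to do this in scikit-learn is to wrap the scaler and the model in a Pipeline and pass it to cross_val_score, so the scaler is re-fit inside each fold; a minimal sketch on placeholder data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, random_state=42)  # placeholder data

# The scaler is re-fit on the k-1 training folds at every iteration,
# so the held-out fold never influences the scaling parameters
pipeline = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(pipeline, X, y, cv=5)
print(scores.mean())
```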


Conclusion

Data leakage is a widespread issue in predictive analytics. We train models on known data and expect them to make predictions on previously unseen data. For a model to perform well on those predictions, it must generalize well. Data leakage prevents a model from generalizing well and thus causes false assumptions about its performance. To obtain a robust and generalized predictive model, we should pay close attention to detecting and avoiding data leakage.


Thank you for reading. Please let me know if you have any feedback.

