
If your models are underperforming, build better datasets – Sarem Seitz

It's not your model's fault if your data is poor

Photo by Johannes Plenio on Unsplash

Working with data can be hard. You might spend hours on your model or analysis without getting any reasonable results. At that point, it is tempting to blame your performance issues on the wrong choice of method. After all, with so many algorithms out there, there must be one candidate that will solve your problem, right?

More often than not though, the underlying issue is the data itself. In fact, you can often get quite far with very simple models as long as you have a good dataset to work with. Thus, in this article, we will explore four ways to improve the latter.

First, we will look at the rather unsurprising approach of ‘just’ increasing the amount of available data. While this is indeed an obvious solution, there are some interesting considerations that we will explore. Second, we will consider ways to improve the quality of a dataset – i.e. how to build ‘better’ datasets in the narrower sense.

How to build better datasets – a simplistic overview (Image by author)

Wider data – the blessing of dimensionality

Is there a way to improve a dataset so much that a simple if-else rule would outperform a sophisticated Deep Learning model? The answer is ‘yes’. Consider the following, single-dimensional, binary classification problem:

Toy classification problem – can you find a rule that differentiates blue (class 0) and red dots (class 1)? (Image by author)

Ask yourself if the best model at your disposal could perform reasonably well here. Unfortunately, the conditional class distribution appears to be completely random. Even with state-of-the-art models and high-end hardware you would not be able to build a reasonably predictive solution.

What if I told you that I created the dataset without using any random values? Here is the resolution:

Accounting for a second variable suddenly makes the problem trivial (Image by author)

Leaving out a crucial second variable turned a simple predictive problem into a hard one. Now, what are the practical consequences of this trivial example?

While your current features might appear sound, you could still be missing other, less obvious but nonetheless crucial ones. For example, forecasting product sales might be tough without seasonality and weekend features. Also, as this toy example shows, two or more features might only be predictive in interaction with each other.
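To make this concrete, here is a minimal sketch of such a dataset – my own toy construction using NumPy and scikit-learn, not code from the original article. A single feature looks like pure noise, yet two features together make the problem almost trivially separable:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Two features; the class label depends only on their interaction.
n = 1_000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = (x1 * x2 > 0).astype(int)  # XOR-like pattern: useless in 1D, trivial in 2D

# Using x1 alone: accuracy hovers around chance level (~0.5).
acc_1d = cross_val_score(LogisticRegression(), x1.reshape(-1, 1), y, cv=5).mean()

# Using both features plus their interaction term: near-perfect accuracy.
X_2d = np.column_stack([x1, x2, x1 * x2])
acc_2d = cross_val_score(LogisticRegression(), X_2d, y, cv=5).mean()

print(f"accuracy with x1 only: {acc_1d:.2f}")
print(f"accuracy with x1, x2 and their interaction: {acc_2d:.2f}")
```

Neither coordinate carries any signal on its own; only the interaction term separates the classes.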

Conversely, is it always a good idea to add more and more features? Of course not. As you probably know, a big part of predictive modelling is variable selection. Including ever more candidate features just for the sake of it will certainly make that step much more tedious.

If the extra feature looks promising though, it could make all the difference. Always question whether the data you are handed is sufficient to solve your problem.

Why are more dimensions better? Some theory

As a simplistic example, consider three variables, X,Y,Z, with Z being the target variable. Also, let all three variables follow a multivariate Gaussian distribution. We have for the mean vector and covariance matrix:

(Image by author)

Applying the law for conditional Gaussian variance twice, we get:

(Image by author)

This implies that using more explanatory variables reduces predictive uncertainty under two conditions:

  1. Relevancy: All explanatory variables are correlated with the target
  2. Non-redundancy: The explanatory variables are not highly correlated with each other

Also, as these lecture slides show for Linear Regression, you need to be aware of the curse of dimensionality. A considerable increase of model complexity requires either more data-points or stronger regularization. Otherwise, you might end up with a worse model than before.
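To see the variance reduction numerically, here is a small simulation with a covariance matrix of my own choosing (the author's exact numbers only appear in the images above). X and Y are both correlated with the target Z but only mildly with each other:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Hypothetical covariance: X and Y are relevant (correlated with Z)
# and non-redundant (only mildly correlated with each other).
cov = np.array([
    [1.0, 0.2, 0.6],   # X
    [0.2, 1.0, 0.6],   # Y
    [0.6, 0.6, 1.0],   # Z (target)
])
data = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=100_000)
X, Y, Z = data[:, 0], data[:, 1], data[:, 2]

def residual_variance(features, target):
    """Variance of the target left unexplained by a linear model."""
    model = LinearRegression().fit(features, target)
    return np.var(target - model.predict(features))

print("Var(Z):        ", round(np.var(Z), 3))                              # ~1.0
print("Var(Z | X):    ", round(residual_variance(X.reshape(-1, 1), Z), 3))  # ~0.64
print("Var(Z | X, Y): ", round(residual_variance(data[:, :2], Z), 3))       # ~0.40
```

Adding the second relevant, non-redundant variable cuts the remaining predictive uncertainty noticeably.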

Where should I expect missing columns?

  • Incomplete information is everywhere: You can almost always find information gaps in your data if you think long enough. Unfortunately, collecting additional data is rarely trivial and sometimes impossible. Try to find a sweet spot between too little information and too much effort or cost.
  • Image data: Here, the equivalent of unobserved columns are unobserved pixels. Higher resolution images might be the answer. However, be aware of the curse of dimensionality.

How to get more dimensions – and how to get the right ones:

  • Work closely with domain experts or become one yourself: Subject matter experts can often pinpoint exactly what information is necessary to model a given problem.
  • Be creative with regards to alternative data: Wall Street can be a motivating example when it comes to the creative usage of alternative datasets. Some hedge funds, for example, are known to use satellite images of parking lots to forecast quarterly sales figures of retail companies.
  • Increase granularity: As mentioned in the images example, using data at a more granular level can add crucial information to your model. Consider BERT and most other modern NLP algorithms, which often operate on word-pieces rather than full words as their inputs – see the tokenizer sketch below.
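As a quick illustration of the granularity point, the sketch below (my addition, assuming the Hugging Face transformers library is installed) shows how BERT's WordPiece tokenizer breaks text into sub-word units:

```python
# Requires: pip install transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Rare or long words are split into sub-word pieces (continuation pieces
# start with '##'), so the model sees more granular inputs than whole words.
print(tokenizer.tokenize("Forecasting seasonality is tricky"))
```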

Longer data – if you can’t connect the dots, how could your model?

As anyone working with data will know, it is always better to have more data-points than fewer. Additional data storage is cheap in most situations. Thus, it is better to be in a position where you can exclude data from your model than to not have that data in the first place.

Which function best describes the data? With only two data-points, it is hard to tell – even for the most advanced AI. (Image by author)
With 10 data-points, things look much clearer. In high dimensions, you need considerably more observations for a similar effect. (Image by author)

Let us look at some theoretical considerations:

Why more data is better – from a mean-squared error perspective

Consider the core concept of modern Machine Learning, empirical risk minimization. We have a loss function between actual target and predicted target:

A common choice is the square loss

(Image by author)

Ideally, we want to choose an optimal candidate model that minimizes the expected loss (a.k.a. risk) over the data-generating distribution:

(Image by author)

As the data-generating distribution is usually unknown, we need to estimate actual risk through the empirical risk:

(Image by author)

With the square loss from before, we obtain the popular mean-squared error objective:

(Image by author)

In the general case, the empirical risk estimator has the following statistical properties:

(Image by author)

In plain English, the empirical risk estimator is

  • Unbiased – on average, optimizing for empirical risk is equivalent to optimizing for the true risk
  • Consistent – with increasing sample size, large deviations between empirical risk and true risk become less likely

As a caveat, a large sample size only guarantees that you CAN find the risk-optimal model more reliably. If your search algorithm is bad, you might still end up worse off than with fewer samples but a good search strategy. The problem of multiple local optima in Deep Learning is an example thereof.
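Below is a hedged Monte Carlo sketch of the consistency property, using a toy data-generating process of my own rather than anything from the article: for a fixed candidate model, the empirical risk computed on larger samples scatters less around the true risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# True data-generating process: y = 2x + Gaussian noise with variance 1.
def sample(n):
    x = rng.normal(size=n)
    y = 2 * x + rng.normal(size=n)
    return x, y

# Fixed candidate model (deliberately the true coefficient,
# so the true risk equals the noise variance, i.e. 1.0).
def model(x):
    return 2 * x

for n in (10, 100, 10_000):
    # Empirical risk = mean squared error on a sample of size n, repeated 200 times.
    risks = [np.mean((y - model(x)) ** 2) for x, y in (sample(n) for _ in range(200))]
    print(f"n={n:>6}: mean empirical risk={np.mean(risks):.3f}, "
          f"std across repetitions={np.std(risks):.3f}")
```

The average empirical risk stays close to 1.0 for every sample size (unbiasedness), while the spread across repetitions shrinks as n grows (consistency).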

Also, theoretically, if the mean or variance of the loss distribution does not exist, any optimization based on empirical risk will be flawed. This can happen if your data is heavy-tailed. Nassim Taleb has some interesting views on this problem in this video.

Where you might lose some observations for your model

  • Sensitive data: There are situations where law or other policies don’t allow access to the full dataset. Federated learning could be a solution in this case.
  • Opt-out or opt-in policies: If your users don’t want to have their data being collected, you can’t do much besides accepting their decision. In that case you have to live with less data and make the best out of it.
  • Data loss or deletion: Ideally, unintended data loss should never happen. Since we don’t live in a perfect world, though, always consider this worst case scenario.

How to get more observations or deal with too few

  • Raise the sample rate for data collection: If possible, try to increase the frequency of data collection – for example if you are working with sensor data. You can always switch to lower sample rates later on but never vice-versa.
  • Decrease the dimensionality of your data: Usually, the more complex your model the more data you need. If you have to work with fewer data-points, decreasing the dimensionality of your data could improve model accuracy.
  • Use model regularization and prior knowledge: While regularization is commonly taught, it goes much deeper than just using an L1/L2 norm. Bayesian Machine Learning, for example, is a mathematically sound framework for regularization via prior knowledge and can go far beyond standard regularization techniques – see the sketch right below this list.
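The sketch below (all dimensions and noise levels are made up for illustration) compares ordinary least squares with a simple L2-regularized model when observations are scarce relative to the number of features:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Few observations, comparatively many features - a recipe for overfitting.
n_train, n_test, n_features = 30, 1_000, 20
beta = rng.normal(size=n_features)

def make_data(n):
    X = rng.normal(size=(n, n_features))
    y = X @ beta + rng.normal(scale=2.0, size=n)
    return X, y

X_train, y_train = make_data(n_train)
X_test, y_test = make_data(n_test)

for name, model in [("OLS", LinearRegression()), ("Ridge (alpha=10)", Ridge(alpha=10.0))]:
    model.fit(X_train, y_train)
    print(name, "test MSE:", round(mean_squared_error(y_test, model.predict(X_test)), 2))
```

With only 30 data-points for 20 features, the regularized model typically generalizes noticeably better than plain least squares.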

Less noisy data – give me a signal

When it comes to noise, we need to distinguish between two types:

  • Predictive noise: A better term would be ‘randomness’. While you might observe a target variable without distortion, you cannot predict it with certainty. Including more predictive features can reduce this type of noise.
  • Perturbing noise: Also termed measurement error. This is the kind of noise we want to discuss in this section. Instead of observing the variable itself, we observe some distorted version thereof. As an example, think of collecting human motion data in the midst of an earthquake.

As you can probably imagine, noisy data should be avoided or the noise at least be minimized. Below is a simple Linear Regression example of what happens to prediction quality under noise.

We start with the following – noiseless – data generating model:

(Image by author)

Instead of the raw target and feature, we observe noisy versions thereof:

(Image by author)

Now, we visualize two scenarios:

  1. Truly random (zero-mean) noise: The errors are ‘cancelled out’ on average. This might happen when you take images with a camera shaken at random.
  2. Systematic (non zero-mean) noise: Your observations are distorted on average. Stains on a camera lens could cause this for image data.

Let us compare the effects of random and systematic measurement error on the regression example:

For zero-mean Gaussian noise, predictions get less accurate (Image by author)
If the measurement error is systematic, predictions become worse even faster (Image by author)

In the non-zero-mean noise scenario, model distortion is considerably worse than in the zero-mean case. For real-world data, the consequences might be more or less severe. Either way, noise effects will definitely be harder to analyze than under lab conditions.
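For concreteness, here is a simplified simulation of my own, with a toy linear model and measurement error on the target only: zero-mean noise leaves the fitted parameters roughly intact, while systematic noise biases them.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Toy noiseless data-generating model (my own choice): y = 1 + 3x
n = 2_000
x = rng.uniform(0, 1, n)
y_clean = 1 + 3 * x

def fit_under_noise(noise_mean, noise_std=0.5):
    """Fit a linear model on a target polluted by Gaussian measurement error."""
    y_obs = y_clean + rng.normal(loc=noise_mean, scale=noise_std, size=n)
    model = LinearRegression().fit(x.reshape(-1, 1), y_obs)
    return model.intercept_, model.coef_[0]

print("true parameters:             intercept=1.00, slope=3.00")
print("zero-mean noise:             intercept=%.2f, slope=%.2f" % fit_under_noise(0.0))
print("systematic noise (mean=0.5): intercept=%.2f, slope=%.2f" % fit_under_noise(0.5))
```

The case where the input feature is also noisy is analyzed in the next subsection.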

An in-depth view on Gaussian data with Gaussian noise

Let us consider another simplistic, bi-variate Normal example with mean and covariance as follows:

(Image by author)

If we use linear regression, we get – for arbitrarily many data-points – the following parameters:

(Image by author)

Now, we pollute both variables with independent Gaussian noise:

(Image by author)

This results in a – distorted – bi-variate Gaussian distribution:

(Image by author)

This allows us to derive the regression parameters under measurement noise:

(Image by author)

What these formulas imply

  1. Zero-mean noise in the target: If only the target is corrupted by zero-mean noise, your parameters will still be correct if the sample size is large enough. With increasing noise variance, you might need a larger sample size.
  2. Zero-mean noise in the input: In this case, the predictive power of the input feature is lessened in relation to the amount of noise. Depending on the severity, noise reduction could thus turn a formerly useless feature into a highly predictive one.
  3. Non-zero mean noise: Your parameter estimates and thus your predictions will be biased. You should avoid such systematic measurement error at all costs.

Of course, noise in the real world is generally much more complex. Noise could vary over time or pollute your variables in only one direction. The above example should give you a rough idea of why it is important to limit the impact of measurement error.
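A quick simulation of point 2 above (attenuation from input noise), using variances I picked arbitrarily rather than the article's values:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Bivariate Gaussian toy data: Var(X) = 1, true slope = 0.8
n = 200_000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)

def fitted_slope(x_obs, y_obs):
    return LinearRegression().fit(x_obs.reshape(-1, 1), y_obs).coef_[0]

noise = lambda scale: rng.normal(scale=scale, size=n)

print("clean data:           slope=%.3f" % fitted_slope(x, y))
# Zero-mean noise in the target only leaves the slope (asymptotically) unbiased.
print("noise in target only: slope=%.3f" % fitted_slope(x, y + noise(1.0)))
# Zero-mean noise in the input shrinks the slope by Var(X) / (Var(X) + Var(noise)):
# here 1 / (1 + 1) = 0.5, so we expect roughly 0.4.
print("noise in input only:  slope=%.3f" % fitted_slope(x + noise(1.0), y))
```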

Where can I expect noisy data?

  • Sensor and image data: Polluted sensors or camera lenses can easily introduce unnecessary noise into your data. Also, noise might hint at technical limitations or defects of your data collection device. You might have to get a better one if your current level of measurement error is intolerable.
  • Textual data: Social media data can be particularly noisy – user mentions on Twitter or URLs within text easily bias NLP tools.

How to reduce noise from two angles

  • Ex-ante denoising: Ideally, you should prevent measurement error before it can even enter your datasets. For computer vision problems, this might be as simple as keeping your camera lenses clean.
  • Ex-post denoising: If you cannot avoid measurement error from happening, you need to resort to the countless methods for data denoising. A quick Google search is a good starting point.

Better sampled data – staying true to the data generating process

Now we get to the most subtle form of potential dataset improvements. While the above issues are obvious or can at least be anticipated, this might not be the case here. Incorrectly sampled data could look totally fine yet still result in erroneous models or wrong conclusions.

In an ideal world, we could draw truly random samples from the underlying data-generating distribution. In reality, however, perfect random sampling is close to impossible. This happens, at the very latest, when you predict future data based on past data.

In that case, the data generating distribution stretches arbitrarily far into the future. However, since you cannot collect data from the future (yet), your model will be biased towards the past. A model that predicts buyer preferences well today might struggle to deal with future shifts in consumer behaviour.

This is the infamous distributional or domain shift problem. No matter how superior your models are at the moment, you could see their performance vanish at any point in time. Luckily, it is also a well known problem and there exist many approaches to mitigate it to some extent.

Keep in mind, though, that domain shift is not the only instance of sampling bias. The distribution might be perfectly stable but your sampling process itself could still be flawed.

Empirical risk minimization with a distorted sampling distribution

Consider again the statistical properties of the empirical risk estimator:

(Image by author)

A crucial detail in these formulas is p(x,y). Unless your samples come from the true data-generating process, your risk estimate will be flawed. If the sampling distribution is different, say q(x,y), there is no way to guarantee that we are optimizing for the correct risk anymore. This can be exemplified in a simple thought experiment:

A concrete example of a biased sampling process without domain shift

Imagine you had a camera in your garden and want to classify animals that are playing inside of it. Thus, you aim to build and train some convolutional neural network classifier.

Presume that there are four possible types of animals: cats, dogs, rabbits, and horses. For simplicity, we also assume that each one is equally likely to appear in your garden:

Distribution of the four animals that might be playing around in your garden. Each animal is equally likely to occur. (Image by author; see additional sources at the end)

Being a true enthusiast, you spend the next few days taking a lot of pictures of the respective animals. However, since you were primarily focusing on cats, the number of cat pictures collected turns out to be much larger. The distribution of pictures in your sample might look like this:

Distribution of the four animals based on your cat-biased photography skills (Image by author; see additional sources at the end)

Now, we have a divergence between the distribution of animals in the garden and the distribution of animal images in the sample. Due to a biased sampling process, the chance of cats ending up in the training set is much larger. The sample was not taken fully at random:

The big ‘picture’ of what went wrong – there is a mismatch between the distributions of your training set and of your actual evaluation set. (Image by author; see additional sources at the end)

To simplify things further, consider you had only two candidate computer vision models. Of course, in reality you usually have an infinite number of candidate models. For Neural Networks, for example, each possible parameter configuration is a separate candidate. Your search algorithm for the optimal model is typically gradient descent.

Anyway, presume that our two candidates have the following properties:

Candidate model 1 classifies all images correctly, except for cat images (Image by author; additional sources at the end)
Candidate model 2 mis-classifies all images from dogs and horses (Image by author; additional sources at the end)

The effect of a biased sampling process

In order to select the best model out of the two, you apply empirical risk minimization. Let’s say you want to minimize a zero-one loss:

(Image by author)

Now we can calculate the expected empirical risk of candidate model 1 given the natural distribution:

(Image by author)

If we do this for both candidates and distributions, we obtain the following:

(Image by author)

Clearly, candidate 1 is preferable once we want to use our model ‘in production’. Due to our biased sampling process, however, the inferior candidate model 2 appears more attractive. According to the above formulas, variance of the risk estimator decreases with larger samples. As a result, more data-points will actually increase the chance of selecting the wrong model.
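The exact figures live in the images above, but the following script reproduces the qualitative conclusion with illustrative class shares and per-class accuracies of my own choosing: candidate 1 wins under the true garden distribution, while the cat-biased sample makes candidate 2 look better.

```python
# Illustrative numbers only - the article's exact figures are in the images.
animals = ["cat", "dog", "rabbit", "horse"]

# True ('garden') distribution: every animal is equally likely.
p_true = {"cat": 0.25, "dog": 0.25, "rabbit": 0.25, "horse": 0.25}
# Cat-biased sampling distribution (hypothetical values).
q_sample = {"cat": 0.70, "dog": 0.10, "rabbit": 0.10, "horse": 0.10}

# Per-class accuracy of the two candidate models.
acc_model_1 = {"cat": 0.0, "dog": 1.0, "rabbit": 1.0, "horse": 1.0}  # misses only cats
acc_model_2 = {"cat": 1.0, "dog": 0.0, "rabbit": 1.0, "horse": 0.0}  # misses dogs and horses

def expected_zero_one_risk(distribution, accuracy):
    """Expected 0-1 loss = sum over classes of P(class) * P(misclassification | class)."""
    return sum(distribution[a] * (1.0 - accuracy[a]) for a in animals)

for name, acc in [("model 1", acc_model_1), ("model 2", acc_model_2)]:
    print(f"{name}: risk under true distribution = {expected_zero_one_risk(p_true, acc):.2f}, "
          f"risk under biased sample = {expected_zero_one_risk(q_sample, acc):.2f}")
```

With these numbers, model 1 has risk 0.25 vs. 0.50 under the true distribution, yet 0.70 vs. 0.20 under the biased sample – empirical risk minimization on the biased data picks the wrong model.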

In practice, things are, of course, much more complex. Even with mildly biased data, you might still end up with a sufficiently powerful model. On the other hand, you can never guarantee that your models won’t suffer from distributional shift over time. This makes the issue of sampling bias a permanent theme that you should always keep in mind.

Where you might encounter biased sampling

  • Data is acquired over time: Again, distributional shift. This is true for practically every Machine Learning dataset. As long as the distributional shift is not too unstable, you can usually handle this in a reasonable manner.
  • Data is systematically altered or deleted: This might be the case when you have users that are opting out via GDPR or related forms. If this process is not independent of your users’ characteristics, expect some bias in your sampling process.

How to mitigate or reduce the impact of biased sampling

  • Monitor your models as closely as possible: If you are following MLOps best practices, you should already be familiar with this point.
  • Include the biasing variable in your model: For example, accounting for timestamp as a separate variable might mitigate the problem of domain shift over time. As long as the pattern of distributional change itself remains constant, this could be a viable solution. Keep in mind, however, that there is still no guarantee for the presumption of constant domain shift.
  • Monitor and optimize the sampling procedure: If you can control the sampling process, ensure that it comes as close to the ideal random sampling as possible.
  • Consider online learning: In order to quickly adapt to a changing distribution, you should update your models as frequently as possible. As a matter of fact, online learning is the fastest way to do so. If you can afford the additional effort of updating models in real-time, you should give this idea a try.
  • Consider re-weighting or re-sampling: If all of the above is not feasible, you could still try methods such as inverse probability weighting – see the sketch right after this list.
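Here is a minimal sketch of the re-weighting idea: every training example gets weight p_true(class) / q_sample(class), passed to scikit-learn via sample_weight. The class shares and data below are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical class shares: how often each class occurs in the wild vs. in our sample.
p_true = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
q_sample = {0: 0.70, 1: 0.10, 2: 0.10, 3: 0.10}

# Toy training data drawn according to the biased sample distribution.
n = 2_000
y_train = rng.choice(list(q_sample), size=n, p=list(q_sample.values()))
X_train = rng.normal(size=(n, 5)) + y_train[:, None]  # features loosely tied to the class

# Inverse probability weights down-weight over-represented classes
# and up-weight under-represented ones.
weights = np.array([p_true[c] / q_sample[c] for c in y_train])

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train, sample_weight=weights)
# The weighted model now targets the true class balance rather than the biased sample.
```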

Conclusion

If you have been reading up to here, you have probably realized that there is always room for improvement when it comes to data. While you cannot spend all day optimizing a single dataset, its quality is nevertheless essential for effective Data Science and Machine Learning. Thus, if your model just doesn’t seem to improve, take a closer look at the inputs.


Image sources

  1. Horse – Photo by Helena Lopes on Unsplash
  2. Dog – Photo by Marliese Streefland on Unsplash
  3. Cat – Photo by Raoul Droog on Unsplash
  4. Rabbit – Photo by Gary Bendig on Unsplash



Originally published at https://sarem-seitz.com on March 31, 2022.

