Regularization in Deep Learning — L1, L2, and Dropout

A Guide to the Theory and Practice of the Most Important Regularization Techniques in Deep Learning

Artem Oppermann
Towards Data Science

Regularization is a set of techniques that can prevent overfitting in neural networks and thus improve the accuracy of a Deep Learning model when it faces completely new data from the problem domain. In this article, we will address the most popular regularization techniques: L1, L2, and dropout.
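As a preview of the practical side, here is a minimal Keras sketch of how these three techniques typically appear in a model. The layer sizes, regularization strengths, and dropout rate are illustrative assumptions for demonstration only, not recommended values:

```python
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    # L2 penalty (weight decay) on this layer's weight matrix
    layers.Dense(128, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l2(1e-4)),
    # Dropout randomly disables 50% of the activations during training
    layers.Dropout(0.5),
    # L1 penalty pushes individual weights toward exactly zero (sparsity)
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1(1e-5)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```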

Table of Contents

  1. Recap: Overfitting
  2. What is Regularization?
  3. L2 Regularization
  4. L1 Regularization
  5. Why do L1 and L2 Regularizations work?
  6. Dropout
  7. Take-Home-Message

1. Recap: Overfitting

One of the most important aspects of training neural networks is avoiding overfitting. We have addressed the issue of overfitting in more detail in a separate article.

However, let us do a quick recap: Overfitting refers to the phenomenon where a neural network models the training data very well but fails when it sees new data from the same problem domain. Overfitting is caused by noise in the training…
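To make the symptom concrete, the following sketch fits a deliberately oversized, unregularized network to a tiny, randomly labeled dataset. The dataset and network sizes are arbitrary choices for illustration; the point is the typical signature of overfitting: training accuracy climbs toward 1.0 while validation accuracy stays near chance.

```python
import numpy as np
from tensorflow.keras import layers, models

# Tiny dataset with random labels: there is no real pattern to learn,
# so anything the network "learns" is memorized noise.
rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(100, 20)), rng.integers(0, 2, size=100)
x_val, y_val = rng.normal(size=(500, 20)), rng.integers(0, 2, size=500)

# An oversized network for this amount of data, with no regularization
model = models.Sequential([
    layers.Dense(256, activation="relu", input_shape=(20,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                    epochs=100, batch_size=16, verbose=0)

# Training accuracy approaches 1.0 (memorization), while validation
# accuracy hovers around 0.5 (chance): the gap is overfitting.
print("train accuracy:", round(history.history["accuracy"][-1], 3))
print("val accuracy:  ", round(history.history["val_accuracy"][-1], 3))
```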
