Regularization in Deep Learning — L1, L2, and Dropout
A Guide to the Theory and Practice of the Most Important Regularization Techniques in Deep Learning
Regularization is a set of techniques that prevent overfitting in neural networks and thus improve the accuracy of a Deep Learning model on completely new data from the problem domain. In this article, we will cover the most popular regularization techniques: L1, L2, and dropout.
Table of Contents
- Recap: Overfitting
- What is Regularization?
- L2 Regularization
- L1 Regularization
- Why do L1 and L2 Regularizations work?
- Dropout
- Take-Home Message
1. Recap: Overfitting
One of the most important aspects of training neural networks is avoiding overfitting. We have addressed the issue of overfitting in more detail in this article.
However, let us do a quick recap: Overfitting refers to the phenomenon where a neural network models the training data very well but fails when it sees new data from the same problem domain. Overfitting is caused by noise in the training data, which the neural network picks up during training and learns as an underlying concept of the data.
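To make the symptom concrete, here is a minimal sketch that deliberately provokes overfitting by training an oversized network on very few samples. It assumes TensorFlow/Keras is installed; the dataset (MNIST), layer sizes, and sample counts are illustrative choices, not part of the original article.

```python
# Minimal sketch: provoke overfitting by training a large model on very little data.
# Assumes TensorFlow/Keras; dataset and hyperparameters are illustrative.
import tensorflow as tf

(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()

# Keep only 256 training samples so the network can memorize them.
x_train, y_train = x_train[:256] / 255.0, y_train[:256]
x_val, y_val = x_val[:2000] / 255.0, y_val[:2000]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train, epochs=50,
                    validation_data=(x_val, y_val), verbose=0)

# Training accuracy approaches 1.0 while validation accuracy stalls well below it.
# The growing gap between the two curves is the signature of overfitting.
print("train acc:", history.history["accuracy"][-1])
print("val acc:  ", history.history["val_accuracy"][-1])
```

Adding the regularization techniques discussed in the following sections (L1, L2, or dropout) is precisely what narrows this gap between training and validation performance.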