
Regularization is used to decrease the capacity of a machine learning model in order to avoid overfitting. Why don't we just use a model with less capacity in the first place (e.g. by decreasing the number of layers)? That would also reduce computation time and memory usage.

My guess would be that different regularization methods make different assumptions about the dataset. If so, what assumptions are made by the common regularization methods (L1, L2, dropout, or any others)?

Thanks in advance!

2 Answers


Regularization does decrease the capacity of the model in some sense, but as you already guessed, different capacity reductions result in models of different quality and are not interchangeable.

L1 can be interpreted as assuming that the influence of different factors (represented by neurons) on each other shouldn't be accepted without significant support from the data (i.e. the gain achieved by a larger influence has to outweigh the L1 penalty associated with the increased absolute value of the parameter that "connects" them).

L2 does the same, but makes the penalty depend on the connection strength: very weak connections need essentially no support (and are therefore not driven all the way to exactly zero), while very strong connections become almost impossible.
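To make this concrete, here is a minimal PyTorch sketch of how the two penalties are typically added to the training loss (the network, the data, and the penalty strengths are made up purely for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical tiny model and batch, only to make the penalty terms concrete.
model = nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
mse = nn.MSELoss()

lambda_l1, lambda_l2 = 1e-4, 1e-4  # assumed regularization strengths

data_loss = mse(model(x), y)

# L1: penalty proportional to |w|; its gradient has constant magnitude,
# so weights without enough "support" from the data are pushed to exactly zero.
l1_penalty = sum(p.abs().sum() for p in model.parameters())

# L2: penalty proportional to w^2; its gradient scales with the weight itself,
# so large weights are punished heavily while near-zero weights are barely touched.
l2_penalty = sum((p ** 2).sum() for p in model.parameters())

loss = data_loss + lambda_l1 * l1_penalty + lambda_l2 * l2_penalty
loss.backward()  # gradients now include both regularization terms
```

In practice the L2 term is often not written out explicitly but applied through the optimizer's weight_decay argument (for plain SGD the two are equivalent).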

Dropout can be interpreted as training a large number of smaller networks and using the approximated average network for inference: "So training a neural network with dropout can be seen as training a collection of 2^n thinned networks with extensive weight sharing, where each thinned network gets trained very rarely, if at all." (Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Srivastava et al.)
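As a rough illustration of the mechanism, here is a sketch of the standard "inverted" dropout (what e.g. torch.nn.Dropout implements); the tensor shape and drop probability below are arbitrary:

```python
import torch

def inverted_dropout(activations: torch.Tensor, p: float = 0.5, training: bool = True) -> torch.Tensor:
    """Zero each activation with probability p during training and scale the
    survivors by 1/(1-p), so the expected activation is unchanged. At inference
    time the full network is used as-is, approximating the average over all
    thinned sub-networks."""
    if not training or p == 0.0:
        return activations
    mask = (torch.rand_like(activations) > p).float()
    return activations * mask / (1.0 - p)

h = torch.randn(4, 8)              # hypothetical hidden-layer activations
print(inverted_dropout(h, p=0.5))  # roughly half of the entries are zeroed
```

Each forward pass samples a different mask, i.e. a different "thinned" network; at test time no units are dropped, which gives the approximated average network mentioned above.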

All these methods make certain combinations of network parameters highly improbable or even impossible to reach for a given dataset, combinations which could otherwise have been the result of training. In this sense, the capacity of the model is reduced. But as one can imagine, some capacity reductions are more useful than others.

leonard

Regularization is not primarily used to avoid overfitting. Regularization shrinks weights that are not "useful" for making good predictions. It is also used in many other models, where it takes on more the role of feature or model selection (regression, logistic regression, boosting).

The benefit of regularization is that you can work with a model which has high capacity, but with regularization you don't need to worry too much about the features (and their representation in the NN). Regularization more or less automatically drops weights that are not very important. So it is a really useful tool, e.g. in cases where you have a lot of information but don't know which information is actually needed to make good predictions.

Dropout is a different thing, since it randomly drops units (and their connections) during training rather than shrinking weights. Shrinking means that weights which do not contribute much to good predictions receive less attention from the model. L1 can shrink weights to exactly zero, while L2 never drives them exactly to zero.
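The difference is easy to see in a small scikit-learn experiment on synthetic data (the feature count, true coefficients, and alpha values below are chosen arbitrarily): Lasso (L1) sets the irrelevant coefficients to exactly zero, while Ridge (L2) only shrinks them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first three features actually matter; the rest are pure noise.
true_coef = np.array([3.0, -2.0, 1.5] + [0.0] * 7)
y = X @ true_coef + rng.normal(scale=0.5, size=200)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=10.0).fit(X, y)  # L2 penalty

print("L1 (Lasso):", np.round(lasso.coef_, 3))  # noise features driven to exactly 0
print("L2 (Ridge):", np.round(ridge.coef_, 3))  # noise features small but non-zero
```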

To learn more about regularization, you may look at An Introduction to Statistical Learning. The book has a really instructive chapter on the topic.

Peter