
Several optimization algorithms with adaptive learning rates, such as AdaGrad, Adam, and RMSProp, have been invented to help converge to the optimum properly. On the other hand, there are learning rate schedulers such as power scheduling and exponential scheduling.
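
For concreteness, here is a minimal sketch of the two approaches I mean (assuming PyTorch; the model and hyperparameters below are just placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # placeholder model

# Option 1: an adaptive learning rate optimizer (per-parameter step sizes).
adam = torch.optim.Adam(model.parameters(), lr=1e-3)

# Option 2: plain SGD combined with a learning rate scheduler
# (exponential decay here; a power/inverse-time schedule could use LambdaLR).
sgd = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(sgd, gamma=0.95)

for epoch in range(10):
    # ... forward pass, loss.backward(), sgd.step() would go here ...
    scheduler.step()  # shrink the SGD learning rate once per epoch
```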

However, I don't understand in what kinds of situations you should use one over the other. I feel that using an adaptive learning rate optimization algorithm such as Adam is simpler and easier to implement than using a learning rate scheduler.

So how do you decide between them, depending on the kind of problem?

Blaszard

1 Answer


I'm not sure about other fields, but recently in the field of deep neural network training there is this arXiv submission, The Marginal Value of Adaptive Gradient Methods in Machine Learning:

Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
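
As a rough illustration of the quoted construction, one could probe the claim on a toy overparameterized, linearly separable problem. This is my own sketch (the data, model, and hyperparameters are assumptions, not the paper's exact setup), so the outcome will depend on seeds and settings:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Overparameterized setup: fewer samples than features, linearly separable labels.
n, d = 50, 200
w_true = torch.randn(d)
X = torch.randn(n, d)
y = (X @ w_true > 0).float()

def train_and_eval(optimizer_cls, **kwargs):
    """Train a linear classifier with the given optimizer and return test accuracy."""
    model = nn.Linear(d, 1)
    opt = optimizer_cls(model.parameters(), **kwargs)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(500):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(1), y)
        loss.backward()
        opt.step()
    # Evaluate on fresh data drawn from the same distribution.
    X_test = torch.randn(1000, d)
    y_test = (X_test @ w_true > 0).float()
    preds = (model(X_test).squeeze(1) > 0).float()
    return (preds == y_test).float().mean().item()

print("SGD test accuracy: ", train_and_eval(torch.optim.SGD, lr=0.1))
print("Adam test accuracy:", train_and_eval(torch.optim.Adam, lr=1e-3))
```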

Emre
derekhh