Abstract
Training a neural network is a complex and time-consuming process because many combinations of hyperparameters have to be adjusted and tested. One of the most crucial hyperparameters is the learning rate, which controls the speed and direction of the weight updates during training. We propose an adaptive scheduler, the Gradient-based Learning Rate scheduler (GLR), that significantly reduces the tuning effort by exposing a single user-defined parameter. GLR achieves results competitive with state-of-the-art schedulers and optimizers across a very wide set of experiments. The computational cost of our method is negligible, and it can be used to train different network topologies.
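The paper's algorithm is not reproduced on this page, so the following is only a minimal sketch of what a gradient-based learning-rate scheduler with a single user-defined parameter might look like in PyTorch. The class name `GradientLRScheduler`, the parameter `alpha`, and the update rule (rescaling the learning rate from the ratio of successive gradient norms) are illustrative assumptions, not the GLR rule described in the paper.

```python
# Illustrative sketch only: NOT the GLR algorithm from the paper.
# It shows the general shape of a gradient-based LR scheduler in PyTorch
# that exposes a single user-defined parameter (here called `alpha`).
import torch


class GradientLRScheduler:
    """Rescale the learning rate from the global gradient norm (hypothetical rule)."""

    def __init__(self, optimizer, alpha=0.1):
        self.optimizer = optimizer
        self.alpha = alpha          # single user-defined parameter (assumed name)
        self.prev_norm = None

    def step(self):
        # Global L2 norm of all parameter gradients, computed after backward().
        grads = [p.grad.detach().flatten()
                 for group in self.optimizer.param_groups
                 for p in group["params"] if p.grad is not None]
        norm = torch.cat(grads).norm().item() if grads else 0.0

        if self.prev_norm and norm > 0:
            # Hypothetical rule: shrink the LR when gradients grow,
            # enlarge it when they shrink, damped by `alpha`.
            ratio = self.prev_norm / norm
            for group in self.optimizer.param_groups:
                group["lr"] *= ratio ** self.alpha
        self.prev_norm = norm


# Usage: call scheduler.step() after loss.backward() and before optimizer.step().
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = GradientLRScheduler(optimizer, alpha=0.1)
```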
Notes
- 1.
- 2.
- 3.
- 4.
- 5. It consists of only one linear layer.
- 6. It is generated from VGG11 by removing 4 convolutional layers (both topologies are sketched below).
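For concreteness, here is a sketch of the two reference topologies mentioned in the notes above, under the assumption that they are built in PyTorch for CIFAR-sized inputs: a single-linear-layer classifier and a slimmed VGG11 with four of its eight convolutional layers removed. The input size, layer widths, and the choice of which conv layers to drop are illustrative guesses, not the exact configurations used in the paper.

```python
# Illustrative sketches of the footnoted topologies; exact layer choices
# (input size, widths, which VGG11 conv layers are removed) are assumptions.
import torch.nn as nn


class LinearNet(nn.Module):
    """Note 5: a network consisting of only one linear layer (assumed CIFAR input)."""

    def __init__(self, in_features=3 * 32 * 32, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.fc(x.flatten(1))


def make_reduced_vgg11(num_classes=10):
    """Note 6: a VGG11-like network with 4 of the 8 conv layers removed.

    VGG11 has conv widths 64, 128, 256, 256, 512, 512, 512, 512; here we keep
    only the first conv of each stage (an assumed choice, not the paper's).
    """
    cfg = [64, "M", 128, "M", 256, "M", 512, "M", 512, "M"]
    layers, in_ch = [], 3
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(2))
        else:
            layers += [nn.Conv2d(in_ch, v, 3, padding=1), nn.ReLU(inplace=True)]
            in_ch = v
    # For 32x32 inputs the five poolings leave a 1x1 spatial map of 512 channels.
    return nn.Sequential(*layers, nn.Flatten(), nn.Linear(512, num_classes))
```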
Acknowledgements
We acknowledge financial support from the PNRR MUR project PE0000013-FAIR.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Napoli Spatafora, M.A., Ortis, A., Battiato, S. (2023). GLR: Gradient-Based Learning Rate Scheduler. In: Foresti, G.L., Fusiello, A., Hancock, E. (eds) Image Analysis and Processing – ICIAP 2023. ICIAP 2023. Lecture Notes in Computer Science, vol 14233. Springer, Cham. https://doi.org/10.1007/978-3-031-43148-7_23
DOI: https://doi.org/10.1007/978-3-031-43148-7_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43147-0
Online ISBN: 978-3-031-43148-7
eBook Packages: Computer Science (R0)