Efficient Full-Matrix Adaptive Regularization
Abstract
Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix. Due to the large number of parameters in machine learning problems, full-matrix preconditioning methods are prohibitively expensive. We show how to modify full-matrix adaptive regularization to make it practical and effective. We also provide a novel theoretical analysis for adaptive regularization in non-convex optimization settings. The core of our algorithm, termed GGT, is the efficient computation of the inverse square root of a low-rank matrix. Our preliminary experiments show improved iteration-wise convergence rates across synthetic tasks and standard deep learning benchmarks, and that the more carefully preconditioned steps sometimes lead to a better solution.
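The low-rank structure is what makes this tractable: with a window of r recent gradients stacked as a matrix G of shape (d, r) with r much smaller than d, the preconditioner (GG^T + eps*I)^{-1/2} can be applied using only an r x r eigendecomposition, never forming the d x d matrix. Below is a minimal NumPy sketch of that trick under the abstract's description; the function name, the damping parameter eps, and the rank-truncation threshold are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def apply_inverse_sqrt(G, g, eps=1e-4):
    """Apply (G G^T + eps*I)^{-1/2} to a gradient g without forming the d x d matrix.

    G   : (d, r) matrix whose columns are recent gradients, with r << d
    g   : (d,) current gradient to precondition
    eps : damping term keeping the inverse square root well defined

    Cost is O(d r^2 + r^3) instead of the O(d^3) of a naive matrix square root.
    (Hypothetical sketch; not the paper's reference implementation.)
    """
    # Eigendecompose the small r x r Gram matrix: G^T G = V diag(lam) V^T.
    lam, V = np.linalg.eigh(G.T @ G)
    lam = np.clip(lam, 0.0, None)          # round-off can make eigenvalues slightly negative
    sigma = np.sqrt(lam)                   # singular values of G

    # Left singular vectors U = G V diag(1/sigma), dropping near-zero directions.
    keep = sigma > 1e-12 * max(sigma.max(), 1.0)
    U = G @ (V[:, keep] / sigma[keep])     # (d, k) with orthonormal columns

    # (G G^T + eps I)^{-1/2}
    #   = U diag((sigma^2 + eps)^{-1/2}) U^T + eps^{-1/2} (I - U U^T),
    # i.e. shrink the step along observed gradient directions, default step elsewhere.
    coeff = 1.0 / np.sqrt(sigma[keep] ** 2 + eps) - 1.0 / np.sqrt(eps)
    return U @ (coeff * (U.T @ g)) + g / np.sqrt(eps)

# Example: precondition one gradient using a window of r = 20 past gradients.
d, r = 10000, 20
rng = np.random.default_rng(0)
G = rng.standard_normal((d, r))
g = rng.standard_normal(d)
step = apply_inverse_sqrt(G, g)
```

With, say, d on the order of millions and r in the low hundreds, the dominant cost is the two (d, r) matrix products, which is what makes full-matrix-style preconditioning feasible at deep learning scale.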
- Publication: arXiv e-prints
- Pub Date: June 2018
- DOI: 10.48550/arXiv.1806.02958
- arXiv: arXiv:1806.02958
- Bibcode: 2018arXiv180602958A
- Keywords: Computer Science - Machine Learning; Mathematics - Optimization and Control; Statistics - Machine Learning
- E-Print: Updated to ICML 2019 camera-ready version. Title of preprint was "The Case for Full-Matrix Adaptive Regularization"