# Online mirror descent and dual averaging: keeping pace in the dynamic case

H. Fang, N. J. A. Harvey, V. S. Portella, M. P. Friedlander
Journal of Machine Learning Research, 2022

## Abstract

Online mirror descent (OMD) and dual averaging (DA) are two fundamental algorithms for online convex optimization. They are known to have very similar (or even identical) performance guarantees in most scenarios when a fixed learning rate is used. However, with a dynamic learning rate, OMD is provably inferior to DA: it can suffer linear regret, even in common settings such as prediction with expert advice. This hints that the relationship between OMD and DA is not fully understood at present. In this paper, we modify the OMD algorithm with a simple technique that we call stabilization. We give essentially the same abstract regret bound for stabilized OMD and DA by modifying the classical OMD convergence analysis in a careful and modular way, yielding proofs that we believe to be clean and flexible. Simple corollaries of these bounds show that OMD with stabilization and DA enjoy the same performance guarantees in many applications, even under dynamic learning rates. We also shed some light on the similarities between OMD and DA, and show simple conditions under which stabilized OMD and DA generate the same iterates.
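To make the contrast concrete, here is a minimal sketch of the two update rules in the Euclidean case over the probability simplex (as in prediction with expert advice). The helper names and the step-size schedule `eta_t = 1/sqrt(t)` are illustrative assumptions, not the paper's notation; in this special case OMD reduces to projected online gradient descent, while DA re-projects the scaled running gradient sum from the fixed initial point.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (standard sort-based method)."""
    n = len(v)
    u = np.sort(v)[::-1]                  # sort coordinates in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, n + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)  # shift that makes the result sum to 1
    return np.maximum(v - theta, 0.0)

def omd_step(x, grad, eta):
    """One OMD step (Euclidean case): move from the CURRENT iterate, then project."""
    return project_simplex(x - eta * grad)

def da_step(x1, grad_sum, eta):
    """One DA step: project the scaled gradient SUM from the fixed initial point x1."""
    return project_simplex(x1 - eta * grad_sum)

# Toy run with a dynamic learning rate eta_t = 1/sqrt(t) on 3 "experts".
rng = np.random.default_rng(0)
x1 = np.full(3, 1.0 / 3.0)
x_omd, x_da, grad_sum = x1.copy(), x1.copy(), np.zeros(3)
for t in range(1, 11):
    g = rng.standard_normal(3)            # stand-in for a loss (sub)gradient
    eta = 1.0 / np.sqrt(t)
    grad_sum += g
    x_omd = omd_step(x_omd, g, eta)
    x_da = da_step(x1, grad_sum, eta)
```

Note how OMD's next iterate depends on its own previous iterate, while DA's depends only on the initial point and the gradient sum; this structural difference is what dynamic learning rates expose, and what the paper's stabilization technique (roughly, mixing the OMD iterate back toward the initial point) is designed to repair.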

## BiBTeX

```bibtex
@article{fang2022online,
  author  = {Huang Fang and Nicholas J. A. Harvey and Victor S. Portella and Michael P. Friedlander},
  title   = {Online Mirror Descent and Dual Averaging: Keeping Pace in the Dynamic Case},
  journal = {Journal of Machine Learning Research},
  year    = {2022},
  volume  = {23},
  number  = {121},
  pages   = {1--38},
  url     = {http://jmlr.org/papers/v23/21-1027.html}
}
```