Tail bounds for stochastic approximation

M. P. Friedlander, G. Goh
arXiv:1304.5586, 2013

Abstract

Stochastic-approximation gradient methods are attractive for large-scale convex optimization because they offer inexpensive iterations. They are especially popular in data-fitting and machine-learning applications where the data arrives in a continuous stream, or where it is necessary to minimize large sums of functions. It is known that by appropriately decreasing the variance of the error at each iteration, the expected rate of convergence matches that of the underlying deterministic gradient method. This paper gives conditions under which this happens with overwhelming probability.
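
As an illustration only (not the paper's algorithm or its conditions), the sketch below applies the variance-decreasing idea to a least-squares problem: the mini-batch grows geometrically, so the variance of the gradient error shrinks at each iteration and the iterates behave increasingly like those of the deterministic gradient method. The data model, step size, and batch-growth schedule are assumptions made for the example.

  # Minimal sketch: mini-batch gradient descent with a geometrically growing
  # batch, so the gradient-error variance decreases at each iteration.
  import numpy as np

  rng = np.random.default_rng(0)

  # Synthetic data: minimize f(x) = (1/2n) * ||A x - b||^2 over x.
  n, d = 10_000, 20
  A = rng.standard_normal((n, d))
  x_true = rng.standard_normal(d)
  b = A @ x_true + 0.1 * rng.standard_normal(n)

  def batch_gradient(x, idx):
      """Gradient of the least-squares loss restricted to rows `idx`."""
      Ai = A[idx]
      return Ai.T @ (Ai @ x - b[idx]) / len(idx)

  x = np.zeros(d)
  step = n / np.linalg.norm(A, 2) ** 2   # rough 1/L step for the full objective
  batch = 10                             # initial mini-batch size (assumed)

  for k in range(50):
      idx = rng.choice(n, size=min(batch, n), replace=False)
      x -= step * batch_gradient(x, idx)
      batch = int(np.ceil(1.5 * batch))  # grow batch -> error variance decays geometrically
      if k % 10 == 0:
          full_grad = A.T @ (A @ x - b) / n
          print(f"iter {k:3d}  batch {min(batch, n):6d}  ||grad|| = {np.linalg.norm(full_grad):.3e}")

Running the sketch, the full-gradient norm decreases steadily as the batch grows; the paper's contribution is to quantify, with high-probability (tail) bounds rather than only in expectation, when such variance schedules preserve the deterministic rate.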

BibTeX

@misc{friedlander2013tail,
  author        = {M. P. Friedlander and G. Goh},
  title         = {Tail bounds for stochastic approximation},
  year          = {2013},
  month         = {April},
  eprint        = {1304.5586},
  archivePrefix = {arXiv},
  primaryClass  = {cs.IR}
}