Master of Science in Data Science (M.S.)
Ji Meng Loh
In this thesis, Stochastic Gradient Descent (SGD), an optimization method originally popularized by its computational efficiency, is analyzed using Markov chain methods. We compute, numerically and in some cases analytically, the stationary probability distributions (invariant measures) of the SGD Markov operator over the full range of step sizes (learning rates). These stationary distributions provide insight into how the long-time behavior of SGD samples the objective function near its minimum.
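As an illustration of the kind of computation involved, the stationary distribution of a one-dimensional SGD Markov chain can be estimated by Monte Carlo simulation. The sketch below uses a hypothetical least-squares setup (not the thesis's own code or data): the objective is f(x) = (1/2N) Σᵢ (x − aᵢ)², a stochastic gradient draws one data point at random, and a long run of iterates is histogrammed after a burn-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D least-squares model: f(x) = (1/2N) * sum_i (x - a_i)^2,
# whose minimizer is the sample mean of a. A stochastic gradient draws one
# data point a_i uniformly at random: grad_i(x) = x - a_i.
a = rng.normal(loc=1.0, scale=2.0, size=50)  # synthetic data (assumption)
eta = 0.5                                    # step size / learning rate
n_iter = 200_000
burn_in = 10_000

x = 0.0
samples = np.empty(n_iter)
for k in range(n_iter):
    i = rng.integers(len(a))
    x -= eta * (x - a[i])  # one SGD step using a single-sample gradient
    samples[k] = x

# Discard the transient; the remaining iterates approximate draws from
# the invariant measure of the SGD Markov operator at this step size.
stationary = samples[burn_in:]
hist, edges = np.histogram(stationary, bins=60, density=True)
print("empirical mean of iterates:", stationary.mean())
print("objective minimizer (mean of a):", a.mean())
```

Rerunning with different values of `eta` shows how the shape and spread of the empirical stationary distribution depend on the step size.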
A key focus of this thesis is a systematic one-dimensional study comparing the exact SGD stationary distributions to the Fokker-Planck diffusion approximation, which is commonly used in the literature to characterize the SGD probability distribution in the limit of small step sizes (learning rates). While various error estimates for the diffusion approximation have recently been established, they typically hold in a weak sense rather than in a strong maximum norm. Our study shows that the diffusion approximation converges only slowly, in the maximum norm, to the true stationary distribution. Beyond these large quantitative errors, the exact SGD probability distributions exhibit behavior fundamentally different from the diffusion approximation: they can have compact or singular supports, and non-convex objective functions can admit multiple invariant measures where the diffusion approximation has only one.
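The gap between the exact stationary distribution and its diffusion approximation can be made concrete in the simplest case. For the standard example f(x) = x²/2 with unit-variance gradient noise (an assumption for illustration, not a result quoted from the thesis), the SGD recursion x ← (1 − η)x + ηa has stationary variance η/(2 − η), while the Fokker-Planck approximation yields a Gaussian of variance η/2; the two agree only as η → 0.

```python
# Quadratic toy model f(x) = x^2/2 with unit-variance gradient noise
# (illustrative assumption): the exact SGD stationary variance is
# eta/(2 - eta), while the Fokker-Planck (diffusion) approximation
# gives a Gaussian of variance eta/2. The relative error is -eta/2,
# so the approximation degrades steadily as the step size grows.
for eta in (0.01, 0.1, 0.5, 1.0, 1.5):
    exact = eta / (2.0 - eta)
    fp = eta / 2.0
    print(f"eta={eta:4}: exact var={exact:.4f}, "
          f"FP var={fp:.4f}, rel. error={(fp - exact) / exact:+.2%}")
```

Even this simple case shows a quantitative discrepancy at moderate step sizes; the qualitative failures (compact or singular supports, non-unique invariant measures) appear for more general objectives.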
Finally, we use the Markov operator to establish additional results. (1) For quadratic objective functions, the expected value of the SGD iterates equals the objective function's minimizer for any step size; a practical consequence is that time-averaged SGD solutions converge to the minimizer even when the iterates themselves never reach it. (2) We provide a simple approach to formally deriving Fokker-Planck diffusion approximations using only basic calculus (e.g., integration by parts and Taylor expansions), which may be of interest to the engineering community. (3) We observe that the stationary distributions of the Markov operator lead to additional Fokker-Planck equations with simpler diffusion coefficients than those currently in the literature.
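Result (1) can be seen in an extreme toy case. The sketch below (a hypothetical two-point example, not the thesis's construction) uses data aᵢ ∈ {−1, +1}, so the minimizer is x* = 0; with step size η = 1 every SGD update lands exactly on the sampled data point, so no iterate ever equals the minimizer, yet the running time average converges to it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-point example: f(x) = (1/2) * mean((x - a_i)^2) with
# a_i in {-1, +1}, so the minimizer is x* = 0. With eta = 1, each update
# x <- x - eta*(x - a_i) jumps exactly onto the sampled data point, so
# the iterates are always +/-1 and never touch the minimizer.
a = np.array([-1.0, 1.0])
eta = 1.0
n_iter = 100_000

x = 0.5
iterates = np.empty(n_iter)
for k in range(n_iter):
    x -= eta * (x - a[rng.integers(2)])  # SGD step; here x jumps to +/-1
    iterates[k] = x

print("any iterate equal to the minimizer?", bool(np.any(iterates == 0.0)))
print("time-averaged iterate:", iterates.mean())  # close to the minimizer 0
```

The time average is close to 0 (within Monte Carlo error) even though the invariant measure puts no mass at the minimizer itself.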
McCann, William Joseph, "Stationary probability distributions of stochastic gradient descent and the success and failure of the diffusion approximation" (2021). Theses. 1835.