Date of Award
8-2022
Document Type
Thesis
Degree Name
Master of Science (MS)
Department
Electrical Engineering
Committee Chair/Advisor
Yongqiang Wang
Committee Member
Yingjie Lao
Committee Member
Yongkai Wu
Abstract
Stepsizes play a crucial role in the convergence of optimization algorithms, yet they typically require tedious manual tuning to achieve near-optimal convergence. Recently, an adaptive method that automates the stepsize was proposed for centralized optimization. However, this method is not directly applicable to decentralized optimization: running it independently at each agent produces heterogeneous stepsizes across agents. Furthermore, naively imposing consensus on the agents' stepsizes to mitigate this heterogeneity can degrade performance and even lead to divergence.
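For context, the centralized adaptive rule alluded to above is plausibly of the form used in Malitsky and Mishchenko's adaptive gradient descent, which estimates an inverse local Lipschitz constant from successive iterates (an assumption; the abstract does not name the method):

    x_{k+1} = x_k - \lambda_k \nabla f(x_k),
    \lambda_k = \min\Big\{ \sqrt{1 + \theta_{k-1}}\,\lambda_{k-1},\ \frac{\|x_k - x_{k-1}\|}{2\,\|\nabla f(x_k) - \nabla f(x_{k-1})\|} \Big\},
    \theta_k = \lambda_k / \lambda_{k-1}.

Run independently at each agent i on its local objective f_i, a rule of this kind yields agent-dependent stepsizes \lambda_k^i, which is the heterogeneity the abstract refers to.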
This thesis proposes an algorithm that removes the tedious manual tuning of stepsizes in decentralized optimization. The proposed algorithm automates the stepsize and applies dynamic consensus to the agents' stepsizes, combined with a simple filter that reduces stepsize heterogeneity. We show experimentally that, without this filter, consensus between the agents' stepsizes can cause divergence because the local stepsizes change too rapidly. We support the algorithm with theoretical guarantees and with experiments on standard machine learning problems: logistic regression, matrix factorization (whose gradients are not globally Lipschitz), and CIFAR-10 image classification.
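The following is a minimal sketch, in Python, of the idea the abstract describes: each agent adapts a local stepsize from a curvature estimate, and a consensus step on the stepsizes is smoothed by a simple first-order filter. The mixing matrix W, filter coefficient beta, and the specific local adaptation rule are illustrative assumptions, not the thesis's exact algorithm.

    import numpy as np

    def decentralized_autostep(grads, x0, W, steps=200, beta=0.1, alpha0=1e-3):
        """grads: per-agent gradient functions; W: doubly stochastic mixing matrix."""
        n = len(grads)
        x_prev = np.tile(np.asarray(x0, dtype=float), (n, 1))   # one iterate per agent
        g_prev = np.stack([grads[i](x_prev[i]) for i in range(n)])
        alpha = np.full(n, alpha0)                              # per-agent stepsizes
        x = x_prev - alpha[:, None] * g_prev                    # bootstrap gradient step
        for _ in range(steps):
            g = np.stack([grads[i](x[i]) for i in range(n)])
            # Local adaptation: the ratio of iterate change to gradient change
            # estimates an inverse local Lipschitz constant; growth is capped.
            num = np.linalg.norm(x - x_prev, axis=1)
            den = 2.0 * np.linalg.norm(g - g_prev, axis=1) + 1e-12
            alpha_local = np.minimum(2.0 * alpha, num / den)
            # Dynamic consensus on stepsizes through a simple first-order filter,
            # so abrupt changes in one agent's stepsize cannot destabilize neighbors.
            alpha = (1.0 - beta) * alpha + beta * (W @ alpha_local)
            # Consensus on iterates plus a local gradient step (DGD-style update).
            x_prev, g_prev = x, g
            x = W @ x - alpha[:, None] * g
        return x.mean(axis=0), alpha

    # Toy usage: three agents minimize the average of f_i(x) = ||x - c_i||^2,
    # whose minimizer is the mean of the c_i.
    rng = np.random.default_rng(0)
    c = rng.normal(size=(3, 2))
    grads = [lambda x, ci=ci: 2.0 * (x - ci) for ci in c]
    W = np.full((3, 3), 1.0 / 3.0)          # fully connected, uniform averaging
    x_star, alpha = decentralized_autostep(grads, np.zeros(2), W)
    print(x_star, c.mean(axis=0))           # should approximately agree

Setting beta = 1 recovers raw consensus on the locally adapted stepsizes; the abstract's point is that this unfiltered variant can diverge, whereas a small beta damps rapid local changes.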
Recommended Citation
Liggett, Benjamin, "Distributed Learning with Automated Stepsizes" (2022). All Theses. 3883.
https://open.clemson.edu/all_theses/3883
Included in
Artificial Intelligence and Robotics Commons, Controls and Control Theory Commons, Theory and Algorithms Commons