Date of Award

8-2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Mathematical Sciences

Committee Chair/Advisor

Yuyuan Ouyang

Committee Member

Yibo Xu

Committee Member

Hao Hu

Committee Member

Matthew Saltzman

Committee Member

Boshi Yang

Abstract

The purpose of this dissertation is to explore first-order methods for computing approximate solutions to convex smooth optimization problems with homogeneous linear constraints. It consists of three interconnected research projects.

In the first project, we study the problem of computing the projection of a given vector onto the kernel of a symmetric positive semi-definite matrix. The complexity of an algorithm for this problem is measured by the total number of matrix-vector multiplications required to compute an approximate solution. Such problems arise commonly in consensus optimization, where the total number of matrix-vector multiplications equals the total number of communication rounds needed to reach consensus. We show that an accelerated gradient descent algorithm can compute an $\varepsilon$-approximate solution with at most $\mathcal{O}(\log(1/\varepsilon))$ matrix-vector multiplications.
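To make the flavor of this approach concrete, here is a minimal Python sketch, not the dissertation's implementation, of an accelerated iteration for projecting a vector $v$ onto the kernel of a symmetric positive semi-definite matrix $W$. It assumes $W$ is accessed only through matrix-vector products and that bounds $\mu$ and $L$ on its smallest nonzero and largest eigenvalues are known; the function name, parameters, and stopping rule are illustrative assumptions.

    import numpy as np

    def kernel_projection(matvec, v, mu, L, tol=1e-8, max_iter=10000):
        """Approximate the projection of v onto ker(W) for symmetric PSD W.

        matvec -- callable returning W @ x (the only access to W)
        mu, L  -- bounds on the smallest nonzero / largest eigenvalue of W
        Each loop iteration costs exactly one matrix-vector multiplication.
        """
        beta = (np.sqrt(L / mu) - 1) / (np.sqrt(L / mu) + 1)  # momentum weight
        x, x_prev = v.copy(), v.copy()
        for _ in range(max_iter):
            y = x + beta * (x - x_prev)   # Nesterov extrapolation
            g = matvec(y)                 # gradient of (1/2) y^T W y
            if np.linalg.norm(g) <= tol:  # range component is nearly gone
                return y
            x_prev, x = x, y - g / L      # gradient step with step size 1/L
        return x

The underlying observation is that gradient descent on $f(x) = \tfrac{1}{2}x^{\top}Wx$ started at $v$ leaves the kernel component of the iterates untouched while geometrically shrinking the range component, so the iterates converge to the projection of $v$ onto $\ker(W)$; each iteration costs one matrix-vector multiplication, consistent with the $\mathcal{O}(\log(1/\varepsilon))$ bound above.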

In the second project, we study convex smooth optimization with general homogeneous linear constraints. This problem includes the decentralized multi-agent optimization problem as a special case. Motivated by \cite{lihuan2020decentralized}, we show that the classical accelerated gradient method can be adapted to solve this problem. The key adaptation is to allow the accelerated gradient method to solve its subproblems approximately; to do so, we call the algorithm designed in the first project. We call the resulting algorithm the accelerated penalty method (APM). We show that, to compute an $\varepsilon$-approximate solution when the objective function is strongly convex, the total numbers of gradient evaluations of the objective function and of matrix-vector multiplications involving the constraint matrix are bounded by $\mathcal{O}(\log(1/\varepsilon))$ and $\mathcal{O}(\log^2(1/\varepsilon))$, respectively. When the objective function is non-strongly convex, the corresponding complexity bounds are $\mathcal{O}(1/\sqrt{\varepsilon})$ and $\mathcal{O}((1/\sqrt{\varepsilon})\log(1/\varepsilon))$, respectively.
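As a rough illustration of the penalty mechanism, and not the dissertation's actual APM, the following Python sketch runs Nesterov's accelerated gradient method on the penalized objective $f(x) + \tfrac{\beta}{2}\|Kx\|^2$ and doubles the penalty weight across stages; the staging, the doubling schedule, and all names are assumptions made for exposition.

    import numpy as np

    def accelerated_penalty_sketch(grad_f, Lf, Kmat, x0, n_stages=10,
                                   beta0=1.0, iters_per_stage=200):
        """Illustrative penalty scheme for min f(x) s.t. K x = 0:
        Nesterov's method on f(x) + (beta/2)||Kx||^2, doubling beta."""
        x, beta = x0.copy(), beta0
        L_K = np.linalg.norm(Kmat, 2) ** 2    # spectral norm squared, ||K||^2
        for _ in range(n_stages):
            L = Lf + beta * L_K               # smoothness of penalized objective
            x_prev, t = x.copy(), 1.0
            for _ in range(iters_per_stage):
                t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
                y = x + ((t - 1) / t_next) * (x - x_prev)     # extrapolation
                g = grad_f(y) + beta * (Kmat.T @ (Kmat @ y))  # penalized gradient
                x_prev, x = x, y - g / L                      # gradient step
                t = t_next
            beta *= 2                         # tighten the constraint penalty
        return x

In the dissertation's APM, the subproblems are instead solved approximately by the projection routine from the first project; the sketch above replaces that inexact-subproblem machinery with plain inner gradient steps for brevity.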

In the third project, we show that another variant of the accelerated gradient method can be adapted to develop an APM analogue. It should be noted that the adaptation is non-trivial, since APM is not a straightforward implementation of the accelerated gradient method but requires a penalty strategy and inexact solutions to the subproblems. We show that, with a slightly different penalty strategy, we can develop a second version of the accelerated penalty method (APM2). The proposed APM2 has the same order of complexity as APM.
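Since the abstract does not specify APM2's penalty strategy, the following is only one hypothetical contrast to the staged schedule above: fixing the penalty weight once from the target accuracy $\varepsilon$. It reuses the accelerated_penalty_sketch helper from the previous sketch and is purely illustrative.

    def apm2_style_sketch(grad_f, Lf, Kmat, x0, eps, iters=2000):
        # Hypothetical variant: choose the penalty weight once from the
        # target accuracy instead of doubling it across stages.
        return accelerated_penalty_sketch(grad_f, Lf, Kmat, x0,
                                          n_stages=1, beta0=1.0 / eps,
                                          iters_per_stage=iters)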

Available for download on Sunday, August 31, 2025
