Open Access
Description:
In distributed optimization and control, each network node performs local computation based on its own information and on information received from its neighbors through a communication network, in order to achieve a global objective. Although many distributed optimization and control algorithms have been proposed, core theoretical problems of practical relevance remain open. For example, what convergence properties can be obtained for nonconvex problems? How can time-varying cost and constraint functions be handled? Can these algorithms work under limited communication resources? This thesis contributes to answering these questions by providing new algorithms with better convergence rates under less information exchange than existing algorithms. It consists of three parts.

In the first part, we consider distributed nonconvex optimization problems. These problems are hard to solve, and often only stationary points can be found. We propose distributed primal-dual optimization algorithms under different information feedback settings. Specifically, when full-information feedback or deterministic zeroth-order oracle feedback is available, we show that the proposed algorithms converge sublinearly to a stationary point if each local cost function is smooth. They converge linearly to a global optimum if the global cost function also satisfies the Polyak–Łojasiewicz condition. This condition is weaker than strong convexity, which is a standard condition in the literature for proving linear convergence of distributed optimization algorithms. When stochastic gradient feedback or stochastic zeroth-order oracle feedback is available, we show that the proposed algorithms achieve linear speedup convergence rates, meaning that the convergence rates improve linearly with the number of computing nodes.

In the second part, distributed online convex optimization problems are considered. For such problems, the cost and constraint functions are revealed at the end of each time slot. We focus on time-varying coupled inequality ...
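As background for the Polyak–Łojasiewicz (PL) condition mentioned in the abstract, one standard statement (in generic notation, not taken from the thesis) is that a differentiable function f with minimum value f* satisfies

\[
\tfrac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{\star}\bigr) \quad \text{for all } x,
\]

for some constant \mu > 0. Every \mu-strongly convex function satisfies this inequality, but the converse fails; for instance, f(x) = x^2 + 3\sin^2 x is a well-known nonconvex function that satisfies the PL condition. This is why linear convergence can be obtained under PL without assuming strong convexity.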
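To make the phrase "distributed primal-dual optimization algorithm" concrete, the following is a minimal, generic sketch of one such method for consensus optimization, not the thesis's exact algorithm: the mixing matrix, quadratic local costs, and step sizes are assumptions chosen purely for illustration.

import numpy as np

def local_grad(i, x):
    # Placeholder local gradient; here f_i(x) = 0.5 * (x - i)^2, chosen for illustration.
    return x - i

n = 5                                # number of computing nodes
W = np.full((n, n), 1.0 / n)         # doubly stochastic mixing weights (complete graph assumed)
L = np.eye(n) - W                    # Laplacian-like disagreement matrix of the graph
x = np.zeros(n)                      # primal variable held by each node
v = np.zeros(n)                      # dual variable enforcing the consensus constraint
eta, beta = 0.1, 0.5                 # step size and consensus penalty (illustrative values)

for k in range(200):
    g = np.array([local_grad(i, x[i]) for i in range(n)])
    x_new = x - eta * (g + beta * (L @ x) + v)   # primal descent: local gradient + disagreement + dual
    v = v + eta * beta * (L @ x)                 # dual ascent on the consensus constraint L x = 0
    x = x_new

print(x)  # all entries approach the minimizer of sum_i f_i, i.e. the mean 2.0

Each node only needs its own gradient and its neighbors' current iterates (through L @ x), which is the information pattern described in the abstract; the feedback setting (full gradients, stochastic gradients, or zeroth-order oracle estimates) determines how g is obtained.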
Publisher:
KTH, Reglerteknik ; Stockholm, Sweden
Year of Publication:
2020
Document Type:
Doctoral thesis, monograph ; info:eu-repo/semantics/doctoralThesis ; text ; [Doctoral and postdoctoral thesis]
Language:
eng
Subjects:
Distributed nonconvex optimization ; distributed online convex optimization ; distributed event-triggered control ; primal-dual algorithm ; stochastic gradient descent ; zeroth-order algorithm ; Control Engineering ; Reglerteknik ; Computational Mathematics ; Beräkningsmatematik
Rights:
info:eu-repo/semantics/openAccess
Relations:
TRITA-EECS-AVL ; 2020:45
Content Provider:
Kungliga Tekniska Högskolan, Stockholm: KTHs Publikationsdatabas DiVA
Further name: Royal Institute of Technology, Stockholm: KTHs Publication Database DiVA