Distributed Optimization with Gradient Tracking over Heterogeneous Delay-Prone Directed Networks

Abstract

In this paper, we address the distributed optimization problem over unidirectional networks with possibly time-invariant heterogeneous bounded transmission delays. In particular, we propose a modified version of the Accelerated Distributed Directed OPTimization (ADD-OPT) algorithm, herein called Robustified ADD-OPT (R-ADD-OPT), which solves the distributed optimization problem even when the communication links suffer from heterogeneous but bounded transmission delays. We show that if the gradient step-size of the R-ADD-OPT algorithm lies within a certain range, which also depends on the maximum time delay in the network, then the nodes are guaranteed to converge to the optimal solution of the distributed optimization problem. This step-size range can be computed a priori from the maximum time delay in the network.
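The abstract does not reproduce the update equations of ADD-OPT or of its delay-robust modification. As a rough point of reference only, the sketch below simulates the standard, delay-free ADD-OPT gradient-tracking iteration over a small directed ring with synthetic quadratic local costs; the ring topology, the local costs, and the step-size are assumptions made for illustration and are not the paper's R-ADD-OPT algorithm or its step-size bound.

```python
import numpy as np

# Illustrative sketch of the delay-free ADD-OPT gradient-tracking iteration on
# a small directed ring. The delay-robust modification (R-ADD-OPT) and the
# step-size range derived in the paper are NOT reproduced here; all problem
# data below are made up for illustration.

n, d = 5, 2                        # number of nodes, decision-variable dimension
rng = np.random.default_rng(0)

# Assumed local costs f_i(x) = 0.5 * q_i * ||x - b_i||^2, so the global
# minimizer of sum_i f_i is the weighted average of the b_i.
q = rng.uniform(0.5, 1.5, n)
b = rng.standard_normal((n, d))
grad = lambda i, x: q[i] * (x - b[i])

# Column-stochastic weights for a directed ring: each node keeps half of its
# value and pushes the other half to its successor.
A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 0.5
    A[(j + 1) % n, j] = 0.5

alpha = 0.1                        # gradient step-size (illustrative value)
x = np.zeros((n, d))               # primal iterates
y = np.ones(n)                     # push-sum scaling variables
z = x.copy()                       # re-scaled estimates z_i = x_i / y_i
w = np.array([grad(i, z[i]) for i in range(n)])   # gradient trackers

for _ in range(3000):
    x_new = A @ x - alpha * w
    y_new = A @ y
    z_new = x_new / y_new[:, None]
    w = A @ w + np.array([grad(i, z_new[i]) - grad(i, z[i]) for i in range(n)])
    x, y, z = x_new, y_new, z_new

print("consensus estimate:", np.round(z[0], 4))
print("true minimizer:    ", np.round(q @ b / q.sum(), 4))
```

In this delay-free sketch each node needs only column-stochastic (push-style) weights, which is what makes the scheme suitable for directed networks; the paper's contribution is showing how such an iteration can be made robust to heterogeneous bounded link delays by suitably restricting the step-size.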

Publication
In IEEE European Control Conference (ECC)
Evagoras Makridis
PhD Student | Distributed Decision and Control of Networked Systems

My research interests include autonomous systems in networks, distributed optimization, and data-driven sequential decision-making (reinforcement learning), with applications in quadrotor navigation and resource management.