
In this paper, we introduce an asynchronous decentralized accelerated stochastic gradient descent type of algorithm for decentralized stochastic optimization. Since communication and synchronization costs are the major bottlenecks in decentralized optimization, we attempt to reduce these costs through algorithmic design; in particular, we reduce the number of agents involved in each round of updates via randomization. Our major contribution is to develop a class of accelerated randomized decentralized algorithms for solving general convex composite problems. We establish $\mathcal{O}(1/\epsilon)$ (resp., $\mathcal{O}(1/\sqrt{\epsilon})$) communication complexity and $\mathcal{O}(1/\epsilon^{2})$ (resp., $\mathcal{O}(1/\epsilon)$) sampling complexity for solving general convex (resp., strongly convex) problems. It is worth mentioning that our proposed algorithm depends only sublinearly on the Lipschitz constant when a smooth component is present in the objective function. Moreover, we conduct preliminary numerical experiments to demonstrate the advantages of our proposed algorithms over a state-of-the-art synchronous decentralized algorithm.
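To make the randomization idea concrete, the following is a minimal sketch (not the paper's exact accelerated scheme) of a decentralized stochastic gradient loop in which only a randomly chosen subset of agents communicates and updates in each round. All names, the toy quadratic objective, the mixing matrix `W`, and the parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical toy problem: agent i holds f_i(x) = 0.5 * ||x - b_i||^2,
# so the global minimizer is the mean of the b_i vectors.

def make_local_grad(b_i, noise=0.1, rng=None):
    """Stochastic gradient oracle for f_i: grad f_i(x) plus Gaussian noise."""
    rng = rng or np.random.default_rng()
    return lambda x: (x - b_i) + noise * rng.standard_normal(x.shape)

def randomized_decentralized_sgd(grad_fns, W, x0, rounds=500, lr=0.05,
                                 num_active=1, seed=0):
    """Sketch: each round, only `num_active` randomly selected agents mix
    with their neighbors (one row of the mixing matrix W) and take a
    stochastic gradient step; all other agents idle, so per-round
    communication is limited to the activated agents."""
    rng = np.random.default_rng(seed)
    n = len(grad_fns)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))  # one local iterate per agent
    for _ in range(rounds):
        active = rng.choice(n, size=num_active, replace=False)
        for i in active:
            mixed = W[i] @ X                   # average with neighbors only
            X[i] = mixed - lr * grad_fns[i](X[i])
    return X.mean(axis=0)                      # consensus estimate across agents

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d = 4, 3
    b = rng.standard_normal((n, d))
    grads = [make_local_grad(b[i], rng=rng) for i in range(n)]
    W = np.full((n, n), 1.0 / n)               # fully connected, uniform mixing
    x_hat = randomized_decentralized_sgd(grads, W, np.zeros(d))
    print("estimate:", x_hat)
    print("true mean:", b.mean(axis=0))
```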

Guanghui Lan
Yi Zhou