The work titled Distributed optimization algorithms: Performance analysis and application in machine learning problems by Chariompolis Ioannis is licensed under Creative Commons Attribution 4.0 International
Bibliographic Citation
Ioannis Chariompolis, "Distributed optimization algorithms: Performance analysis and application in machine learning problems", Diploma Work, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2025
https://doi.org/10.26233/heallink.tuc.102659
Distributed optimization allows a set of agents to collectively solve a global minimization problem, defined as the average of each agent's local objective function. In this thesis we examine distributed optimization algorithms in the setting where the agents communicate over a network, so that each agent can exchange information only with its immediate neighbors and cannot share its objective function (and by extension its training data) directly. More specifically, we investigate how these algorithms can be applied to the classical Machine Learning framework of training a parametrized model to predict labels y from input data x. Typically, distributed optimization algorithms over networks consist of two main steps: pooling and averaging the parameters of each agent's neighbors via the consensus protocol, followed by a gradient descent step towards the minimum of each agent's local function. We also consider algorithms that build on this scheme by incorporating correction terms, acceleration via momentum, dual ascent, and multiple communication rounds per gradient computation. We experimentally evaluate the performance of a number of distributed algorithms on objective functions of different classes, namely smooth and strongly convex functions, as well as objectives with constraints.
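The two-step scheme described above (consensus averaging with neighbors, then a local gradient step) can be sketched with a minimal toy example. The setup below is illustrative and not taken from the thesis: five agents on a ring network, each holding a private quadratic f_i(x) = 0.5·(x − b_i)², so the minimizer of the average objective is mean(b); the mixing matrix W and the diminishing step size are standard but assumed choices.

```python
import numpy as np

# Illustrative sketch of decentralized gradient descent (DGD) on a ring.
# Each of n agents holds a private quadratic f_i(x) = 0.5 * (x - b_i)^2,
# so the global minimizer of the average objective is mean(b).
n = 5
rng = np.random.default_rng(0)
b = rng.normal(size=n)   # each agent's private data
x = np.zeros(n)          # each agent's local parameter estimate

# Doubly stochastic mixing matrix: each agent averages its own value with
# those of its two immediate ring neighbors (the consensus protocol).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

# DGD iteration: consensus step followed by a local gradient step.
# A diminishing step size lets the agents reach the exact optimum.
for k in range(2000):
    alpha = 1.0 / (k + 1)
    x = W @ x - alpha * (x - b)  # gradient of f_i at x_i is (x_i - b_i)

print(np.abs(x - b.mean()).max())  # disagreement with the global minimizer
```

With a constant step size the same iteration only converges to a neighborhood of the optimum, which is precisely what the correction terms mentioned above (as in gradient-tracking methods) are designed to eliminate.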