
Primal-dual stochastic distributed algorithm for constrained convex optimization. (English) Zbl 1452.90245

Summary: This paper investigates distributed convex optimization problems over an undirected, connected network, where each node's variable lies in a private constrained convex set and all nodes aim to collectively minimize the sum of the local objective functions. Motivated by machine learning applications in which large-scale training sets are distributed across multiple autonomous nodes, each local objective function is further structured as the average of a moderate number of local instantaneous functions. Neither the local objective functions nor the constraint sets can be shared with other nodes. A primal-dual stochastic algorithm is presented to solve this problem: each node updates its state using unbiased stochastic averaging gradients and then projects onto its private constraint set. At each iteration, every node evaluates the gradient of one randomly selected local instantaneous function and approximates its true local gradient by the average of the most recently evaluated stochastic gradients. In the constrained case, it is shown that, under strong convexity of the local instantaneous functions and Lipschitz continuity of their gradients, the algorithm converges to the global optimal solution almost surely. In the unconstrained case, an explicit linear convergence rate is established. Numerical experiments demonstrate the correctness of the theoretical results.
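The per-node update described above — evaluating the gradient of one randomly chosen instantaneous function, combining it with a table of stored gradients to form an unbiased variance-reduced estimate, then projecting onto the node's private constraint set — can be illustrated with a minimal single-node sketch. This is not the paper's primal-dual algorithm (which also couples nodes through consensus/dual variables); it only shows the SAGA-style stochastic averaging gradient step with projection. The function names `saga_projected`, `grad_fns`, and `project` are illustrative choices, not from the paper.

```python
import numpy as np

def saga_projected(grad_fns, x0, project, step=0.01, iters=1000, seed=0):
    """Sketch of a SAGA-style projected stochastic averaging gradient step.

    grad_fns : list of gradients of the local instantaneous functions f_i.
    project  : Euclidean projection onto the node's private constraint set.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = len(grad_fns)
    # Table holding the most recently evaluated gradient of each f_i.
    table = np.array([g(x) for g in grad_fns])
    avg = table.mean(axis=0)  # average of the stored stochastic gradients
    for _ in range(iters):
        i = rng.integers(m)          # pick one instantaneous function at random
        g_new = grad_fns[i](x)       # evaluate only that gradient
        # Unbiased variance-reduced estimate of the true local gradient.
        v = g_new - table[i] + avg
        # Keep the running average and the table entry up to date.
        avg += (g_new - table[i]) / m
        table[i] = g_new
        # Gradient step followed by projection onto the constraint set.
        x = project(x - step * v)
    return x
```

As a toy usage, minimizing the average of the strongly convex quadratics f_i(x) = (x - a_i)^2 / 2 over the box [0, 1] drives the iterate toward the projection of the mean of the a_i onto the box.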

MSC:

90C25 Convex programming
90C15 Stochastic programming

Software:

Saga; DILAND
