A randomized incremental primal-dual method for decentralized consensus optimization. (English) Zbl 1470.90049

MSC:

90C06 Large-scale problems in mathematical programming
90C25 Convex programming
90C30 Nonlinear programming

Software:

SAGA; D-ADMM; MLbase

References:

[1] Aybat, N. S., Wang, Z., Lin, T. and Ma, S., Distributed linearized alternating direction method of multipliers for composite convex consensus optimization, IEEE Trans. Automat. Control63(1) (2018) 5-20. · Zbl 1390.90420
[2] Bertsekas, D. P. and Tsitsiklis, J. N., Parallel and Distributed Computation: Numerical Methods, Vol. 23 (Prentice Hall, Englewood Cliffs, NJ, 1989). · Zbl 0743.65107
[3] P. Bianchi, W. Hachem and F. Iutzeler, A stochastic coordinate descent primal-dual algorithm and applications to large-scale composite optimization, preprint (2014), arXiv:1407.0898. · Zbl 1359.90090
[4] Bianchi, P., Hachem, W. and Iutzeler, F., A coordinate descent primal-dual algorithm and application to distributed asynchronous optimization, IEEE Trans. Automat. Control61(10) (2016) 2947-2957. · Zbl 1359.90090
[5] Blatt, D., Hero, A. O. and Gauchman, H., A convergent incremental gradient method with a constant step size, SIAM J. Optim.18(1) (2007) 29-51. · Zbl 1154.90015
[6] Boyd, S., Parikh, N., Chu, E., Peleato, B. and Eckstein, J., Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn.3(1) (2011) 1-122. · Zbl 1229.90122
[7] Chang, T.-H., Hong, M. and Wang, X., Multi-agent distributed optimization via inexact consensus ADMM, IEEE Trans. Signal Process.63(2) (2015) 482-497. · Zbl 1393.90124
[8] Combettes, P. L. and Pesquet, J.-C., Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping, SIAM J. Optim.25(2) (2015) 1221-1248. · Zbl 1317.65135
[9] Defazio, A., Bach, F. and Lacoste-Julien, S., SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives, in NIPS Proc. Conf. Advances in Neural Information Processing Systems (Montréal, Canada, 2014), pp. 1646-1654.
[10] J. C. Duchi, S. Chaturapruek and C. Ré, Asynchronous stochastic convex optimization, preprint (2015), arXiv:1508.00882.
[11] Erseghe, T., Zennaro, D., Dall’Anese, E. and Vangelista, L., Fast consensus by the alternating direction multipliers method, IEEE Trans. Signal Process.59(11) (2011) 5523-5537. · Zbl 1391.93137
[12] Forero, P. A., Cano, A. and Giannakis, G. B., Consensus-based distributed support vector machines, J. Mach. Learn. Res.11 (2010) 1663-1707. · Zbl 1242.68222
[13] Gan, L., Topcu, U. and Low, S. H., Optimal decentralized protocol for electric vehicle charging, IEEE Trans. Power Syst.28(2) (2013) 940-951.
[14] Hazan, E. and Luo, H., Variance-reduced and projection-free stochastic optimization, in Int. Conf. Machine Learning (New York, USA, 2016), pp. 1263-1271.
[15] Hong, M. and Chang, T.-H., Stochastic proximal gradient consensus over random networks, IEEE Trans. Signal Process.65(11) (2017) 2933-2948. · Zbl 1414.94247
[16] Hong, M., Luo, Z.-Q. and Razaviyayn, M., Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems, SIAM J. Optim.26(1) (2016) 337-364. · Zbl 1356.49061
[17] Iutzeler, F., Ciblat, P., Hachem, W. and Jakubowicz, J., New broadcast based distributed averaging algorithm over wireless sensor networks, in 2012 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Kyoto, Japan, 2012), pp. 3117-3120.
[18] Jakovetić, D., Moura, J. M. and Xavier, J., Linear convergence rate of a class of distributed augmented Lagrangian algorithms, IEEE Trans. Automat. Control60(4) (2015) 922-936. · Zbl 1360.90199
[19] Johnson, R. and Zhang, T., Accelerating stochastic gradient descent using predictive variance reduction, in NIPS Proc. Conf. Advances in Neural Information Processing Systems (Lake Tahoe, USA, 2013), pp. 315-323.
[20] Kraska, T., Talwalkar, A., Duchi, J. C., Griffith, R., Franklin, M. J. and Jordan, M. I., MLbase: A distributed machine-learning system, in CIDR, Vol. 1 (2013), pp. 2-1.
[21] G. Lan, S. Lee and Y. Zhou, Communication-efficient algorithms for decentralized and stochastic optimization, preprint (2017), arXiv:1701.03961. · Zbl 1437.90125
[22] Lan, G. and Zhou, Y., An optimal randomized incremental gradient method, Math. Program.171(1-2) (2017) 1-49.
[23] Lan, G. and Zhou, Y., Random gradient extrapolation for distributed and stochastic optimization, SIAM J. Optim.28(4) (2018) 2753-2782. · Zbl 1401.90156
[24] Li, M., Andersen, D. G., Smola, A. J. and Yu, K., Communication efficient distributed machine learning with the parameter server, in NIPS Proc. Conf. Advances in Neural Information Processing Systems (Montreal, Canada, 2014), pp. 19-27.
[25] Lin, H., Mairal, J. and Harchaoui, Z., A universal catalyst for first-order optimization, in NIPS Proc. Conf. Advances in Neural Information Processing Systems (Montreal, Canada, 2015), pp. 3384-3392.
[26] Ling, Q., Shi, W., Wu, G. and Ribeiro, A., DLM: Decentralized linearized alternating direction method of multipliers, IEEE Trans. Signal Process.63(15) (2015) 4051-4064. · Zbl 1394.94328
[27] Lo, C.-H. and Ansari, N., Decentralized controls and communications for autonomous distribution networks in smart grid, IEEE Trans. Smart Grid4(1) (2013) 66-77.
[28] Makhdoumi, A. and Ozdaglar, A., Broadcast-based distributed alternating direction method of multipliers, in 2014 52nd Annual Allerton Conf. Communication, Control, and Computing (Allerton) (IEEE, Monticello, Illinois, USA, 2014), pp. 270-277.
[29] Mokhtari, A. and Ribeiro, A., DSA: Decentralized double stochastic averaging gradient algorithm, J. Mach. Learn. Res.17(1) (2016) 2165-2199. · Zbl 1360.68699
[30] Mota, J. F., Xavier, J. M., Aguiar, P. M. and Puschel, M., D-ADMM: A communication-efficient distributed algorithm for separable optimization, IEEE Trans. Signal Process.61(10) (2013) 2718-2723. · Zbl 1393.94059
[31] Nedić, A., Asynchronous broadcast-based convex optimization over a network, IEEE Trans. Automat. Control56(6) (2011) 1337-1351. · Zbl 1368.90126
[32] Nedić, A. and Olshevsky, A., Distributed optimization over time-varying directed graphs, IEEE Trans. Automat. Control60(3) (2015) 601-615. · Zbl 1360.90262
[33] Nedić, A. and Ozdaglar, A., Distributed subgradient methods for multi-agent optimization, IEEE Trans. Automat. Control54(1) (2009) 48-61. · Zbl 1367.90086
[34] J.-C. Pesquet and A. Repetti, A class of randomized primal-dual algorithms for distributed optimization, preprint (2014), arXiv:1406.6404. · Zbl 1336.65113
[35] Rabbat, M. and Nowak, R., Distributed optimization in sensor networks, in Proc. 3rd Int. Symp. Information Processing in Sensor Networks (ACM, 2004), pp. 20-27.
[36] Schizas, I. D., Ribeiro, A. and Giannakis, G. B., Consensus in ad hoc WSNs with noisy links—part I: Distributed estimation of deterministic signals, IEEE Trans. Signal Process.56(1) (2008) 350-364. · Zbl 1390.94395
[37] Schmidt, M., Le Roux, N. and Bach, F., Minimizing finite sums with the stochastic average gradient, Math. Program.162(1-2) (2017) 83-112. · Zbl 1358.90073
[38] Shi, W., Ling, Q., Wu, G. and Yin, W., EXTRA: An exact first-order algorithm for decentralized consensus optimization, SIAM J. Optim.25(2) (2015) 944-966. · Zbl 1328.90107
[39] Shi, W., Ling, Q., Yuan, K., Wu, G. and Yin, W., On the linear convergence of the ADMM in decentralized consensus optimization, IEEE Trans. Signal Process.62(7) (2014) 1750-1761. · Zbl 1394.94532
[40] Sirb, B. and Ye, X., Decentralized consensus algorithm with delayed and stochastic gradients, SIAM J. Optim.28(2) (2018) 1232-1254. · Zbl 1396.65098
[41] Song, W.-Z., Huang, R., Xu, M., Ma, A., Shirazi, B. and LaHusen, R., Air-dropped sensor network for real-time high-fidelity volcano monitoring, in Proc. 7th Int. Conf. Mobile Systems, Applications, and Services (ACM, 2009), pp. 305-318.
[42] Wei, E. and Ozdaglar, A., On the \(O(1/k)\) convergence of asynchronous distributed alternating direction method of multipliers, in 2013 IEEE Global Conf. Signal and Information Processing (GlobalSIP) (IEEE, 2013), pp. 551-554.
[43] Wu, T., Yuan, K., Ling, Q., Yin, W. and Sayed, A. H., Decentralized consensus optimization with asynchrony and delays, in Proc. IEEE Asilomar Conf. Signals, Systems, and Computers (Pacific Grove, CA, USA, 2016).
[44] Xiao, L. and Zhang, T., A proximal stochastic gradient method with progressive variance reduction, SIAM J. Optim.24(4) (2014) 2057-2075. · Zbl 1321.65016
[45] Zhang, G. and Heusdens, R., Distributed optimization using the primal-dual method of multipliers, IEEE Trans. Signal Inform. Process. Networks4(1) (2017) 173-187.
[46] Zhang, R. and Kwok, J., Asynchronous distributed ADMM for consensus optimization, in Proc. 31st Int. Conf. Machine Learning (ICML-14) (2014), pp. 1701-1709.
[47] Zhao, L., Song, W.-Z., Ye, X. and Gu, Y., Asynchronous broadcast-based decentralized learning in sensor networks, Int. J. Parallel, Emergent Distrib. Syst.33(6) (2017) 1-19.
[48] S. Zheng and J. T. Kwok, Stochastic variance-reduced ADMM, preprint (2016), arXiv:1604.07070.
[49] Zhu, H., Cano, A. and Giannakis, G. B., Distributed consensus-based demodulation: Algorithms and error analysis, IEEE Trans. Wireless Commun.9(6) (2010) 2044-2054.
[50] Zhu, M. and Martínez, S., Distributed Optimization-Based Control of Multi-Agent Networks in Complex Environments (Springer, 2015). · Zbl 1415.93013