
Incremental gradient-free method for nonsmooth distributed optimization. (English) Zbl 06794128

Summary: In this paper we consider the minimization of a sum of local convex component functions distributed over a multi-agent network. We first extend Nesterov's random gradient-free method to the incremental setting and then propose incremental gradient-free methods that select the component function in either a cyclic or a randomized order. We provide convergence and iteration-complexity analyses of the proposed methods under suitable stepsize rules. To illustrate the methods, extensive numerical results on a distributed \(l_1\)-regression problem are presented. Compared with existing incremental subgradient-based methods, our methods require only evaluations of the component function values rather than subgradients, which may be preferable in practical engineering applications.
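
The following is a minimal illustrative sketch, not the authors' exact algorithm, of an incremental two-point random gradient-free step in the spirit of Nesterov's Gaussian smoothing, applied to an \(l_1\)-regression objective \(f(x)=\sum_i |a_i^\top x - b_i|\). The component-selection rule ("cyclic" or "random"), the smoothing parameter mu, and the diminishing stepsize \(1/(k+1)\) are assumed choices for illustration only.

```python
import numpy as np

def incremental_gradient_free(A, b, order="cyclic", mu=1e-4, iters=5000, seed=0):
    """Sketch of an incremental random gradient-free method for l1-regression.
    Only function values of one component f_i are evaluated per iteration."""
    rng = np.random.default_rng(seed)
    m, n = A.shape                      # m components (agents), n variables
    x = np.zeros(n)
    for k in range(iters):
        # pick one local component f_i(x) = |a_i^T x - b_i|, cyclically or at random
        i = k % m if order == "cyclic" else rng.integers(m)
        f_i = lambda z: abs(A[i] @ z - b[i])
        # two-point gradient-free oracle via Gaussian smoothing (assumed form)
        u = rng.standard_normal(n)
        g = (f_i(x + mu * u) - f_i(x)) / mu * u
        alpha = 1.0 / (k + 1)           # diminishing stepsize (assumed rule)
        x = x - alpha * g
    return x

# usage sketch: recover x from synthetic l1-regression data
rng = np.random.default_rng(1)
x_true = rng.standard_normal(5)
A = rng.standard_normal((200, 5))
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = incremental_gradient_free(A, b, order="random")
print(np.linalg.norm(x_hat - x_true))
```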

MSC:

47N10 Applications of operator theory in optimization, convex analysis, mathematical programming, economics
49J52 Nonsmooth analysis
