zbMATH — the first resource for mathematics

An incremental decomposition method for unconstrained optimization. (English) Zbl 1334.90071
Summary: In this work we consider the problem of minimizing a sum of continuously differentiable functions. The vector of variables is partitioned into two blocks, and the objective function is assumed to be convex with respect to one of the blocks. Problems with this structure arise, for instance, in machine learning. To exploit the structure of the objective function, and to account for the possibly huge number of terms in the sum, we propose a decomposition algorithm combined with an incremental gradient strategy. Global convergence of the proposed algorithm is proved. Computational experiments on large-scale real problems show that the proposed approach is effective compared with existing algorithms.
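The flavor of such a scheme can be illustrated with a minimal sketch. This is not the authors' algorithm: the model (a scalar nonlinear feature times a linear coefficient), the data, and the step-size rule are all illustrative assumptions. The nonconvex block `u` is updated with incremental (one-term-at-a-time) gradient steps, while the block `w`, in which the objective is convex, is minimized exactly by a least-squares solve.

```python
import numpy as np

# Hedged sketch of a two-block decomposition with an incremental gradient
# strategy. Objective: F(u, w) = sum_i 0.5 * (w * tanh(u * x_i) - y_i)^2,
# which is convex in w for fixed u. Model and step sizes are assumptions.

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * np.tanh(0.7 * x) + 0.05 * rng.normal(size=200)

u, w = 0.1, 0.0  # nonconvex block u, convex block w

for epoch in range(50):
    # Block 1: one incremental-gradient sweep over the terms f_i w.r.t. u,
    # with a diminishing step size (a standard choice for convergence).
    step = 0.01 / (1 + epoch)
    for xi, yi in zip(x, y):
        r = w * np.tanh(u * xi) - yi                      # residual of term i
        u -= step * r * w * (1 - np.tanh(u * xi) ** 2) * xi  # d/du of 0.5*r^2
    # Block 2: exact minimization over the convex block w (1-D least squares;
    # the tiny ridge term only guards against division by zero).
    phi = np.tanh(u * x)
    w = float(phi @ y / (phi @ phi + 1e-12))
```

Because the sweep over `u` touches one term of the sum at a time, each outer iteration costs one pass over the data plus a cheap convex solve, which is the point of combining decomposition with an incremental strategy when the number of terms is huge.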

90C06 Large-scale problems in mathematical programming
Full Text: DOI
[1] Bertsekas, D. P., Nonlinear programming, (1999), Athena Scientific USA · Zbl 1015.90077
[2] Bertsekas, D. P.; Tsitsiklis, J. N., Parallel and distributed computation: numerical methods, (1989), Prentice-Hall Inc Upper Saddle River, NJ, USA · Zbl 0743.65107
[3] Bertsekas, D. P.; Tsitsiklis, J. N., Gradient convergence in gradient methods with errors, SIAM J. Optim., 10, 3, 627-642, (2000) · Zbl 1049.90130
[4] D.P. Bertsekas, Incremental gradient, subgradient, and proximal methods for convex optimization: a survey, in: S. Sra, S. Nowozin, S.J. Wright (Eds.), Optimization for Machine Learning, MIT Press, 2012, pp. 85-118 (Chapter 4).
[5] J. Bi, T. Zhang, Support vector machines with input data uncertainty, in: Proceedings of Advances in Neural Information Processing Systems (NIPS), 2004.
[6] Bishop, C. M., Pattern recognition and machine learning, (2006), Springer · Zbl 1107.68072
[7] Björck, Å., Numerical methods for least squares problems, (1996), Society for Industrial and Applied Mathematics (SIAM) Philadelphia, PA · Zbl 0847.65023
[8] Blatt, D.; Hero, A. O.; Gauchman, H., A convergent incremental gradient method with a constant step size, SIAM J. Optim., 18, 1, 29-51, (2007) · Zbl 1154.90015
[9] S. Ding, H. Zhao, Y. Zhang, X. Xu, R. Nie, Extreme learning machine: algorithm, theory and applications, Artif. Intell. Rev., in press.
[10] Grippo, L.; Sciandrone, M., Globally convergent block-coordinate techniques for unconstrained optimization, Optim. Methods Softw., 10, 4, 587-637, (1999) · Zbl 0940.65070
[11] Grippo, L., Convergent on-line algorithms for supervised learning in neural networks, IEEE Trans. Neural Netw., 11, 6, 1284-1299, (2000)
[12] G. Scutari, F. Facchinei, P. Song, D. Palomar, J.S. Pang, Decomposition by partial linearization: parallel optimization of multi-agent systems, in: IEEE Transactions on Signal Processing, submitted for publication. · Zbl 1394.94507
[13] Solodov, M. V., Incremental gradient algorithms with stepsizes bounded away from zero, Comput. Optim. Appl., 11, 23-35, (1998) · Zbl 0915.65061