zbMATH — the first resource for mathematics

An optimal method for stochastic composite optimization. (English) Zbl 1273.90136
The stochastic composite optimization problem under consideration consists in minimizing the sum \(f+h\) of two convex functions \(f,h\) defined on a compact convex set \(X\subset \mathbb{R}^{n}\), assuming that \(\nabla f\) and \(h\) are globally Lipschitz. The author proposes two subgradient-type methods. The first is a modified version of the mirror-descent stochastic approximation (SA) method of A. Nemirovski, A. Juditsky, G. Lan and A. Shapiro [SIAM J. Optim. 19, No. 4, 1574–1609 (2009; Zbl 1189.90109)], which substantially improves its rate of convergence. The second is an accelerated stochastic approximation method that achieves an optimal rate of convergence. The advantages of the latter over other existing methods are illustrated by its performance on a particular class of stochastic optimization problems.
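The flavor of the accelerated SA method can be conveyed by a small sketch: a Nesterov-type two-sequence update driven by noisy gradients, with Euclidean projection onto a compact convex set. The toy instance, stepsize schedule, and constants below are illustrative assumptions, not the paper's exact AC-SA algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: minimize f(x) = 0.5 * E||x - xi||^2 over the box
# X = [-1, 1]^n, with xi ~ N(mu, sigma^2 I); the minimizer is the Euclidean
# projection of mu onto X.
n = 5
mu = rng.uniform(-2.0, 2.0, n)
sigma = 0.5
L = 1.0  # Lipschitz constant of grad f(x) = x - mu

def stochastic_grad(x):
    """Unbiased stochastic estimate of grad f(x)."""
    xi = mu + sigma * rng.standard_normal(n)
    return x - xi

def project(x):
    """Euclidean projection onto X = [-1, 1]^n."""
    return np.clip(x, -1.0, 1.0)

def accelerated_sa(iters=2000):
    x = np.zeros(n)     # "prox" sequence
    x_ag = np.zeros(n)  # aggregated (weighted-average) sequence, returned
    for t in range(1, iters + 1):
        alpha = 2.0 / (t + 1)
        x_md = (1.0 - alpha) * x_ag + alpha * x   # search point
        g = stochastic_grad(x_md)
        gamma = t / (4.0 * L + 0.5 * t ** 1.5)    # heuristic decaying step
        x = project(x - gamma * g)
        x_ag = (1.0 - alpha) * x_ag + alpha * x   # weighted averaging
    return x_ag
```

Running `accelerated_sa()` drives `x_ag` toward `np.clip(mu, -1, 1)`; the heuristic \(\gamma_t\) schedule merely balances the deterministic acceleration term against the stochastic noise term and stands in for the paper's carefully tuned stepsizes.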

90C15 Stochastic programming
90C25 Convex programming
62L20 Stochastic approximation
Full Text: DOI
[1] Auslender A., Teboulle M.: Interior gradient and proximal methods for convex and conic optimization. SIAM J. Optim. 16, 697–725 (2006) · Zbl 1113.90118 · doi:10.1137/S1052623403427823
[2] Bauschke H.H., Borwein J.M., Combettes P.L.: Bregman monotone optimization algorithms. SIAM J. Control Optim. 42, 596–636 (2003) · Zbl 1049.90053 · doi:10.1137/S0363012902407120
[3] Becker S., Bobin J., Candès E.J.: NESTA: A Fast and Accurate First-Order Method for Sparse Recovery, Manuscript. California Institute of Technology, Pasadena (2009)
[4] Ben-Tal A., Nemirovski A.: Non-Euclidean restricted memory level method for large-scale convex optimization. Math. Program. 102, 407–456 (2005) · Zbl 1066.90079 · doi:10.1007/s10107-004-0553-4
[5] Benveniste, A., Métivier, M., Priouret, P.: Algorithmes adaptatifs et approximations stochastiques. Masson, 1987. English translation: Adaptive Algorithms and Stochastic Approximations. Springer (1993)
[6] Bertsekas D.: Nonlinear Programming, 2nd edn. Athena Scientific, Belmont (1999) · Zbl 1015.90077
[7] Bregman L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967) · Zbl 0186.23807 · doi:10.1016/0041-5553(67)90040-7
[8] d’Aspremont A.: Smooth optimization with approximate gradient. SIAM J. Optim. 19, 1171–1183 (2008) · Zbl 1180.90378 · doi:10.1137/060676386
[9] d’Aspremont A., Banerjee O., El Ghaoui L.: First-order methods for sparse covariance selection. SIAM J. Matrix Anal. Appl. 30, 56–66 (2008) · Zbl 1156.90423 · doi:10.1137/060670985
[10] Ermoliev Y.: Stochastic quasigradient methods and their application to system optimization. Stochastics 9, 1–36 (1983) · Zbl 0512.90079 · doi:10.1080/17442508308833246
[11] Gaivoronski A.: Nonstationary stochastic programming problems. Kybernetika 4, 89–92 (1978)
[12] Juditsky A., Nazin A., Tsybakov A.B., Vayatis N.: Recursive aggregation of estimators via the mirror descent algorithm with average. Probl. Inf. Transm. 41, n.4 (2005) · Zbl 1123.62044
[13] Juditsky, A., Nemirovski, A., Tauvel, C.: Solving Variational Inequalities with Stochastic Mirror-Prox Algorithm, Manuscript. Georgia Institute of Technology, Atlanta (2008). Submitted to SIAM J. Control Optim.
[14] Juditsky A., Rigollet P., Tsybakov A.B.: Learning by mirror averaging. Ann. Stat. 36, 2183–2206 (2008) · Zbl 1274.62288 · doi:10.1214/07-AOS546
[15] Kiwiel K.C.: Proximal minimization methods with generalized Bregman functions. SIAM J. Control Optim. 35, 1142–1168 (1997) · Zbl 0890.65061 · doi:10.1137/S0363012995281742
[16] Kleywegt A.J., Shapiro A., Homem-de-Mello T.: The sample average approximation method for stochastic discrete optimization. SIAM J. Optim. 12, 479–502 (2001) · Zbl 0991.90090 · doi:10.1137/S1052623499363220
[17] Kushner H.J., Yin G.G.: Stochastic Approximation and Recursive Algorithms and Applications, vol. 35 of Applications of Mathematics. Springer, New York (2003) · Zbl 1026.62084
[18] Lan, G., Lu, Z., Monteiro, R.D.C.: Primal-dual first-order methods with \(\mathcal{O}(1/\epsilon)\) iteration-complexity for cone programming. Math. Program. (2009, to appear)
[19] Lan, G., Monteiro, R.D.C.: Iteration-Complexity of First-Order Penalty Methods for Convex Programming, Manuscript. School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta (June, 2008) · Zbl 1282.90129
[20] Lan G., Monteiro, R.D.C.: Iteration-Complexity of First-Order Augmented Lagrangian Methods for Convex Programming, Manuscript. School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta (May, 2009) · Zbl 1348.90654
[21] Lan, G., Nemirovski, A., Shapiro, A.: Validation analysis of robust stochastic approximation method. Submitted to Math. Program. (2008). http://www.optimization-online.org
[22] Lewis A.S., Wright S.J.: A Proximal Method for Composite Minimization, Manuscript. Cornell University, Ithaca (2009)
[23] Linderoth J., Shapiro A., Wright S.: The empirical behavior of sampling methods for stochastic programming. Ann. Oper. Res. 142, 215–241 (2006) · Zbl 1122.90391 · doi:10.1007/s10479-006-6169-8
[24] Lu Z.: Smooth optimization approach for sparse covariance selection. SIAM J. Optim. 19, 1807–1827 (2009) · Zbl 1179.90257 · doi:10.1137/070695915
[25] Lu, Z., Monteiro, R.D.C., Yuan, M.: Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression, Manuscript. School of ISyE, Georgia Tech, Atlanta (January, 2008) · Zbl 1246.90120
[26] Lu Z., Nemirovski A., Monteiro R.D.C.: Large-scale semidefinite programming via saddle point mirror-prox algorithm. Math. Program. 109, 211–237 (2007) · Zbl 1148.90009 · doi:10.1007/s10107-006-0031-2
[27] Mak W.K., Morton D.P., Wood R.K.: Monte Carlo bounding techniques for determining solution quality in stochastic programs. Oper. Res. Lett. 24, 47–56 (1999) · Zbl 0956.90022 · doi:10.1016/S0167-6377(98)00054-6
[28] Monteiro, R.D.C., Svaiter B.F.: On the Complexity of the Hybrid Proximal Extragradient Method for the Iterates and the Ergodic Mean, Manuscript. School of ISyE, Georgia Tech, Atlanta (March, 2009) · Zbl 1230.90200
[29] Nemirovski A.: Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15, 229–251 (2004) · Zbl 1106.90059 · doi:10.1137/S1052623403425629
[30] Nemirovski A., Juditsky A., Lan G., Shapiro A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19, 1574–1609 (2009) · Zbl 1189.90109 · doi:10.1137/070704277
[31] Nemirovski A., Yudin D.: Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience Series in Discrete Mathematics, vol. XV. Wiley, New York (1983)
[32] Nesterov, Y.E.: A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR 269, 543–547 (1983). Translated as Soviet Math. Dokl.
[33] Nesterov Y.E.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Boston (2004) · Zbl 1086.90045
[34] Nesterov Y.E.: Smooth minimization of nonsmooth functions. Math. Program. 103, 127–152 (2005) · Zbl 1079.90102 · doi:10.1007/s10107-004-0552-5
[35] Nesterov Y.E.: Primal-dual subgradient methods for convex problems. Math. Program. 120, 221–259 (2009) · Zbl 1191.90038 · doi:10.1007/s10107-007-0149-x
[36] Nesterov, Y.E.: Gradient Methods for Minimizing Composite Objective Functions. Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (September, 2007)
[37] Nesterov Y.E.: Smoothing technique and its applications in semidefinite optimization. Math. Program. 110, 245–259 (2007) · Zbl 1126.90058 · doi:10.1007/s10107-006-0001-8
[38] Peña J.: Nash equilibria computation via smoothing techniques. Optima 78, 12–13 (2008)
[39] Pflug, G.C.: Optimization of Stochastic Models: The Interface Between Simulation and Optimization. Kluwer, Boston (1996) · Zbl 0909.90220
[40] Polyak B.T.: New stochastic approximation type procedures. Automat. i Telemekh 7, 98–107 (1990)
[41] Polyak B.T., Juditsky A.B.: Acceleration of stochastic approximation by averaging. SIAM J. Control Optim. 30, 838–855 (1992) · Zbl 0762.62022 · doi:10.1137/0330046
[42] Robbins H., Monro S.: A stochastic approximation method. Ann. Math. Stat. 22, 400–407 (1951) · Zbl 0054.05901 · doi:10.1214/aoms/1177729586
[43] Rockafellar R.T.: Convex Analysis. Princeton University Press, Princeton (1970) · Zbl 0193.18401
[44] Ruszczyński A., Syski W.: A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems. Math. Program. Study 28, 113–131 (1986) · Zbl 0597.90064 · doi:10.1007/BFb0121128
[45] Shapiro A.: Monte Carlo sampling methods. In: Ruszczyński, A., Shapiro, A. (eds) Stochastic Programming, North-Holland, Amsterdam (2003) · Zbl 1053.90103
[46] Shapiro A., Nemirovski A.: On complexity of stochastic programming problems. In: Jeyakumar, V., Rubinov, A.M. (eds) Continuous Optimization: Current Trends and Applications, pp. 111–144. Springer, Berlin (2005) · Zbl 1115.90041
[47] Spall J.C.: Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Wiley, Hoboken (2003) · Zbl 1088.90002
[48] Strassen V.: The existence of probability measures with given marginals. Ann. Math. Stat. 36, 423–439 (1965) · Zbl 0135.18701 · doi:10.1214/aoms/1177700153
[49] Teboulle M.: Convergence of proximal-like algorithms. SIAM J. Optim. 7, 1069–1083 (1997) · Zbl 0890.90151 · doi:10.1137/S1052623495292130
[50] Tseng P.: On Accelerated Proximal Gradient Methods for Convex-Concave Optimization, Manuscript. University of Washington, Seattle (May 2008)
[51] Verweij B., Ahmed S., Kleywegt A.J., Nemhauser G., Shapiro A.: The sample average approximation method applied to stochastic routing problems: a computational study. Comput. Optim. Appl. 24, 289–333 (2003) · Zbl 1094.90029 · doi:10.1023/A:1021814225969
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.