Algorithms for stochastic optimization with function or expectation constraints. (English) Zbl 1443.90263
Summary: This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with a function or expectation constraint on either decision variables or problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with a constraint on the decision variables. We show that this algorithm exhibits the optimal \(\mathcal{O}(1/\epsilon^2)\) rate of convergence, in terms of both optimality gap and constraint violation, when the objective and constraint functions are generally convex, where \(\epsilon\) denotes the target accuracy in optimality gap and infeasibility. Moreover, we show that this rate of convergence can be improved to \(\mathcal{O}(1/\epsilon)\) if the objective and constraint functions are strongly convex. We then present a variant of CSA, namely the cooperative stochastic parameter approximation (CSPA) algorithm, to deal with the situation where the constraint is defined over problem parameters, and show that it exhibits an optimal rate of convergence similar to that of CSA. It is worth noting that CSA and CSPA are primal methods that require neither iterations in the dual space nor estimates of the size of the dual variables. To the best of our knowledge, this is the first time that such optimal SA methods for solving function or expectation constrained stochastic optimization problems have been presented in the literature.
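
The summary does not spell out the update rule, but the scheme it describes suggests a single-loop primal method that alternates between objective and constraint steps, depending on whether the current iterate looks feasible. The following Python sketch illustrates that idea on a toy problem (minimizing \(\mathbb{E}\|x-\xi\|^2\) over a box subject to a linear constraint \(a^\top x\le b\)); the problem data, stepsizes \(\gamma_t\), tolerances \(\eta_t\), and averaging rule are illustrative assumptions, not the paper's actual specification.

    # Hypothetical sketch of a cooperative-SA-style loop, inferred from the
    # summary only: step along a stochastic subgradient of the objective when
    # the constraint is within tolerance, otherwise along a subgradient of the
    # constraint; output the average of the "objective" iterates.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 5
    a, b = np.ones(d), 1.0             # constraint g(x) = a.x - b <= 0

    def proj_box(x, lo=-2.0, hi=2.0):  # Euclidean projection onto X = [lo, hi]^d
        return np.clip(x, lo, hi)

    def obj_subgrad(x):                # stochastic subgradient of f(x) = E||x - xi||^2
        xi = rng.normal(1.0, 0.5, d)   # one noisy sample of the random parameter
        return 2.0 * (x - xi)

    x = np.zeros(d)
    avg, n_good = np.zeros(d), 0
    for t in range(1, 20001):
        gamma = 1.0 / np.sqrt(t)       # O(1/sqrt(t)) stepsize (convex case)
        eta = 1.0 / np.sqrt(t)         # shrinking constraint tolerance
        if a @ x - b <= eta:           # constraint within tolerance:
            h = obj_subgrad(x)         # objective step; record iterate
            avg += x
            n_good += 1
        else:                          # infeasible: reduce constraint violation
            h = a                      # subgradient of g(x) = a.x - b
        x = proj_box(x - gamma * h)

    x_bar = avg / max(n_good, 1)       # output: average over objective steps
    print("x_bar =", x_bar, "violation =", max(a @ x_bar - b, 0.0))

A strongly convex variant of the same loop would use \(\gamma_t = \Theta(1/t)\) stepsizes, consistent with the improved \(\mathcal{O}(1/\epsilon)\) bound stated above.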

MSC:
90C15 Stochastic programming
90C25 Convex programming
90C06 Large-scale problems in mathematical programming
90C22 Semidefinite programming
49M37 Numerical methods based on nonlinear programming
Software:
Pegasos
References:
[1] Auslender, A.; Teboulle, M., Interior gradient and proximal methods for convex and conic optimization, SIAM J. Optim., 16, 3, 697-725 (2006) · Zbl 1113.90118
[2] Bauschke, H.; Borwein, J.; Combettes, P., Bregman monotone optimization algorithms, SIAM J. Control Optim., 42, 596-636 (2003) · Zbl 1049.90053
[3] Beck, A.; Ben-Tal, A.; Guttmann-Beck, N.; Tetruashvili, L., The comirror algorithm for solving nonsmooth constrained convex problems, Oper. Res. Lett., 38, 6, 493-498 (2010) · Zbl 1202.90209
[4] Benveniste, A.; Métivier, M.; Priouret, P., Algorithmes adaptatifs et approximations stochastiques. Masson (1987). English translation: Adaptive Algorithms and Stochastic Approximations. Springer, Berlin (1993)
[5] Bregman, LM, The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming, USSR Comput. Math. Math. Phys., 7, 3, 200-217 (1967)
[6] Chapelle, O.; Schölkopf, B.; Zien, A., Semi-Supervised Learning (2006), Cambridge, MA: MIT Press
[7] Chen, Y.; Lan, G.; Ouyang, Y., Optimal primal-dual methods for a class of saddle point problems, SIAM J. Optim., 24, 4, 1779-1814 (2014) · Zbl 1329.90090
[8] Duchi, JC; Bartlett, PL; Wainwright, MJ, Randomized smoothing for stochastic optimization, SIAM J. Optim., 22, 674-701 (2012) · Zbl 1267.65063
[9] Duchi, J.C., Shalev-Shwartz, S., Singer, Y., Tewari, A.: Composite objective mirror descent. In: Proceedings of the Twenty-Third Annual Conference on Computational Learning Theory (2010)
[10] Ermoliev, Y., Stochastic quasigradient methods and their application to system optimization, Stochastics, 9, 1-36 (1983) · Zbl 0512.90079
[11] Gaivoronski, A., Nonstationary stochastic programming problems, Kybernetika, 4, 89-92 (1978)
[12] Ghadimi, S.; Lan, G., Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, I: a generic algorithmic framework, SIAM J. Optim., 22, 1469-1492 (2012) · Zbl 1301.62077
[13] Ghadimi, S.; Lan, G., Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, II: shrinking procedures and optimal algorithms, SIAM J. Optim., 23, 2061-2089 (2013) · Zbl 1293.62167
[14] Ghadimi, S., Lan, G.: Accelerated gradient methods for nonconvex nonlinear and stochastic optimization. Technical report, Department of Industrial and Systems Engineering, University of Florida, Gainesville, FL 32611, USA, June 2013 · Zbl 1335.62121
[15] Goldfarb, D.; Iyengar, G., Robust portfolio selection problems, Math. Oper. Res., 28, 1, 1-38 (2003) · Zbl 1082.90082
[16] Jiang, H., Shanbhag, U.V.: On the solution of stochastic optimization and variational problems in imperfect information regimes. arXiv preprint arXiv:1402.1457 (2014) · Zbl 1356.90097
[17] Kleywegt, AJ; Shapiro, A.; Homem-de-Mello, T., The sample average approximation method for stochastic discrete optimization, SIAM J. Optim., 12, 479-502 (2001) · Zbl 0991.90090
[18] Lan, G., An optimal method for stochastic composite optimization, Math. Program., 133, 1, 365-397 (2012) · Zbl 1273.90136
[19] Lan, G.; Lu, Z.; Monteiro, RDC, Primal-dual first-order methods with \(\mathcal{O}(1/\epsilon)\) iteration-complexity for cone programming, Math. Program., 126, 1-29 (2011)
[20] Lan, G.; Nemirovski, AS; Shapiro, A., Validation analysis of mirror descent stochastic approximation method, Math. Program., 134, 425-458 (2012) · Zbl 1273.90154
[21] Nedić, A.: On stochastic subgradient mirror-descent algorithm with weighted averaging. Technical report (2012) · Zbl 1297.90119
[22] Nemirovski, A.; Juditsky, A.; Lan, G.; Shapiro, A., Robust stochastic approximation approach to stochastic programming, SIAM J. Optim., 19, 4, 1574-1609 (2009) · Zbl 1189.90109
[23] Nemirovski, A.; Shapiro, A., Convex approximations of chance constrained programs, SIAM J. Optim., 17, 4, 969-996 (2006) · Zbl 1126.90056
[24] Nesterov, YE, Introductory Lectures on Convex Optimization: A Basic Course (2004), Boston, MA: Kluwer Academic Publishers · Zbl 1086.90045
[25] Pflug, G., Optimization of Stochastic Models: The Interface Between Simulation and Optimization (1996), Boston: Kluwer · Zbl 0423.62066
[26] Polyak, B., New stochastic approximation type procedures, Avtomat. i Telemekh., 7, 98-107 (1990)
[27] Polyak, B.; Juditsky, A., Acceleration of stochastic approximation by averaging, SIAM J. Control Optim., 30, 838-855 (1992) · Zbl 0762.62022
[28] Polyak, BT, A general method of solving extremum problems, Doklady Akademii Nauk SSSR, 174, 1, 33 (1967)
[29] Robbins, H.; Monro, S., A stochastic approximation method, Ann. Math. Stat., 22, 400-407 (1951) · Zbl 0054.05901
[30] Rockafellar, R.; Uryasev, S., Optimization of conditional value-at-risk, J. Risk, 2, 21-41 (2000)
[31] Ruszczyński, A.; Syski, W., A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems, Math. Program. Study, 28, 113-131 (1986) · Zbl 0597.90064
[32] Schmidt, M., Roux, N.L., Bach, F.: Minimizing finite sums with the stochastic average gradient. Technical report, September 2013 · Zbl 1417.90099
[33] Shalev-Shwartz, S., Singer, Y., Srebro, N.: Pegasos: primal estimated sub-gradient solver for SVM. In: ICML, pp. 807-814 (2007) · Zbl 1211.90239
[34] Shapiro, A., Monte Carlo sampling methods, in: Ruszczyński, A.; Shapiro, A. (eds.), Stochastic Programming (2003), Amsterdam: North-Holland Publishing Company · Zbl 1053.90103
[35] Spall, JC, Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control (2005), New York: Wiley
[36] Teboulle, M., Convergence of proximal-like algorithms, SIAM J. Optim., 7, 1069-1083 (1997) · Zbl 0890.90151
[37] Wang, W.; Ahmed, S., Sample average approximation of expected value constrained stochastic programs, Oper. Res. Lett., 36, 5, 515-519 (2008) · Zbl 1210.90131
[38] Xiao, L., Dual averaging methods for regularized stochastic learning and online optimization, J. Mach. Learn. Res., 11, 2543-2596 (2010) · Zbl 1242.62011