zbMATH — the first resource for mathematics

Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity. (English) Zbl 1421.90115
90C25 Convex programming
90C52 Methods of reduced gradient type
65K15 Numerical methods for variational inequalities and related problems
Full Text: DOI arXiv
[1] J. Bolte, T. P. Nguyen, J. Peypouquet, and B. W. Suter, From error bounds to the complexity of first-order descent methods for convex functions, Math. Program., 165 (2017), pp. 471–507. · Zbl 1373.90076
[2] S. Bubeck, Convex optimization: Algorithms and complexity, Found. Trends Mach. Learn., 8 (2015), pp. 231–357. · Zbl 1365.90196
[3] J. Burke and M. Ferris, Weak sharp minima in mathematical programming, SIAM J. Control Optim., 31 (1993), pp. 1340–1359. · Zbl 0791.90040
[4] F. Clarke, Optimization and Nonsmooth Analysis, Classics Appl. Math., SIAM, Philadelphia, PA, 1990.
[5] D. Davis and B. Grimmer, Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems, preprint, 2017.
[6] D. Drusvyatskiy and A. S. Lewis, Error bounds, quadratic growth, and linear convergence of proximal methods, Math. Oper. Res., 43 (2018), pp. 919–948.
[7] J. Duchi and Y. Singer, Efficient online and batch learning using forward backward splitting, J. Mach. Learn. Res., 10 (2009), pp. 2899–2934. · Zbl 1235.62151
[8] B. Grimmer, Radial subgradient method, SIAM J. Optim., 28 (2018), pp. 459–469.
[9] E. Hazan and S. Kale, Beyond the regret minimization barrier: Optimal algorithms for stochastic strongly-convex optimization, J. Mach. Learn. Res., 15 (2014), pp. 2489–2512. · Zbl 1319.90050
[10] S. Lacoste-Julien, M. Schmidt, and F. Bach, A Simpler Approach to Obtaining an \(O(1/t)\) Convergence Rate for the Projected Stochastic Subgradient Method, preprint, 2012.
[11] H. Lu, “Relative-Continuity” for Non-Lipschitz Non-Smooth Convex Optimization Using Stochastic (or Deterministic) Mirror Descent, preprint, 2017.
[12] Z.-Q. Luo and P. Tseng, Error bounds and convergence analysis of feasible descent methods: A general approach, Ann. Oper. Res., 46 (1993), pp. 157–178. · Zbl 0793.90076
[13] I. Necoara, Y. Nesterov, and F. Glineur, Linear convergence of first order methods for non-strongly convex optimization, Math. Program., 175 (2018), pp. 69–107. · Zbl 1412.90111
[14] A. Nedić and S. Lee, On stochastic subgradient mirror-descent algorithm with weighted averaging, SIAM J. Optim., 24 (2014), pp. 84–107.
[15] A. Nemirovskii and Y. Nesterov, Optimal methods of smooth convex minimization, USSR Comput. Math. Math. Phys., 25 (1985), pp. 21–30.
[16] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, 1st ed., Springer, New York, 2004. · Zbl 1086.90045
[17] A. Rakhlin, O. Shamir, and K. Sridharan, Making gradient descent optimal for strongly convex stochastic optimization, in Proceedings of the 29th International Conference on Machine Learning, ICML’12, Omnipress, Madison, WI, 2012, pp. 1571–1578.
[18] J. Renegar, “Efficient” subgradient methods for general convex optimization, SIAM J. Optim., 26 (2016), pp. 2649–2676. · Zbl 1351.90129
[19] H. Robbins and S. Monro, A stochastic approximation method, Ann. Math. Stat., 22 (1951), pp. 400–407. · Zbl 0054.05901
[20] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter, Pegasos: Primal estimated sub-gradient solver for SVM, Math. Program., 127 (2011), pp. 3–30. · Zbl 1211.90239
[21] N. Z. Shor, Minimization Methods for Non-Differentiable Functions, Springer Ser. Comput. Math. 3, Springer, Berlin, 1985, pp. 22–47.
[22] N. Z. Shor, Subgradient and \(\epsilon\)-subgradient methods, in Nondifferentiable Optimization and Polynomial Problems, Nonconvex Optim. Appl. 24, Springer, Boston, MA, 1998, pp. 35–70.
[23] J. Yu, S. Vishwanathan, S. Günter, and N. N. Schraudolph, A quasi-Newton approach to nonsmooth convex optimization problems in machine learning, J. Mach. Learn. Res., 11 (2010), pp. 1145–1200. · Zbl 1242.90296
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.