Self-adaptive inexact proximal point methods. (English) Zbl 1147.90414
Summary: We propose a class of self-adaptive proximal point methods suitable for degenerate optimization problems where multiple minimizers may exist, or where the Hessian may be singular at a local minimizer. If the proximal regularization parameter has the form μ(𝐱) = β‖∇f(𝐱)‖^η, where η ∈ [0, 2) and β > 0 is a constant, we obtain convergence to the set of minimizers that is linear for η = 0 and β sufficiently small, superlinear for η ∈ (0, 1), and at least quadratic for η ∈ [1, 2). Two different acceptance criteria for an approximate solution to the proximal problem are analyzed. These criteria are expressed in terms of the gradient of the proximal function, the gradient of the original function, and the iteration difference. With either acceptance criterion, the convergence results are analogous to those of the exact iterates. Preliminary numerical results are presented using some ill-conditioned CUTE test problems.
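
To make the iteration concrete, below is a minimal Python sketch (not the authors' implementation) of a self-adaptive inexact proximal point loop: the regularization parameter is taken as μ(𝐱) = β‖∇f(𝐱)‖^η, and each proximal subproblem is solved only approximately. The inner solver (SciPy's BFGS), the tolerances, and the simplified inner stopping rule are assumptions made for this illustration; they stand in for, and do not reproduce, the acceptance criteria analyzed in the paper.

# Illustrative sketch of a self-adaptive inexact proximal point iteration.
# Assumptions: SciPy's BFGS as the inner solver, ad hoc tolerances, and a
# simplified inner stopping rule in place of the paper's acceptance criteria.
import numpy as np
from scipy.optimize import minimize

def self_adaptive_prox(f, grad, x0, beta=1.0, eta=0.5, tol=1e-8, max_outer=100):
    """Minimize f from x0, regularizing each step with mu(x) = beta*||grad f(x)||**eta."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:                 # approximately stationary: stop
            break
        mu = beta * gnorm ** eta         # self-adaptive regularization parameter

        # Proximal subproblem: minimize f(y) + (mu/2) * ||y - x||^2 over y.
        def prox_obj(y):
            return f(y) + 0.5 * mu * np.dot(y - x, y - x)
        def prox_grad(y):
            return grad(y) + mu * (y - x)

        # Inexact solve: the inner solver stops once the proximal gradient is
        # small relative to ||grad f(x)|| (a crude stand-in for the paper's
        # acceptance criteria, which also involve the iteration difference).
        inner_tol = min(0.5, gnorm) * gnorm
        res = minimize(prox_obj, x, jac=prox_grad, method="BFGS",
                       options={"gtol": inner_tol})
        x = res.x
    return x

# Usage on a degenerate problem, f(x) = ||x||^4 (singular Hessian at the minimizer 0):
f = lambda x: np.dot(x, x) ** 2
grad = lambda x: 4.0 * np.dot(x, x) * x
print(self_adaptive_prox(f, grad, x0=np.array([1.0, -2.0])))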
MSC:
90C55 Methods of successive quadratic programming type
Software:
CG_DESCENT
References:
[1]Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environment. ACM Trans. Math. Softw. 21, 123–160 (1995) · Zbl 0886.65058 · doi:10.1145/200979.201043
[2]Combettes, P.L., Pennanen, T.: Proximal methods for cohypomonotone operators. SIAM J. Control 43, 731–742 (2004) · Zbl 1078.47003 · doi:10.1137/S0363012903427336
[3]Fan, J., Yuan, Y.: On the convergence of the Levenberg-Marquardt method without nonsingularity assumption. Computing 74, 23–39 (2005) · Zbl 1076.65047 · doi:10.1007/s00607-004-0083-1
[4]Ha, C.D.: A generalization of the proximal point algorithm. SIAM J. Control 28, 503–512 (1990) · Zbl 0699.49037 · doi:10.1137/0328029
[5]Hager, W.W., Zhang, H.: CG_DESCENT user’s guide. Tech. Rep., Department of Mathematics, University of Florida, Gainesville (2004)
[6]Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005) · Zbl 1093.90085 · doi:10.1137/030601880
[7]Hager, W.W., Zhang, H.: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32, 113–137 (2006) · Zbl 05458419 · doi:10.1145/1132973.1132979
[8]Humes, C., Silva, P.: Inexact proximal point algorithms and descent methods in optimization. Optim. Eng. 6, 257–271 (2005) · Zbl 1080.90072 · doi:10.1007/s11081-005-6798-9
[9]Iusem, A.N., Pennanen, T., Svaiter, B.F.: Inexact variants of the proximal point algorithm without monotonicity. SIAM J. Optim. 13, 1080–1097 (2003) · Zbl 1053.90134 · doi:10.1137/S1052623401399587
[10]Kaplan, A., Tichatschke, R.: Proximal point methods and nonconvex optimization. J. Glob. Optim. 13, 389–406 (1998) · Zbl 0916.90224 · doi:10.1023/A:1008321423879
[11]Li, D., Fukushima, M., Qi, L., Yamashita, N.: Regularized Newton methods for convex minimization problems with singular solutions. Comput. Optim. Appl. 28, 131–147 (2004) · Zbl 1056.90111 · doi:10.1023/B:COAP.0000026881.96694.32
[12]Luque, F.J.: Asymptotic convergence analysis of the proximal point algorithm. SIAM J. Control 22, 277–293 (1984) · Zbl 0533.49028 · doi:10.1137/0322019
[13]Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Française Inf. Rech. Opér. Sér. R-3 4, 154–158 (1970)
[14]Martinet, B.: Détermination approchée d’un point fixe d’une application pseudo-contractante. C.R. Séances Acad. Sci. 274, 163–165 (1972)
[15]Pennanen, T.: Local convergence of the proximal point algorithm and multiplier methods without monotonicity. Math. Oper. Res. 27, 170–191 (2002) · Zbl 1082.90582 · doi:10.1287/moor.27.1.170.331
[16]Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97–116 (1976) · Zbl 0402.90076 · doi:10.1287/moor.1.2.97
[17]Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control 14, 877–898 (1976) · Zbl 0358.90053 · doi:10.1137/0314056
[18]Tseng, P.: Error bounds and superlinear convergence analysis of some Newton-type methods in optimization. In: Pillo, G.D., Giannessi, F. (eds.) Nonlinear Optimization and Related Topics, pp. 445–462. Kluwer, Dordrecht (2000)
[19]Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969) · Zbl 0177.20603 · doi:10.1137/1011036
[20]Wolfe, P.: Convergence conditions for ascent methods II: some corrections. SIAM Rev. 13, 185–188 (1971) · Zbl 0216.26901 · doi:10.1137/1013035
[21]Yamashita, N., Fukushima, M.: The proximal point algorithm with genuine superlinear convergence for the monotone complementarity problem. SIAM J. Optim. 11, 364–379 (2000) · Zbl 1004.47031 · doi:10.1137/S105262349935949X
[22]Yamashita, N., Fukushima, M.: On the rate of convergence of the Levenberg-Marquardt method. In: Topics in Numerical Analysis. Comput. Suppl., vol. 15, pp. 239–249. Springer, New York (2001)