zbMATH — the first resource for mathematics

Examples
Geometry
    Search for the term Geometry in any field. Queries are case-insensitive.
Funct*
    Wildcard queries are marked with * (e.g. functions, functorial). Otherwise the search is exact.
"Topological group"
    Phrases (multi-word terms) must be set in "straight quotation marks".
au: Bourbaki & ti: Algebra
    Search by author and title. The and-operator & is the default and can be omitted.
Chebyshev | Tschebyscheff
    The or-operator | finds documents containing Chebyshev or Tschebyscheff.
"Quasi* map*" py: 1989
    py: 1989 restricts the results to documents published in 1989.
so: Eur* J* Mat* Soc* cc: 14
    Search for publications in a particular source (so) with a Mathematics Subject Classification code (cc) in 14.
"Partial diff* eq*" ! elliptic
    The not-operator ! excludes all results containing the word elliptic.
dt: b & au: Hilbert
    The document type is set to books; alternatively: j for journal articles, a for book articles.
py: 2000-2015 cc: (94A | 11T)
    Number ranges are accepted. Terms can be grouped within (parentheses).
la: chinese
    Find documents in a given language. ISO 639-1 language codes can also be used.
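These filters compose freely; an illustrative combined query (not taken from the list above) is

    dt: j & au: Hilbert & py: 1900-1910 ! geometry

which finds journal articles by Hilbert published between 1900 and 1910 that do not contain the word geometry.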

Operators
a & b      logical and
a | b      logical or
!ab        logical not
abc*       right wildcard
"ab c"     phrase
(ab c)     parentheses
Fields
any    anywhere
an     internal document identifier
au     author, editor
ai     internal author identifier
ti     title
la     language
so     source
ab     review, abstract
py     publication year
rv     reviewer
cc     MSC code
ut     uncontrolled term
dt     document type (j: journal article; b: book; a: book article)
Convergence rate analysis of nonquadratic proximal methods for convex and linear programming. (English) Zbl 0845.90099
Summary: The ϕ-divergence proximal method is an extension of the proximal minimization algorithm in which the usual quadratic proximal term is replaced by a member of a class of convex statistical distances, called ϕ-divergences. We study the convergence rate of this nonquadratic proximal method for convex and, in particular, linear programming. We identify a class of ϕ-divergences for which superlinear convergence is attained, both for optimization problems with objectives that are strongly convex at the optimum and for linear programming problems, when the regularization parameters tend to zero. We also prove that, with regularization parameters bounded away from zero, convergence is at least linear for a wider class of ϕ-divergences when the method is applied to the same kinds of problems. We further analyze the associated class of augmented Lagrangian methods for convex programming with nonquadratic penalty terms, and prove convergence of the dual sequences generated by these methods for linear programming problems under a weak nondegeneracy assumption.
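For readers who want to experiment, here is a minimal numerical sketch of one member of this method class: the iteration x^{k+1} = argmin_{x > 0} f(x) + lambda_k * d_phi(x, x^k), where d_phi(x, y) = sum_j y_j * phi(x_j / y_j) and phi(t) = t log t - t + 1 (the Kullback-Leibler kernel). The toy objective, the choice of phi, and the use of SciPy's L-BFGS-B solver for each subproblem are illustrative assumptions, not the paper's setup.

    import numpy as np
    from scipy.optimize import minimize

    def kl(x, y):
        # d_phi(x, y) = sum_j y_j * phi(x_j / y_j) with phi(t) = t*log(t) - t + 1,
        # i.e. the Kullback-Leibler divergence between positive vectors.
        return np.sum(x * np.log(x / y) - x + y)

    def phi_prox_step(f, y, lam):
        # One proximal step: x+ = argmin_{x > 0} f(x) + lam * d_phi(x, y).
        # The subproblem is solved inexactly with L-BFGS-B (an assumption;
        # the paper analyzes the exact iteration).
        obj = lambda x: f(x) + lam * kl(x, y)
        res = minimize(obj, y, method="L-BFGS-B",
                       bounds=[(1e-10, None)] * len(y))
        return res.x

    # Toy problem: f(x) = ||x - c||^2 over x >= 0, with the nonnegativity
    # constraint active at the optimum (the solution is [1.0, 0.0]).
    c = np.array([1.0, -0.5])
    f = lambda x: np.sum((x - c) ** 2)

    x = np.ones(2)
    for k in range(30):
        lam = 0.5 ** k  # regularization parameters tending to zero
        x = phi_prox_step(f, x, lam)
    print(x)  # close to [1.0, 0.0]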
MSC:
90C25  Convex programming
90C05  Linear programming
90C30  Nonlinear programming