First-order methods in optimization.

Zbl 1384.65033

This book provides a comprehensive study of simple optimization methods that exploit only function values and gradients of the functions comprising the model, and therefore have low per-iteration cost and modest memory requirements. The first seven chapters lay the theoretical groundwork, reviewing vector spaces, extended real-valued functions, subgradients, conjugate functions, smoothness and strong convexity, the proximal operator, and symmetric spectral functions.

Chapter 8 opens with primal and dual projected subgradient methods, including their stochastic and incremental variants. The mirror descent method, a generalization of the projected subgradient method to the non-Euclidean setting, is presented in Chapter 9. The proximal gradient method and several of its variants and extensions are covered in Chapter 10. Chapter 11 is devoted to the block proximal gradient method, an example of a functional decomposition method. Dual-based proximal gradient methods are presented in Chapter 12. Chapter 13 considers the generalized conditional gradient method, which avoids evaluating the orthogonal projection operator at each iteration. Chapter 14 discusses the alternating minimization method, in which the blocks are visited cyclically and the chosen block is updated to a minimizer of the objective with respect to that block. Finally, Chapter 15 investigates the alternating direction method of multipliers, obtained by replacing the exact minimization in the primal update step with one iteration of the alternating minimization method.
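To make the proximal-operator machinery concrete, here is a minimal sketch of a proximal gradient iteration for a composite objective F(x) = f(x) + g(x), with f smooth and g(x) = λ|x|, whose proximal operator is soft-thresholding. The particular objective, step size, and iteration count below are illustrative assumptions, not taken from the book.

```python
def soft_threshold(v, t):
    """Proximal operator of t*|.| (soft-thresholding): shrinks v toward 0 by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def proximal_gradient(grad_f, prox_g, x0, step, iters=200):
    """Iterate x_{k+1} = prox_{step*g}(x_k - step * grad_f(x_k))."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Hypothetical example: minimize 0.5*(x - 3)^2 + 1.0*|x|.
# Setting the optimality condition x - 3 + lam*sign(x) = 0 gives x* = 2.
lam = 1.0
x_star = proximal_gradient(
    grad_f=lambda x: x - 3.0,                          # gradient of the smooth part
    prox_g=lambda v, s: soft_threshold(v, s * lam),    # prox of the nonsmooth part
    x0=0.0,
    step=0.5,
)
```

With step size 0.5 the iterates contract toward the fixed point x = 2, illustrating why each iteration only needs one gradient evaluation and one (often cheap, closed-form) proximal evaluation.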

Reviewer: Hang Lau (Montréal)

##### MSC:

| Code | Classification |
| --- | --- |
| 65K05 | Numerical mathematical programming methods |
| 65-02 | Research exposition (monographs, survey articles) pertaining to numerical analysis |
| 90C52 | Methods of reduced gradient type |
| 90C25 | Convex programming |