Fu, Anqi; Zhang, Junzi; Boyd, Stephen
Anderson accelerated Douglas-Rachford splitting. (English) Zbl 1458.90511
SIAM J. Sci. Comput. 42, No. 6, A3560-A3583 (2020).

Summary: We consider the problem of nonsmooth convex optimization with linear equality constraints, where the objective function is accessible only through its proximal operator. This problem arises in many fields, including statistical learning, computational imaging, telecommunications, and optimal control. To solve it, we propose an Anderson accelerated Douglas-Rachford splitting (A2DR) algorithm, which we show either converges globally or provides a certificate of infeasibility/unboundedness under very mild conditions. Applied to a block separable objective, A2DR partially decouples, so its steps may be carried out in parallel, yielding an algorithm that is fast and scalable to multiple processors. We describe an open-source implementation and demonstrate its performance on a wide range of examples.

An illustrative sketch of the accelerated iteration appears after the reference list below.

Cited in 11 Documents

MSC:
90C25 Convex programming
90C53 Methods of quasi-Newton type
49J52 Nonsmooth analysis
65K05 Numerical mathematical programming methods
68W10 Parallel algorithms in computer science
68W15 Distributed algorithms
97N80 Mathematical software, computer programs (educational aspects)

Keywords: Anderson acceleration; nonsmooth convex optimization; parallel and distributed optimization; proximal oracles; stabilization; global convergence; pathological settings

Software: CRAIG; SuperSCS; CVXPY; OSQP; SuperMann; Anderson; GitHub; POGS; SCS; glasso; LSQR

Cite: \textit{A. Fu} et al., SIAM J. Sci. Comput. 42, No. 6, A3560--A3583 (2020; Zbl 1458.90511)

Full Text: DOI arXiv

References:
[1] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, U.S. National Bureau of Standards, Washington, DC, 1964. · Zbl 0171.38503
[2] A. Agrawal, R. Verschueren, S. Diamond, and S. Boyd, A rewriting system for convex optimization problems, J. Control Decis., 5 (2018), pp. 42-60.
[3] A. C. Aitken, On Bernoulli’s numerical solution of algebraic equations, Proc. Roy. Soc. Edinburgh, 46 (1927), pp. 289-305. · JFM 52.0098.05
[4] A. Ali, E. Wong, and J. Z. Kolter, A semismooth Newton method for fast, generic convex programming, in Proceedings of the International Conference on Machine Learning, 2017, pp. 70-79.
[5] D. G. Anderson, Iterative procedures for nonlinear integral equations, J. Assoc. Comput. Mach., 12 (1965), pp. 547-560. · Zbl 0149.11503
[6] N. S. Aybat, Z. Wang, T. Lin, and S. Ma, Distributed linearized alternating direction method of multipliers for composite convex consensus optimization, IEEE Trans. Automat. Control, 63 (2018), pp. 5-20. · Zbl 1390.90420
[7] F. Bach, Acceleration without pain, https://francisbach.com/acceleration-without-pain, Feb. 4, 2020.
[8] O. Banerjee, L. El Ghaoui, and A. d’Aspremont, Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data, J. Mach. Learn. Res., 9 (2008), pp. 485-516. · Zbl 1225.68149
[9] P. Bansode, K. C. Kosaraju, S. R. Wagh, R. Pasumarthy, and N. M. Singh, Accelerated distributed primal-dual dynamics using adaptive synchronization, IEEE Access, 7 (2019), pp. 120424-120440.
[10] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn., 3 (2011), pp. 1-122. · Zbl 1229.90122
[11] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004. · Zbl 1058.90049
[12] C. Brezinski, M. Redivo-Zaglia, and Y. Saad, Shanks sequence transformations and Anderson acceleration, SIAM Rev., 60 (2018), pp. 646-669, https://doi.org/10.1137/17M1120725. · Zbl 1395.65001
[13] X. Chen and C. T. Kelley, Convergence of the EDIIS algorithm for nonlinear equations, SIAM J. Sci. Comput., 41 (2019), pp. A365-A379, https://doi.org/10.1137/18M1171084. · Zbl 1408.65030
[14] A. Defazio, A simple practical accelerated method for finite sums, in Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016, pp. 676-684.
[15] S. Diamond and S. Boyd, CVXPY: A Python-embedded modeling language for convex optimization, J. Mach. Learn. Res., 17 (2016), pp. 1-5. · Zbl 1360.90008
[16] C. Evans, S. Pollock, L. G. Rebholz, and M. Xiao, A Proof that Anderson Acceleration Increases the Convergence Rate in Linearly Converging Fixed Point Methods (But Not in Quadratically Converging Ones), preprint, https://arxiv.org/abs/1810.08455, 2018. · Zbl 1433.65102
[17] H. Fang and Y. Saad, Two classes of multisecant methods for nonlinear acceleration, Numer. Linear Algebra Appl., 16 (2009), pp. 197-221. · Zbl 1224.65134
[18] C. Fougner and S. Boyd, Parameter selection and preconditioning for a graph form solver, in Emerging Applications of Control and Systems Theory, R. Tempo, S. Yurkovich, and P. Misra, eds., Lect. Notes Control Inf. Sci. Proc., Springer, Cham, pp. 41-61. · Zbl 1407.93126
[19] J. Friedman, T. Hastie, and R. Tibshirani, Sparse inverse covariance estimation with the graphical lasso, Biostatistics, 9 (2008), pp. 432-441. · Zbl 1143.62076
[20] J. Friedman, T. Hastie, and R. Tibshirani, A Note on the Group Lasso and a Sparse Group Lasso, preprint, https://arxiv.org/abs/1001.0736, 2010.
[21] B. S. He, H. Yang, and S. L. Wang, Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities, J. Optim. Theory Appl., 106 (2000), pp. 337-356. · Zbl 0997.49008
[22] H. He and D. Han, A distributed Douglas-Rachford splitting method for multi-block convex minimization problems, Adv. Comput. Math., 42 (2016), pp. 27-53. · Zbl 1332.90198
[23] S.-J. Kim, K. Koh, S. Boyd, and D. Gorinevsky, \( \ell_1\) trend filtering, SIAM Rev., 51 (2009), pp. 339-360, https://doi.org/10.1137/070690274.
[24] K. N. Kudin and G. E. Scuseria, A black-box self-consistent field convergence algorithm: One step closer, J. Chem. Phys., 116 (2002), pp. 8255-8261.
[25] Z. Li and J. Li, An Anderson-Chebyshev Mixing Method for Nonlinear Optimization, preprint, https://arxiv.org/abs/1809.02341, 2018.
[26] Y. Liu, E. K. Ryu, and W. Yin, A new use of Douglas-Rachford splitting for identifying infeasible, unbounded, and pathological conic programs, Math. Program., 177 (2019), pp. 225-253. · Zbl 1515.47098
[27] J. Loffeld and C. S. Woodward, Considerations on the implementation and use of Anderson acceleration on distributed memory and GPU-based parallel computers, in Advances in the Mathematical Sciences, Springer, Cham, 2016, pp. 417-436. · Zbl 1353.65045
[28] A. J. Macleod, Acceleration of vector sequences by multi-dimensional \(\Delta^2\) methods, Commun. Appl. Numer. Methods, 2 (1986), pp. 385-392. · Zbl 0608.65003
[29] V. V. Mai and M. Johansson, Nonlinear acceleration of constrained optimization algorithms, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2019, pp. 4903-4907.
[30] M. Mešina, Convergence acceleration for the iterative solution of the equations \(X = AX + f\), Comput. Methods Appl. Mech. Eng., 10 (1977), pp. 165-173. · Zbl 0344.65019
[31] A. Milzarek, X. Xiao, S. Cen, Z. Wen, and M. Ulbrich, A stochastic semismooth Newton method for nonsmooth nonconvex optimization, SIAM J. Optim., 29 (2019), pp. 2916-2948, https://doi.org/10.1137/18M1181249. · Zbl 1434.90108
[32] N. Moehle, E. Busseti, S. Boyd, and M. Wytock, Dynamic energy management, in Large Scale Optimization in Supply Chains and Smart Manufacturing, J. M. Velásquez-Bermúdez, M. Khakifirooz, and M. Fathi, eds., Springer Optim. Appl. 149, Springer, Cham, 2019, pp. 69-126. · Zbl 1446.90158
[33] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Kluwer Academic Publishers, Boston, 2004. · Zbl 1086.90045
[34] J. Nocedal and S. Wright, Numerical Optimization, Springer, New York, 2006. · Zbl 1104.65059
[35] B. O’Donoghue, E. Chu, N. Parikh, and S. Boyd, Conic optimization via operator splitting and homogeneous self-dual embedding, J. Optim. Theory Appl., 169 (2016), pp. 1042-1068. · Zbl 1342.90136
[36] B. O’Donoghue, E. Chu, N. Parikh, and S. Boyd, SCS: Splitting conic solver, version 2.1.2, https://github.com/cvxgrp/scs, November 2019.
[37] C. C. Paige and M. A. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Softw., 8 (1982), pp. 43-71. · Zbl 0478.65016
[38] N. Parikh and S. Boyd, Block splitting for distributed optimization, Math. Program. Comput., 6 (2014), pp. 77-102. · Zbl 1305.90291
[39] N. Parikh and S. Boyd, Proximal algorithms, Found. Trends Optim., 1 (2014), pp. 127-239.
[40] Y. Peng, B. Deng, J. Zhang, F. Geng, W. Qin, and L. Liu, Anderson acceleration for geometry optimization and physics simulation, ACM Trans. Graph., 37 (2018), 42.
[41] S. Pollock and L. Rebholz, Anderson Acceleration for Contractive and Noncontractive Operators, preprint, https://arxiv.org/abs/1909.04638, 2019.
[42] T. Rohwedder and R. Schneider, An analysis for the DIIS acceleration method used in quantum chemistry calculations, J. Math. Chem., 49 (2011), pp. 1889-1914. · Zbl 1252.81135
[43] E. K. Ryu and S. Boyd, A primer on monotone operator methods, Appl. Comput. Math., 15 (2016), pp. 3-43. · Zbl 1342.47066
[44] E. K. Ryu, Y. Liu, and W. Yin, Douglas-Rachford splitting and ADMM for pathological convex optimization, Comput. Optim. Appl., 74 (2019), pp. 747-778. · Zbl 1427.90223
[45] D. Scieur, F. Bach, and A. d’Aspremont, Nonlinear acceleration of stochastic algorithms, in Advances in Neural Information Processing Systems, Curran Associates, Red Hook, NY, 2017, pp. 3982-3991.
[46] D. Scieur, A. d’Aspremont, and F. Bach, Regularized nonlinear acceleration, in Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016, pp. 712-720.
[47] D. Shanks, Non-linear transformations of divergent and slowly convergent sequences, J. Math. and Phys., 34 (1955), pp. 1-42. · Zbl 0067.28602
[48] D. A. Smith, W. F. Ford, and A. Sidi, Extrapolation methods for vector sequences, SIAM Rev., 29 (1987), pp. 199-233, https://doi.org/10.1137/1029042. · Zbl 0622.65003
[49] P. Sopasakis, K. Menounou, and P. Patrinos, SuperSCS: Fast and accurate large-scale conic optimization, in Proceedings of the European Control Conference, 2019, pp. 1500-1505.
[50] B. Stellato, G. Banjac, P. Goulart, A. Bemporad, and S. Boyd, OSQP: An operator splitting solver for quadratic programs, in Proceedings of the UKACC International Conference on Control, 2018, p. 339.
[51] A. Themelis and P. Patrinos, SuperMann: A superlinearly convergent algorithm for finding fixed points of nonexpansive operators, IEEE Trans. Automat. Control, 64 (2019), pp. 4875-4890. · Zbl 1482.49019
[52] A. Toth, J. A. Ellis, T. Evans, S. Hamilton, C. T. Kelley, R. Pawlowski, and S. Slattery, Local improvement results for Anderson acceleration with inaccurate function evaluations, SIAM J. Sci. Comput., 39 (2017), pp. S47-S65, https://doi.org/10.1137/16M1080677. · Zbl 1422.65079
[53] A. Toth and C. T. Kelley, Convergence analysis for Anderson acceleration, SIAM J. Numer. Anal., 53 (2015), pp. 805-819, https://doi.org/10.1137/130919398. · Zbl 1312.65083
[54] J. Tuck, S. Barratt, and S. Boyd, A Distributed Method for Fitting Laplacian Regularized Stratified Models, preprint, https://arxiv.org/abs/1904.12017, 2019.
[55] H. F. Walker and P. Ni, Anderson acceleration for fixed-point iterations, SIAM J. Numer. Anal., 49 (2011), pp. 1715-1735, https://doi.org/10.1137/10078356X. · Zbl 1254.65067
[56] E. Wei and A. Ozdaglar, Distributed alternating direction method of multipliers, in Proceedings of the IEEE Conference on Decision and Control, 2012, pp. 5445-5450.
[57] P. Wynn, On a device for computing the \(e_m(S_n)\) transformation, Math. Comp., 10 (1956), pp. 91-96. · Zbl 0074.04601
[58] M. Wytock, P.-W. Wang, and J. Z. Kolter, Convex Programming with Fast Proximal and Linear Operators, preprint, https://arxiv.org/abs/1511.04815, 2015.
[59] X. Xiao, Y. Li, Z. Wen, and L. Zhang, A regularized semi-smooth Newton method with projection steps for composite convex programs, J. Sci. Comput., 76 (2018), pp. 364-389. · Zbl 1394.90534
[60] Z. Xu, G. Taylor, H. Li, M. Figueiredo, X. Yuan, and T. Goldstein, Adaptive consensus ADMM for distributed optimization, in Proceedings of the International Conference on Machine Learning, 2017, pp. 3841-3850.
[61] J. Zhang, B. O’Donoghue, and S. Boyd, Globally Convergent Type-I Anderson Acceleration for Non-smooth Fixed-Point Iterations, preprint, https://arxiv.org/abs/1808.03971, 2018.
[62] J. Zhang, Y. Peng, W. Ouyang, and B. Deng, Accelerating ADMM for efficient simulation and optimization, ACM Trans. Graph., 38 (2019), 163.
[63] J. Zhang, C. A. Uribe, A. Mokhtari, and A. Jadbabaie, Achieving acceleration in distributed optimization via direct discretization of the heavy-ball ODE, in Proceedings of the American Control Conference, 2019, pp. 3408-3413.
[64] Y. Zhang and M. M. Zavlanos, A consensus-based distributed augmented Lagrangian method, in Proceedings of the IEEE Conference on Decision and Control, 2018, pp. 1763-1768.
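Illustrative sketch: The paper's problem class can be written as minimizing \(\sum_{i=1}^N f_i(x_i)\) subject to \(\sum_{i=1}^N A_i x_i = b\), where each convex \(f_i\) is accessed only through its proximal operator \(\mathrm{prox}_{tf_i}(v) = \operatorname{argmin}_x \big( f_i(x) + \tfrac{1}{2t} \|x - v\|_2^2 \big)\). Below is a minimal NumPy sketch of the core idea: Douglas-Rachford splitting driven to a fixed point by type-II Anderson acceleration with a residual-based safeguard. This is an illustration under stated assumptions, not the authors' open-source implementation: the test instance (basis pursuit, minimize \(\|x\|_1\) subject to \(Ax = b\), chosen because both proximal steps have closed forms) and the simple acceptance rule for accelerated iterates are choices made here, whereas the paper's stabilization scheme is more elaborate.

import numpy as np

# Minimal sketch of Anderson accelerated Douglas-Rachford splitting on
# basis pursuit: minimize ||x||_1 subject to Ax = b. NOT the authors'
# package; the safeguard is a simplified stand-in for their scheme.
rng = np.random.default_rng(0)
m_rows, n = 30, 100
A = rng.standard_normal((m_rows, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true

AAt = A @ A.T  # cached Gram matrix for the affine projection

def soft_threshold(v, t):
    # prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project(v):
    # prox of the indicator of {x : Ax = b}: v - A^T (A A^T)^{-1} (A v - b)
    return v - A.T @ np.linalg.solve(AAt, A @ v - b)

def dr_map(z, t=1.0):
    # One Douglas-Rachford pass: z -> z + prox_g(2 prox_f(z) - z) - prox_f(z)
    x = soft_threshold(z, t)
    w = project(2.0 * x - z)
    return z + w - x

mem = 10  # Anderson memory (number of stored iterate/value pairs)
z = np.zeros(n)
Z_hist, F_hist = [z], [dr_map(z)]
for k in range(500):
    R = np.column_stack([f - zz for f, zz in zip(F_hist, Z_hist)])  # residuals
    if R.shape[1] > 1:
        # Type-II Anderson step in Walker-Ni difference form: least squares
        # on residual differences, then extrapolate the stored map values.
        dR = np.diff(R, axis=1)
        dF = np.diff(np.column_stack(F_hist), axis=1)
        gamma = np.linalg.lstsq(dR, R[:, -1], rcond=None)[0]
        z_aa = F_hist[-1] - dF @ gamma
    else:
        z_aa = F_hist[-1]
    # Safeguard: accept the accelerated point only if it reduces the
    # fixed-point residual; otherwise fall back to the plain splitting step.
    if np.linalg.norm(dr_map(z_aa) - z_aa) < np.linalg.norm(R[:, -1]):
        z = z_aa
    else:
        z = F_hist[-1]
    Z_hist.append(z)
    F_hist.append(dr_map(z))
    Z_hist, F_hist = Z_hist[-(mem + 1):], F_hist[-(mem + 1):]
    if np.linalg.norm(F_hist[-1] - Z_hist[-1]) < 1e-8:
        break

x = soft_threshold(z, 1.0)  # recover the solution from the fixed point
print(f"iterations: {k}, feasibility: {np.linalg.norm(A @ x - b):.2e}")

At a fixed point \(z^\star\) of the splitting map, \(x^\star = \mathrm{prox}_{tf}(z^\star)\) solves the problem; the Anderson step extrapolates over the last few iterates to reach that fixed point in far fewer passes, and a safeguard of this kind is what allows global convergence guarantees of the sort the paper establishes.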