zbMATH — the first resource for mathematics

Variational networks: an optimal control approach to early stopping variational methods for image restoration. (English) Zbl 1434.68626
Summary: We investigate a well-known phenomenon of variational approaches in image processing, in which the best image quality is typically achieved when the gradient flow process is stopped before converging to a stationary point. This paradox originates from a tradeoff between optimization and modeling errors of the underlying variational model, and it holds true even if deep learning methods are used to learn highly expressive regularizers from data. In this paper, we take advantage of this paradox and introduce an optimal stopping time into the gradient flow process, which in turn is learned from data by means of an optimal control approach. After a time discretization, we obtain variational networks, which can be interpreted as a particular type of recurrent neural network. The learned variational networks achieve competitive results for image denoising and image deblurring on a standard benchmark data set. One of the key theoretical results is the development of first- and second-order conditions to verify the optimality of a stopping time. A nonlinear spectral analysis of the gradient of the learned regularizer provides enlightening insights into the resulting regularization properties.
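The early-stopping paradox described in the summary can be reproduced in a few lines. The sketch below is a toy illustration, not the paper's method: it runs an explicit-Euler discretization of the gradient flow for a simple quadratic smoothness energy (standing in for the learned regularizer) on a synthetic noisy image, and records the PSNR of every iterate. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def gradient_flow_denoise(f, u_true, tau=0.1, lam=2.0, T=200):
    """Explicit-Euler discretization u_{k+1} = u_k - tau * grad E(u_k)
    of the gradient flow for the toy energy
        E(u) = (lam/2) * ||D u||^2 + (1/2) * ||u - f||^2,
    a quadratic smoothness regularizer standing in for the learned
    regularizer. Returns the PSNR of every iterate, so the best
    stopping time can be read off afterwards."""
    def grad_reg(u):
        # gradient of the Dirichlet energy: lam * (-discrete Laplacian) u
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        return -lam * lap

    def psnr(u):
        # peak signal-to-noise ratio against the clean image (peak = 1.0)
        return 10.0 * np.log10(1.0 / np.mean((u - u_true) ** 2))

    u, history = f.copy(), []
    for _ in range(T):
        u = u - tau * (grad_reg(u) + (u - f))  # one gradient-flow step
        history.append(psnr(u))
    return np.array(history)

# Synthetic piecewise-constant "image" plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
u_true = (np.add.outer(np.sin(6.0 * x), np.cos(4.0 * x)) > 0).astype(float)
f = u_true + 0.1 * rng.standard_normal(u_true.shape)

history = gradient_flow_denoise(f, u_true)
t_star = int(np.argmax(history))  # empirically optimal stopping index
```

Running the flow to its stationary point oversmooths the piecewise-constant image, so in this setup `history` peaks at an interior index `t_star`: the modeling error of the quadratic regularizer eventually outweighs the optimization progress. This is exactly the tradeoff the paper exploits by learning the stopping time from data instead of picking it by trial and error.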
68U10 Computing methodologies for image processing
34H05 Control problems involving ordinary differential equations
49K15 Optimality conditions for problems involving ordinary differential equations
65L05 Numerical methods for initial value problems involving ordinary differential equations
68T07 Artificial neural networks and deep learning
94A08 Image processing (compression, reconstruction, etc.) in information and communication theory
Full Text: DOI