zbMATH — the first resource for mathematics

Deep neural network approach to forward-inverse problems. (English) Zbl 1442.35474
Summary: In this paper, we construct approximate solutions of differential equations (DEs) using a deep neural network (DNN). Furthermore, we present an architecture that includes the process of finding model parameters from experimental data, i.e., the inverse problem. That is, we provide a unified DNN architecture that approximates an analytic solution and its model parameters simultaneously. The architecture consists of a feed-forward DNN with nonlinear activation functions depending on the DEs, automatic differentiation [A. G. Baydin et al., J. Mach. Learn. Res. 18, Paper No. 153, 43 p. (2018; Zbl 06982909)], reduction of order, and a gradient-based optimization method. We also prove theoretically that the proposed DNN solution converges to an analytic solution in a suitable function space for fundamental DEs. Finally, we perform numerical experiments to validate the robustness of our simple DNN architecture for the 1D transport equation, the 2D heat equation, the 2D wave equation, and the Lotka-Volterra system.
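The forward-inverse idea summarized above can be sketched in a toy setting (all names, sizes, and the test problem here are illustrative choices, not the authors' code): learn a trial solution of u'(t) = -k·u(t), u(0) = 1, while simultaneously estimating the unknown parameter k from observations of the exact solution exp(-2t). The ansatz u(t) = 1 + t·N(t) enforces the initial condition (a Lagaris-style trial function), the derivative of the one-hidden-layer network N is written in closed form as a stand-in for automatic differentiation, and finite-difference gradient descent with backtracking stands in for Adam.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8                                     # hidden width of N (illustrative)
K_TRUE = 2.0                              # parameter generating the "data"
t_col = np.linspace(0.0, 1.0, 16)         # collocation points for the DE residual
t_obs = np.linspace(0.0, 1.0, 8)          # observation points (inverse part)
u_obs = np.exp(-K_TRUE * t_obs)           # synthetic observations

def unpack(p):
    """Split the flat parameter vector into network weights and the DE parameter k."""
    W1, b1, W2 = p[:H], p[H:2 * H], p[2 * H:3 * H]
    b2, k = p[3 * H], p[3 * H + 1]
    return W1, b1, W2, b2, k

def u_du(t, p):
    """Trial solution u(t) = 1 + t*N(t) and its exact time derivative."""
    W1, b1, W2, b2, _ = unpack(p)
    a = np.tanh(np.outer(t, W1) + b1)     # hidden activations, shape (n, H)
    N = a @ W2 + b2                       # network output N(t)
    dN = ((1.0 - a**2) * W1) @ W2         # dN/dt by the chain rule
    return 1.0 + t * N, N + t * dN

def loss(p):
    """DE residual at collocation points plus data misfit at observation points."""
    _, _, _, _, k = unpack(p)
    u_c, du_c = u_du(t_col, p)
    u_d, _ = u_du(t_obs, p)
    res = du_c + k * u_c                  # residual of u' + k*u = 0
    return np.mean(res**2) + np.mean((u_d - u_obs)**2)

def grad(p, eps=1e-6):
    """Central finite differences: a cheap stand-in for reverse-mode autodiff."""
    g = np.empty_like(p)
    for i in range(p.size):
        q, r = p.copy(), p.copy()
        q[i] += eps
        r[i] -= eps
        g[i] = (loss(q) - loss(r)) / (2.0 * eps)
    return g

p = 0.1 * rng.standard_normal(3 * H + 2)  # weights of N plus k (last entry)
loss_init = loss(p)
for _ in range(400):
    g, step = grad(p), 0.2
    while step > 1e-8 and loss(p - step * g) >= loss(p):
        step *= 0.5                       # backtracking keeps the loss monotone
    if loss(p - step * g) < loss(p):
        p = p - step * g
loss_final = loss(p)
k_est = p[-1]                             # recovered model parameter
```

Minimizing the combined residual-plus-data loss drives the network toward the forward solution while the residual term simultaneously pulls k toward the value consistent with the observations; this is the same mechanism, in miniature, as the unified architecture described in the summary.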
35Q92 PDEs in connection with biology, chemistry and other natural sciences
Adam; DiffSharp; PyTorch
Full Text: DOI
[1] W. Arloff; K. R. B. Schmitt; L. J. Venstrom, A parameter estimation method for stiff ordinary differential equations using particle swarm optimisation, Int. J. Comput. Sci. Math., 9, 419-432 (2018)
[2] A. G. Baydin, B. A. Pearlmutter, A. A. Radul and J. M. Siskind, Automatic differentiation in machine learning: A survey, J. Mach. Learn. Res., 18 (2018), Paper No. 153, 43pp. · Zbl 06982909
[3] J. Berg and K. Nyström, Neural network augmented inverse problems for PDEs, preprint, arXiv: 1712.09685.
[4] J. Berg; K. Nyström, A unified deep artificial neural network approach to partial differential equations in complex geometries, Neurocomputing, 317, 28-41 (2018)
[5] G. Chavent, Nonlinear Least Squares for Inverse Problems. Theoretical Foundations and Step-by-Step Guide for Applications, Scientific Computation, Springer, New York, 2009.
[6] N. E. Cotter, The Stone-Weierstrass theorem and its application to neural networks, IEEE Trans. Neural Networks, 1, 290-295 (1990)
[7] R. Courant; K. Friedrichs; H. Lewy, On the partial difference equations of mathematical physics, IBM J. Res. Develop., 11, 215-234 (1967) · Zbl 0145.40402
[8] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems, 2, 303-314 (1989) · Zbl 0679.94019
[9] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, 19, American Mathematical Society, Providence, RI, 2010. · Zbl 1194.35001
[10] G. E. Fasshauer, Solving partial differential equations by collocation with radial basis functions, in: Surface Fitting and Multiresolution Methods (Proc. Chamonix 1996), Vanderbilt University Press, 1997, 1-8.
[11] K. Hornik; M. Stinchcombe; H. White, Multilayer feedforward networks are universal approximators, Neural Networks, 2, 359-366 (1989) · Zbl 1383.92015
[12] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980.
[13] I. E. Lagaris; A. Likas; D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Trans. Neural Networks, 9, 987-1000 (1998)
[14] I. E. Lagaris; A. C. Likas; D. G. Papageorgiou, Neural-network methods for boundary value problems with irregular boundaries, IEEE Trans. Neural Networks, 11, 1041-1049 (2000)
[15] K. Levenberg, A method for the solution of certain non-linear problems in least squares, Quart. Appl. Math., 2, 164-168 (1944) · Zbl 0063.03501
[16] L. Jianyu; L. Siwei; Q. Yingjian; H. Yaping, Numerical solution of elliptic partial differential equation using radial basis function neural networks, Neural Networks, 16, 729-734 (2003)
[17] J. Li and X. Li, Particle swarm optimization iterative identification algorithm and gradient iterative identification algorithm for Wiener systems with colored noise, Complexity, 2018 (2018), 8pp. · Zbl 1398.93346
[18] X. Li, Simultaneous approximations of multivariate functions and their derivatives by neural networks with one hidden layer, Neurocomputing, 12, 327-343 (1996) · Zbl 0861.41013
[19] D. W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Indust. Appl. Math., 11, 431-441 (1963) · Zbl 0112.10505
[20] W. S. McCulloch; W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biophys., 5, 115-133 (1943) · Zbl 0063.03860
[21] A. Paszke et al., Automatic differentiation in PyTorch, NIPS 2017 Autodiff Workshop (2017).
[22] M. Raissi; P. Perdikaris; G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys., 378, 686-707 (2019) · Zbl 1415.68175
[23] S. J. Reddi, S. Kale and S. Kumar, On the convergence of Adam and beyond, preprint, arXiv: 1904.09237.
[24] S. A. Sarra, Adaptive radial basis function methods for time dependent partial differential equations, Appl. Numer. Math., 54, 79-94 (2005) · Zbl 1069.65109
[25] P. Tsilifis, I. Bilionis, I. Katsounaros and N. Zabaras, Computationally efficient variational approximations for Bayesian inverse problems, J. Verif. Valid. Uncert., 1 (2016), 13pp.
[26] F. Yaman, V. G. Yakhno and R. Potthast, A survey on inverse problems for applied sciences, Math. Probl. Eng., 2013 (2013), 19pp. · Zbl 1299.35321
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.