
Molecular dynamics. With deterministic and stochastic numerical methods. (English) Zbl 1351.82001

Interdisciplinary Applied Mathematics 39. Cham: Springer (ISBN 978-3-319-16374-1/hbk; 978-3-319-16375-8/ebook). xxii, 443 p. (2015).
Computational molecular dynamics (MD) is a simulation method for studying the physical movement of atoms and molecules, i.e. an \(N\)-body problem, with the aim of determining macroscopic properties and gaining insight into molecular motion on an atomic scale. The method calculates the time-dependent behavior of a molecular system. Molecular systems generally consist of a vast number of particles, and in the majority of cases it is impossible to determine the properties of such complex systems analytically. Numerical methods are therefore required for MD simulation.
In the preface the authors point out that MD is used by physicists, chemists, biologists, engineers, and drug designers. The book presents MD from a mathematical perspective without being too rigorous, i.e., proofs are largely omitted, the subject is not cast into theorems, and assumptions are chosen so as to simplify the presentation. Algorithms, examples, and the corresponding simulations are in the foreground, which makes the monograph well suited for interdisciplinary courses. The authors do not present a comprehensive treatment of MD, but the material provides a good basis for developing new or improved algorithms and for understanding other methods.
The book (xxii + 443 pages) consists of a preface, a list of notations, an index of formulations, methods and algorithms, 8 chapters, 3 appendices, 403 references, and an index of relevant terms. The chapters are divided into sections and subsections, and each chapter closes with exercises; solutions to the exercises are not provided. The authors have prepared a number of codes for the presented algorithms, based on the freely available software packages MATLAB, Octave, and MD.M, including hints on how to use them to simulate few-atom models (up to a dozen atoms) or models with several hundred atoms. Additionally, some open and commercial software packages for specialists are listed.
The atoms and molecules interact for a fixed period of time. The trajectories of the interacting particles are computed taking into account the forces between the particles and their potential energies. The authors describe the models in the form of deterministic or stochastic differential equations, discuss the appropriate numerical methods, and explain the simulation results. The reader should have some knowledge of numerical analysis, differential equations, probability, and classical and statistical mechanics. Additional texts on biological modeling, materials science, and engineering are listed in the preface and the introductory chapter. In particular, the monograph of M. Griebel et al. [Numerical simulation in molecular dynamics. Numerics, algorithms, parallelization, applications. Berlin: Springer (2007; Zbl 1131.76001)] is recommended to readers interested in parallel computing for large-scale molecular dynamics simulation.
The aim of Chapter 1, the Introduction, is to help the reader understand the dynamics of a molecular system, a prerequisite for the subsequent chapters on algorithms and statistical foundations, because the applications dictate the choice of methods. The authors point out that MD does not merely describe the motion of particles; the aim is to obtain results such as “quantitative predictions of molecular size and shape, flexibilities, interactions with other molecules, behavior under pressure”, and the changes of a molecule over time, which have to be understood in terms of averaged quantities. Applications are briefly discussed to give an impression of how the required number of atoms and the coupling between them influence computing times. In particular, the computational cost increases dramatically in the simulation of organic substances, because in this case the atoms interact electrostatically at long range. HIV-1 virus particles are considered as an example. The required computational efficiency demands appropriate models, effective numerical methods, and advanced computational techniques.
In the section ‘Classical Mechanics Models’ of Chapter 1 it is demonstrated that, for molecular dynamics computations with a large number of atoms, the classical Newtonian equations for the motion of the nuclei are more important than a quantum mechanical approach. This is illustrated by the simple water molecule, which, with its 10 electrons and 3 nuclei, could be described by the Schrödinger equation in 39 complex-valued variables (the spatial coordinates of the electrons and the nuclei) and a primitive atomic potential. An analytic solution of this partial differential equation is not known, and a numerical computation is all but impossible due to the rapid growth in the number of basis functions needed for the discretization. The Born-Oppenheimer approximation rests on the assumption that the motion of atomic nuclei and electrons in a molecule can be separated, which leads to a simplification. Instead of solving the partial differential equation, one computes the equations of motion (Newton's second law) of a system of nuclei, i.e., a system of ordinary differential equations (ODEs) with an appropriate potential, supplemented by initial conditions. This leads to an extensive reduction of computational cost: the total number of ODEs increases linearly with \(N\) (the number of nuclei), and the work to solve the equations numerically grows polynomially with \(N\), in contrast to a quantum mechanical treatment with its exponential dependence.
To establish the system of ordinary differential equations, the user has to formulate an \(N\)-body problem in which the forces between the particles and their potential energies are taken into account using molecular mechanics force fields and interatomic potentials, respectively.
The forces and potentials used are not fully consistent with quantum mechanics, but sufficient accuracy is achieved for many practical problems. A survey of potential energy functions and forces used in MD is given, covering Coulombic forces, the Morse potential, the London dispersion, van der Waals forces, the potentials of Lennard-Jones, Yukawa, Stillinger-Weber, Gay-Berne, and so on, including applications.
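To illustrate one of these interaction models, the reviewer sketches the Lennard-Jones pair potential and the corresponding radial force in Python (the unit values of the well depth \(\varepsilon\) and length scale \(\sigma\) are illustrative defaults, not taken from the book):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def lennard_jones_force(r, epsilon=1.0, sigma=1.0):
    """Magnitude of the radial force F(r) = -dU/dr."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r

# The potential minimum lies at r = 2**(1/6) * sigma, where the force vanishes
# and the potential equals -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
```

The steep \(r^{-12}\) repulsion and the \(r^{-6}\) dispersive attraction visible in the formula are exactly the two competing effects the survey in this section discusses.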
The \(N\)-body problem, its formulation in terms of its Lagrangian, and the equivalent Hamiltonian formulation are presented in order to set up the equations of motion. The Lagrangian can be used with different types of coordinates; the corresponding change of variables induces a transformation of the velocity, which can be generalized to systems with constraints. Such ‘generalized coordinates’ are needed in selected applications, although the use of Cartesian coordinates is usually advantageous for the design of simulation methods with good conservation properties. The Hamiltonian description is preferred because (1) the geometric structure associated with classical mechanics is simpler, (2) it is directly related to basic conservation laws formulated in terms of the energy, (3) it allows a better comparison with the quantum mechanical description of a system of particles by the Schrödinger equation, and (4) the Hamiltonian can be described as a Legendre transformation of the Lagrangian.
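For a standard separable molecular system the Hamiltonian and the resulting canonical equations take the familiar form (generic notation, with mass matrix \(M\) and potential energy \(U\)):
\[
H(q,p) = \tfrac{1}{2}\, p^{T} M^{-1} p + U(q), \qquad
\dot{q} = \frac{\partial H}{\partial p} = M^{-1} p, \qquad
\dot{p} = -\frac{\partial H}{\partial q} = -\nabla U(q).
\]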
Other topics are the notions of phase space, existence and uniqueness of solutions, and the flow map. A “phase point” is defined as a point of the phase space which represents “the collection of all position and momenta vectors of the atom”, in contrast to a “point”, which defines the position of the atom only. It is outlined that for molecular Hamiltonian systems the questions of existence and uniqueness of solutions can be settled globally, and the solutions remain bounded for all time thanks to the energy constraint. The “flow map” describes the evolution of a set of phase points in time. Furthermore, integrable examples, equilibrium points, and linearizations are considered. In particular, crystalline materials, boundary conditions, the influence of lattice vibrations, chaotic trajectories, and the highly sensitive dependence of the system on initial conditions, including its treatment via variational equations and Lyapunov exponents, are discussed in detail, always with the aim of reducing computing times.
Chapter 2 is devoted to numerical integrators for the calculation of the trajectories (the atomic positions and velocities over time) of the system of ordinary differential equations. These systems exhibit many important features, such as conservation properties, a highly sensitive dependence on initial conditions, a vast number of particles, and the requirement to cover a large part of the phase space. The natural question is how, and to what extent, the numerical methods can reproduce the qualitative long-time behavior.
It is outlined that some one-step methods with constant stepsize fulfill these requirements. Schemes with variable stepsize or extrapolation algorithms, which are advantageous in many applications, are less relevant for the long-time integrations of molecular dynamics. In practice, one performs a number of runs of the one-step method with different stepsizes and checks, for instance, the fluctuation range of an easily computable quantity in order to choose a stepsize within a tolerable accuracy. To capture the rapid fluctuations of the molecules, small stepsizes on the order of femtoseconds are usually required.
Discussed first are the first-order Euler method and higher-order schemes based on Taylor series expansion, including error growth, convergence, and order of accuracy. These generally turn out to be unsuitable for long-time integrations because of growing energy errors.
As a popular scheme for molecular simulation, the second-order method of Verlet (including generalizations) is presented, which gives improved energy accuracy. The Verlet algorithm is a geometric integrator, i.e. a method that preserves geometric properties of the exact flow of a differential equation, and it is suitable for Hamiltonian systems, which are constrained by conservation laws or volume preservation. Before a more general discussion is given, the authors demonstrate the derivation of Verlet's scheme from the variational principle, a step towards understanding geometric integrators; this is not possible for every numerical integration algorithm.
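The contrast between the growing energy error of Euler's method and the near-conservation of Verlet can be seen in a few lines; a minimal sketch for the unit harmonic oscillator (the stepsize and step count are the reviewer's illustrative choices):

```python
def energy(q, p):
    """Total energy of the unit harmonic oscillator, H = p**2/2 + q**2/2."""
    return 0.5 * (p * p + q * q)

def euler_step(q, p, h):
    """Forward Euler: both variables updated from the old state."""
    return q + h * p, p - h * q

def verlet_step(q, p, h):
    """Velocity Verlet (kick-drift-kick) for the force F(q) = -q."""
    p = p - 0.5 * h * q          # half kick
    q = q + h * p                # drift
    p = p - 0.5 * h * q          # half kick with the new position
    return q, p

q_e, p_e = 1.0, 0.0
q_v, p_v = 1.0, 0.0
h = 0.1
for _ in range(1000):
    q_e, p_e = euler_step(q_e, p_e, h)
    q_v, p_v = verlet_step(q_v, p_v, h)

# Euler's energy grows steadily (each step multiplies it by 1 + h**2),
# while Verlet's energy oscillates within O(h**2) of the initial value 0.5.
```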
To understand geometric integration, the reader is introduced to Hamilton's principle of least action, which provides a method to derive the equations of motion from the Lagrangian; to properties of Hamiltonian functions, which often have the meaning of an energy and are connected to conservation laws; to first integrals and their preservation under discretization; and to Liouville's theorem, which shows that Hamiltonian systems always have volume-preserving flows.
Liouville's theorem is demonstrated for a 1-d oscillator with Lennard-Jones potential, for which any bounded individual trajectory is a periodic orbit because of energy conservation. Propagating a set of initial conditions shows that all the resulting sets maintain the same area, a volume-conserving property. Based on Liouville's theorem it is then shown that, for instance, the Euler method does not conserve the volume, in contrast to the asymmetrical variant of Euler's method.
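Volume (non-)conservation can be checked directly via the Jacobian determinant of a single step. A sketch for the harmonic oscillator, comparing forward Euler with the asymmetrical (symplectic) Euler variant; the finite-difference probe point and increment are the reviewer's illustrative choices:

```python
def jacobian_det(step, h, eps=1e-7):
    """Estimate the Jacobian determinant of the map (q, p) -> step(q, p, h)
    at the point (1.0, 1.0) by finite differences."""
    q0, p0 = 1.0, 1.0
    fq0, fp0 = step(q0, p0, h)
    dq_dq = (step(q0 + eps, p0, h)[0] - fq0) / eps
    dp_dq = (step(q0 + eps, p0, h)[1] - fp0) / eps
    dq_dp = (step(q0, p0 + eps, h)[0] - fq0) / eps
    dp_dp = (step(q0, p0 + eps, h)[1] - fp0) / eps
    return dq_dq * dp_dp - dq_dp * dp_dq

def forward_euler(q, p, h):
    # Jacobian [[1, h], [-h, 1]] has determinant 1 + h**2: phase volume grows.
    return q + h * p, p - h * q

def symplectic_euler(q, p, h):
    # Updating p with the *new* q gives determinant exactly 1: volume preserved.
    q_new = q + h * p
    return q_new, p - h * q_new
```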
The next step in the chapter is the development of a general technique for constructing a subclass of geometric integrators, the so-called symplectic integrators. A numerical algorithm that conserves the symplectic 2-form is called a symplectic integrator; it is derived that conservation of the symplectic 2-form is a fundamental property of Hamiltonian equations.
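Concretely, in the standard formulation a smooth map \(\Psi\) of the phase space is symplectic if its Jacobian satisfies
\[
\Psi'(z)^{T} J \,\Psi'(z) = J, \qquad
J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix},
\]
which for one degree of freedom reduces to preservation of oriented area.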
The Verlet method and certain Runge-Kutta methods are, to some extent, automatically symplectic integrators. Using the general technique, however, splitting schemes, general composition methods, and multiderivative algorithms are considered in order to obtain symplectic integrators.
Chapter 3 is devoted to the analysis of geometric integrators. In Chapter 2 the reader is confronted with the problem that, for a fixed stepsize, the error of a method is on the one hand related to the stepsize by the power law of the convergence theory, while on the other hand the global error grows exponentially in long-time simulations of MD problems. This is also the case for the Verlet method, but this algorithm exhibits a remarkable energy conservation property, in contrast to other schemes. The Verlet method is a symplectic integrator; why this property leads to improved energy conservation is discussed in Chapter 3.
It is shown that a symplectic integrator can be interpreted as being equivalent to the flow map of a certain Hamiltonian system. The insight that “not only Hamiltonian flow maps are symplectic, but also near-identity symplectic maps are in an approximate sense Hamiltonian flow maps” leads to the “existence of perturbed Hamiltonian from which the discrete trajectory may be derived as snapshots of continuous trajectories”. The methods are analyzed using this knowledge.
The authors also discuss whether non-symplectic schemes with a volume-preserving property can be constructed. In particular, projection techniques with an energy-preserving property are considered. On the whole, however, the authors advise preferring symplectic methods in most cases.
A useful backward error analysis is given for Hamiltonian splitting schemes. Treated are the symplectic Euler method, the Verlet algorithm, and the higher-order symplectic methods of Suzuki-Yoshida and Takahashi-Imada.
Other topics are time-reversible methods and, especially, the numerical treatment of hard-sphere collisions, which are not purely Hamiltonian, but whose dynamics can largely be interpreted in terms of a Hamiltonian system.
Chapter 4 is devoted to stability thresholds. The authors start with the simple linear system of ODEs \(\dot{z} = Az\), \(z \in \mathbb{R}^m\), \(A \in \mathbb{R}^{m \times m}\), and derive the stability region of Euler's method for this system by considering the corresponding eigenvalue problem. It is shown that Euler's method applied to a harmonic oscillator is unstable regardless of the stepsize, whereas for the symplectic Euler method a linear stability condition exists: the method is stable if the stepsize is chosen smaller than the so-called stability threshold. The stability of the Verlet algorithm and other methods can be analyzed in a similar way. The authors point out that analyzing the stability of general nonlinear systems is more difficult, and numerical experiments cannot always be avoided.
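The threshold behavior is easy to observe experimentally. For the Verlet method on an oscillator of frequency \(\omega\), the standard linear-stability result is \(h < 2/\omega\); the following sketch (unit frequency, escape radius and step count chosen by the reviewer for illustration) probes stepsizes just below and just above the threshold:

```python
def verlet_orbit_bounded(h, n_steps=2000):
    """Iterate velocity Verlet on the unit harmonic oscillator and report
    whether the trajectory stays bounded (here: |q| < 100 throughout)."""
    q, p = 1.0, 0.0
    for _ in range(n_steps):
        p -= 0.5 * h * q     # half kick
        q += h * p           # drift
        p -= 0.5 * h * q     # half kick
        if abs(q) > 100.0:
            return False
    return True

# Linear stability threshold for Verlet with omega = 1 is h = 2.
stable = verlet_orbit_bounded(1.9)    # just below the threshold: bounded
unstable = verlet_orbit_bounded(2.1)  # just above the threshold: blows up
```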
Furthermore it is shown that the hope of increasing the stability threshold by using implicit schemes is not borne out. Alternatively, multiple timestepping methods are considered with the aim of increasing the stability threshold. They turn out to be useful for some systems, but negative examples are also presented. To overcome these, mollified impulse methods are proposed, and a biomolecule example is cited which can be solved using a stepsize three times larger than with Verlet's scheme.
Other topics are methods for constrained dynamics and rigid-body molecular dynamics algorithms.
Phase space distributions and microcanonical averages are the subject of Chapter 5.
In practice one has to take into account the chaotic behavior of molecular systems, i.e., individual movements of the particles are not relevant. In fact, the average motion, i.e., the probability of finding a particle at a given position relative to its neighbors, has to be determined. Whereas the previous chapters considered approximations of individual Hamiltonian trajectories, now the “path emanating from the collection of all conditions within in a given set”, i.e., the calculation of averages, is of interest; this can be understood as the starting point of statistical mechanics for this subject. In order to describe the relative concentration of phase points in different regions, a probability measure for their distribution, represented by a time- and space-dependent probability density function (pdf) (a macroscopic state), is introduced.
The evolution of the density for Hamiltonian systems is governed by Liouville's equation, which is discussed after a short introduction of the required function spaces and of the corresponding solution operator, which describes how a function of the phase variables propagates under the flow of the differential equation. Topics are the microcanonical probability measure and averages, the concept of ergodicity, dynamical averages, and numerical computations for representative examples.
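In the usual notation, for a Hamiltonian \(H(q,p) = \tfrac{1}{2} p^{T} M^{-1} p + U(q)\), Liouville's equation for the phase-space density \(\rho(q,p,t)\) reads
\[
\frac{\partial \rho}{\partial t}
+ \nabla_{q}\rho \cdot M^{-1} p
- \nabla_{p}\rho \cdot \nabla_{q} U(q) = 0,
\]
i.e., \(\rho\) is constant along the Hamiltonian flow, in accordance with the volume preservation discussed in Chapter 2.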
After the treatment of isolated collections of particles and the consideration of the chaotic nature of typical systems in the previous sections, Chapter 6, ‘The Canonical Distribution and Stochastic Differential Equations’, explains that any experimental study must additionally take into account the interaction between the molecular model used so far and the environment of the isolated collection of atoms; including the environment results in a ‘bulk’ system. Thus two systems have to be considered, one small and one large, which interact in a complicated way. The assumption that, instead of the detailed motion of the bulk system, an approximation of the interaction (for example, based on the net energy) can be used leads to the concept of a thermodynamic reservoir. The corresponding large system, consisting of the resolved system plus the reservoir, is then defined by various thermodynamic parameters, resulting in an equilibrium probability distribution for the small system.
The authors outline that in this case it is advantageous to use stochastic differential equations (SDEs) rather than deterministic molecular dynamics, which would suppress microscopic interactions between the resolved system and the reservoir; moreover, the molecular model can be well parameterized using SDEs. The subject is formulated from the ensemble perspective. The microcanonical and canonical ensembles are treated, including the Gibbs-Boltzmann probability density. In connection with the treatment of SDEs, the reader is introduced to Brownian motion, random walks, Wiener processes, stochastic integration (Itô formula and Stratonovich integration), the distributional solution of the Ornstein-Uhlenbeck SDE, Langevin's model of Brownian motion (Langevin dynamics), the Fokker-Planck equation, and the ergodicity of SDEs.
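As a simple numerical illustration of these SDE concepts, an Euler-Maruyama discretization of the Ornstein-Uhlenbeck equation \(dX = -\gamma X\,dt + \sigma\,dW\) can be sketched as follows (all parameter values, the stepsize, and the random seed are the reviewer's illustrative choices):

```python
import random

def ornstein_uhlenbeck_em(gamma=1.0, sigma=2.0 ** 0.5, dt=0.01,
                          n_steps=200_000, seed=42):
    """Euler-Maruyama discretization of dX = -gamma * X dt + sigma dW,
    starting from X = 0; returns the sampled trajectory."""
    rng = random.Random(seed)
    x, path = 0.0, []
    sqrt_dt = dt ** 0.5
    for _ in range(n_steps):
        # Deterministic drift plus a Gaussian increment of variance dt.
        x += -gamma * x * dt + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = ornstein_uhlenbeck_em()
# The empirical variance should approach the stationary value
# sigma**2 / (2 * gamma) = 1 (up to discretization and sampling error).
mean = sum(path) / len(path)
var = sum((x - mean) ** 2 for x in path) / len(path)
```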
Chapter 7, ‘Numerical Methods for Stochastic Molecular Dynamics’, starts with a discussion of the stochastic dynamical models treated in the previous chapter and compares them with non-dynamical alternatives, such as Monte Carlo methods, with the result that the dynamical models require only a little system-specific knowledge and can be considered workhorse methods providing rigorous control of the canonical distribution. They can be easily modified and incorporated, for instance, into kinetic Monte Carlo and metropolized schemes, and are considered a cornerstone of modern molecular modelling.
The sampling strategies are based on Langevin dynamics, beginning with the numerical analysis of SDEs. The concepts of weak and strong accuracy and the sampling error are treated. Topics are splitting methods for Langevin dynamics, error results for the harmonic oscillator, the stochastic position Verlet method, the A, B and O splitting, the Brünger-Brooks-Karplus integrator, the exact Gibbs sampling method, basic approaches to the leading-order behavior of the discretization error of the SDE integration schemes, perturbation techniques, superconvergence, the limit method, the application of the schemes to a biomolecule, constrained Langevin dynamics, and multiple timestepping.
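One representative composition of the A (drift), B (kick) and O (Ornstein-Uhlenbeck) pieces mentioned above is the 'BAOAB' ordering; a sketch for a one-dimensional potential with unit mass (the friction, temperature, stepsize, and test potential are the reviewer's illustrative choices, not the book's examples):

```python
import math, random

def baoab_step(q, p, h, force, gamma=1.0, kT=1.0, rng=random):
    """One BAOAB step for Langevin dynamics with unit mass:
    dq = p dt,  dp = F(q) dt - gamma * p dt + sqrt(2 gamma kT) dW."""
    c1 = math.exp(-gamma * h)
    c2 = math.sqrt(kT * (1.0 - c1 * c1))
    p += 0.5 * h * force(q)                 # B: half kick
    q += 0.5 * h * p                        # A: half drift
    p = c1 * p + c2 * rng.gauss(0.0, 1.0)   # O: exact OU update of momentum
    q += 0.5 * h * p                        # A: half drift
    p += 0.5 * h * force(q)                 # B: half kick
    return q, p

# Sampling the harmonic potential U(q) = q**2/2 at temperature kT = 1;
# the configurational variance should be close to kT.
rng = random.Random(7)
q, p, h = 0.0, 0.0, 0.2
samples = []
for _ in range(100_000):
    q, p = baoab_step(q, p, h, lambda x: -x, rng=rng)
    samples.append(q)
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```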
In Chapter 8, ‘Extended Variable Methods’, the scope of the systems that can be treated by the methods described in the previous chapter is extended using auxiliary variables. A number of methods are discussed, such as the Nosé-Hoover scheme, the methods of Nosé-Poincaré and Nosé-Hoover-Langevin, isokinetic methods, and the cell method of Parrinello-Rahman, to name a few. Topics are properties of the methods for thermostatted dynamics, lack of ergodicity, efficiency, and numerical integration in connection with several applications.
Appendix A contains advice on the careful coding of routines for the costly computation of forces.
In Appendix B concepts from probability are summarized. The central limit theorem is formulated and Markov processes are considered.
A number of Monte Carlo methods are outlined in Appendix C: the Metropolis Monte Carlo method, the Markov chain Monte Carlo method, the Metropolis-Hastings algorithm, the Metropolis-adjusted Langevin algorithm, and hybrid Monte Carlo methods.
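The Metropolis idea common to these methods fits in a few lines; a generic random-walk sketch targeting a density known up to a constant (the proposal width, seed, and Gaussian test target are the reviewer's illustrative choices):

```python
import math, random

def metropolis_sample(log_density, x0, n_steps, step_size=0.5, seed=1):
    """Random-walk Metropolis sampler targeting the density exp(log_density(x)),
    known only up to an additive constant in the log."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + step_size * rng.uniform(-1.0, 1.0)
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Sampling the standard Gaussian, log pi(x) = -x**2/2 up to a constant;
# the empirical mean and variance should be close to 0 and 1.
samples = metropolis_sample(lambda x: -0.5 * x * x, 0.0, 100_000)
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Note that rejected proposals still append the current state, as required for the chain to have the correct stationary distribution.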

MSC:

82-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to statistical mechanics
82B80 Numerical methods in equilibrium statistical mechanics (MSC2010)
82-04 Software, source code, etc. for problems pertaining to statistical mechanics
92C40 Biochemistry, molecular biology
82-08 Computational methods (statistical mechanics) (MSC2010)

Citations:

Zbl 1131.76001