
Efficient implementation of the many-body reactive bond order (REBO) potential on GPU. (English) Zbl 1349.82031

Summary: The second-generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials, and it is extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation had been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve the difficult synchronization issues that arise in computations with a multi-body potential. The techniques developed for this problem may also be applied to obtain efficient solutions to other problems. The performance of the proposed algorithm is assessed on a range of model systems and compared to the highly optimized CPU implementation (both single-core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of an NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.
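For context, the bond-order form that makes REBO a many-body potential is the standard Abell-Tersoff-Brenner expression used in Brenner et al.'s second-generation formulation:

\[
E_b = \sum_i \sum_{j>i} \left[ V^R(r_{ij}) - \bar{b}_{ij}\, V^A(r_{ij}) \right],
\]

where \(V^R\) and \(V^A\) are repulsive and attractive pair terms and the bond order \(\bar{b}_{ij}\) depends on the coordination and bond angles of all neighbors of atoms \(i\) and \(j\). Because \(\bar{b}_{ij}\) couples each bond to its entire local environment, force contributions scatter across many atoms at once, which is the source of the synchronization difficulty on SIMT hardware mentioned in the summary.

The sketch below is a minimal illustration of the kind of warp-level technique such difficulties invite, not a reconstruction of the authors' algorithm: one warp evaluates the angular sum entering a single bond order, with lanes striding over the neighbor list and a register-shuffle reduction standing in for block-wide synchronization. The kernel name, neighbor-list layout, and schematic functional forms are all assumptions made for illustration.

    // Illustrative CUDA sketch (hypothetical layout, not the paper's code):
    // one warp per bond (i, j); lanes cooperatively sum the angular
    // contributions of i's neighbors to the bond order b_ij.
    #include <cuda_runtime.h>

    __device__ float g_theta(float cosTheta) {      // placeholder angular term
        return 1.0f + cosTheta * cosTheta;
    }

    __global__ void bondOrderKernel(const float3 *pos,
                                    const int *neighList,   // flat neighbor indices
                                    const int *neighStart,  // CSR-style offsets
                                    const int2 *bonds,      // (i, j) pairs
                                    float *bij, int nBonds) {
        int warpId = (blockIdx.x * blockDim.x + threadIdx.x) / 32;
        int lane   = threadIdx.x & 31;
        if (warpId >= nBonds) return;               // uniform exit per warp

        int i = bonds[warpId].x, j = bonds[warpId].y;
        float3 ri = pos[i], rj = pos[j];
        float3 rij = make_float3(rj.x - ri.x, rj.y - ri.y, rj.z - ri.z);
        float dij = sqrtf(rij.x * rij.x + rij.y * rij.y + rij.z * rij.z);

        // Each lane handles a strided subset of i's neighbors k != j;
        // this environment sum is the many-body part of b_ij.
        float sum = 0.0f;
        for (int n = neighStart[i] + lane; n < neighStart[i + 1]; n += 32) {
            int k = neighList[n];
            if (k == j) continue;
            float3 rik = make_float3(pos[k].x - ri.x, pos[k].y - ri.y,
                                     pos[k].z - ri.z);
            float dik = sqrtf(rik.x * rik.x + rik.y * rik.y + rik.z * rik.z);
            float cosTheta = (rij.x * rik.x + rij.y * rik.y + rij.z * rik.z)
                             / (dij * dik);
            sum += g_theta(cosTheta);
        }
        // Warp-level tree reduction: no shared memory, atomics, or barriers.
        for (int off = 16; off > 0; off >>= 1)
            sum += __shfl_down_sync(0xffffffffu, sum, off);

        if (lane == 0)
            bij[warpId] = rsqrtf(1.0f + sum);       // Tersoff/REBO-like form, schematic
    }

In the real potential the angular function, cutoffs, and the dependence on atom j's environment are far richer; the point of the sketch is only that warp-synchronous execution and register shuffles allow a many-body sum to complete without atomic operations or block-level synchronization.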

MSC:

82B80 Numerical methods in equilibrium statistical mechanics (MSC2010)
82B10 Quantum equilibrium statistical mechanics (general)
65Y10 Numerical algorithms for specific classes of architectures
