Kumar, S. Fundamental limits to Moore's law. arXiv.org e‐Print archive; 2015.
Waldrop, MM. The chips are down for Moore's law. Nature News. 2016;530(7589):144–147.
Allen, MP, Tildesley, DJ. Computer simulation of liquids. Oxford, England: Oxford University Press, 2017.
González, M. Force fields and molecular dynamics simulations. École thématique de la Société Française de la Neutronique. 2011;12:169–200.
Paquet, E, Viktor, HL. Molecular dynamics, Monte Carlo simulations, and Langevin dynamics: Computational review. BioMed Res Int. 2015; 2015:183918. http://dx.doi.org/10.1155/2015/183918.
Bowen, JP, Allinger, NL. Molecular mechanics: The art and science of parameterization. Reviews in Computational Chemistry. Hoboken, New Jersey: Wiley, 1991; p. 81–97.
Hornak, V, Abel, R, Okur, A, Strockbine, B, Roitberg, A, Simmerling, C. Comparison of multiple amber force fields and development of improved protein backbone parameters. Proteins. 2006;65(3):712–725.
MacKerell, AD Jr. Empirical force fields for biological macromolecules: Overview and issues. J Comput Chem. 2004;25(13):1584–1604.
Nerenberg, PS, Head‐Gordon, T. New developments in force fields for biomolecular simulations. Curr Opin Struct Biol. 2018;49:129–138.
Pettersson, I, Liljefors, T. Molecular mechanics calculated conformational energies of organic molecules: A comparison of force fields. Reviews in Computational Chemistry. Hoboken, New Jersey: Wiley, 1996; p. 167–189.
Ponder, JW, Case, DA. Force fields for protein simulations. Advances in Protein Chemistry. Volume 66. Amsterdam, NL: Elsevier, 2003; p. 27–85.
Zhu, X, Lopes, PE, MacKerell, AD Jr. Recent developments and applications of the CHARMM force fields. Wiley Interdiscip Rev Comput Mol Sci. 2012;2(1):167–185.
Bisson, M, Bernaschi, M, Melchionna, S. Parallel molecular dynamics with irregular domain decomposition. Commun Comput Phys. 2011;10(4):1071–1088.
Plimpton, S. Fast parallel algorithms for short‐range molecular dynamics. J Comput Phys. 1995;117(1):1–19.
Seckler, S, Tchipev, N, Bungartz, H‐J, Neumann, P. Load balancing for molecular dynamics simulations on heterogeneous architectures. In 2016 IEEE 23rd International Conference on High Performance Computing (HiPC); 2016. p. 101–110.
Yao, Z, Wang, J‐S, Liu, G‐R, Cheng, M. Improved neighbor list algorithm in molecular simulations using cell decomposition and data sorting method. Comput Phys Commun. 2004;161(1–2):27–35.
Alam, SR, Agarwal, PK, Hampton, SS, Ong, H, Vetter, JS. Impact of multicores on large‐scale molecular dynamics simulations. 2008 IEEE International Symposium on Parallel and Distributed Processing; 2008. p. 1–7.
Meyer, R. Efficient parallelization of short‐range molecular dynamics simulations on many‐core systems. Phys Rev E. 2013;88(5):053309.
Tarmyshov, KB, Müller‐Plathe, F. Parallelizing a molecular dynamics algorithm on a multiprocessor workstation using OpenMP. J Chem Inf Model. 2005;45(6):1943–1952.
Tuckerman, ME, Yarne, D, Samuelson, SO, Hughes, AL, Martyna, GJ. Exploiting multiple levels of parallelism in molecular dynamics based calculations via modern techniques and software paradigms on distributed memory computers. Comput Phys Commun. 2000;128(1–2):333–376.
Bowers, KJ, Chow, E, Xu, H, et al. Scalable algorithms for molecular dynamics simulations on commodity clusters. In SC '06: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing. New York, NY: ACM Press; 2006.
Ohmura, I, Morimoto, G, Ohno, Y, Hasegawa, A, Taiji, M. MDGRAPE‐4: A special‐purpose computer system for molecular dynamics simulations. Philos Trans Roy Soc A. 2014;372(2021):20130387.
Phillips, JC, Zheng, G, Kumar, S, Kalé, LV. NAMD: Biomolecular simulation on thousands of processors. SC '02: Proceedings of the 2002 ACM/IEEE Conference on Supercomputing; 2002. p. 36.
Pronk, S, Páll, S, Schulz, R, et al. GROMACS 4.5: A high‐throughput and highly parallel open source molecular simulation toolkit. Bioinformatics. 2013;29(7):845–854.
De Fabritiis, G. Performance of the Cell processor for biomolecular simulations. Comput Phys Commun. 2007;176(11–12):660–664.
Shi, G, Kindratenko, VV, Ufimtsev, IS, Martinez, TJ, Phillips, JC, Gottlieb, SA. Implementation of scientific computing applications on the Cell Broadband Engine. Sci Prog. 2009;17(1–2):135–151.
Swaminarayan, S, Kadau, K, Germann, TC, Fossum, GC. 369 Tflop/s molecular dynamics simulations on the Roadrunner general‐purpose heterogeneous supercomputer. Proceedings of the 2008 ACM/IEEE Conference on Supercomputing; 2008. p. 64.
Gu, Y, VanCourt, T, Herbordt, MC. Accelerating molecular dynamics simulations with configurable circuits. IEE Proc Comput Digit Techn. 2006;153(3):189–195.
Scrofano, R, Gokhale, M, Trouw, F, Prasanna, VK. Hardware/software approach to molecular dynamics on reconfigurable computers. 2006 14th Annual IEEE Symposium on Field‐Programmable Custom Computing Machines; 2006. p. 23–34.
Villarreal, J, Najjar, WA. Compiled hardware acceleration of molecular dynamics code. 2008 International Conference on Field Programmable Logic and Applications; 2008. p. 667–670.
Yang, C, Geng, T, Wang, T, et al. Fully integrated On‐FPGA molecular dynamics simulations. arXiv preprint arXiv:1905.05359. 2019.
Narumi, T, Susukita, R, Ebisuzaki, T, McNiven, G, Elmegreen, B. Molecular dynamics machine: Special‐purpose computer for molecular dynamics simulations. Mol Simulat. 1999;21(5–6):401–415.
Shaw, DE, Deneroff, MM, Dror, RO, et al. Anton, a special‐purpose machine for molecular dynamics simulation. Commun ACM. 2008;51(7):91–97.
Shaw, DE, Dror, RO, Salmon, JK, et al. Millisecond‐scale molecular dynamics simulations on Anton. Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis; 2009. p. 39.
Susukita, R, Ebisuzaki, T, Elmegreen, BG, et al. Hardware accelerator for molecular dynamics: MDGRAPE‐2. Comput Phys Commun. 2003;155(2):115–131.
Khan, MA, Chiu, M, Herbordt, MC. FPGA‐accelerated molecular dynamics. In: Vanderbauwhede, W, Benkrid, K, editors. High‐performance computing using FPGAs. New York: Springer, 2013; p. 105–135.
McClanahan, C. History and evolution of GPU architecture: A survey paper; 2010. p. 9.
Brodtkorb, AR, Hagen, TR, Sætra, ML. Graphics processing unit (GPU) programming strategies and trends in GPU computing. J Parallel Distrib Comput. 2013;73(1):4–13.
Garland, M, Le Grand, S, Nickolls, J, et al. Parallel computing experiences with CUDA. IEEE Micro. 2008;28(4):13–27.
Nickolls, J, Dally, WJ. The GPU computing era. IEEE Micro. 2010;30(2):56–69.
Baker, JA, Hirst, JD. Molecular dynamics simulations using graphics processing units. Mol Inform. 2011;30(6–7):498–504.
Xu, D, Williamson, MJ, Walker, RC. Advancements in molecular dynamics simulations of biomolecules on graphical processing units. Annual Reports in Computational Chemistry. Volume 6. Amsterdam, NL: Elsevier, 2010a; p. 2–19.
Harvey, M, De Fabritiis, G. A survey of computational molecular science using graphics processing units. Wiley Interdiscip Rev Comput Mol Sci. 2012;2(5):734–742.
Plimpton, SJ, Thompson, AP. Computational aspects of many‐body potentials. MRS Bull. 2012;37(5):513–521.
Ewald, PP. Die Berechnung optischer und elektrostatischer Gitterpotentiale. Ann Phys. 1921;369(3):253–287.
Darden, T, York, D, Pedersen, L. Particle mesh Ewald: An N·log(N) method for Ewald sums in large systems. J Chem Phys. 1993;98(12):10089–10092.
Essmann, U, Perera, L, Berkowitz, ML, Darden, T, Lee, H, Pedersen, LG. A smooth particle mesh Ewald method. J Chem Phys. 1995;103(19):8577–8593.
Hockney, RW, Eastwood, JW. Computer Simulation Using Particles. Boca Raton, Florida: CRC Press, 1988.
Greengard, L, Rokhlin, V. A fast algorithm for particle simulations. J Comput Phys. 1987;73(2):325–348.
Gumerov, NA, Duraiswami, R. Fast multipole methods on graphics processors. J Comput Phys. 2008;227(18):8290–8313.
Lashuk, I, Chandramowlishwaran, A, Langston, H, et al. A massively parallel adaptive fast‐multipole method on heterogeneous architectures. Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis; 2009. p. 58.
Yokota, R, Barba, LA, Narumi, T, Yasuoka, K. Petascale turbulence simulation using a highly parallel fast multipole method on GPUs. Comput Phys Commun. 2013;184(3):445–455.
Yokota, R, Bardhan, JP, Knepley, MG, Barba, LA, Hamada, T. Biomolecular electrostatics using a fast multipole BEM on up to 512 GPUs and a billion unknowns. Comput Phys Commun. 2011;182(6):1272–1283.
Buck, I, Foley, T, Horn, D, et al. Brook for GPUs: Stream computing on graphics hardware. ACM Trans Graph. 2004;23(3):777–786.
Mark, WR, Glanville, RS, Akeley, K, Kilgard, MJ. Cg: A system for programming graphics hardware in a C‐like language. ACM Trans Graph. 2003;22(3):896–907.
Buck, I, Hanrahan, P. Data parallel computation on graphics hardware. Unpublished report; Jan 2003. Available from: http://www.cs.kent.edu/~ssteinfa/groups/FA07Papers/buck2003.pdf
Kupka, S. Molecular dynamics on graphics accelerators. Vienna, Austria: University of Vienna, Web Proceedings of CESCG; 2006.
Yang, J, Wang, Y, Chen, Y. GPU accelerated molecular dynamics simulation of thermal conductivities. J Comput Phys. 2007;221(2):799–804.
Green, MS. Markoff random processes and the statistical mechanics of time‐dependent phenomena. II. Irreversible processes in fluids. J Chem Phys. 1954;22(3):398–413.
Kubo, R. Statistical‐mechanical theory of irreversible processes. I. General theory and simple applications to magnetic and conduction problems. J Physical Soc Japan. 1957;12(6):570–586.
Elsen, E, Vishal, V, Houston, M, Pande, V, Hanrahan, P, Darve, E. N‐body simulations on GPUs. arXiv preprint arXiv:0706.3060. 2007.
Meredith, JS, Alam, SR, Vetter, JS. Analysis of a computational biology simulation technique on emerging processing architectures. 2007 IEEE International Parallel and Distributed Processing Symposium; 2007. p. 1–8.
Liu, W, Schmidt, B, Voss, G, Müller‐Wittig, W. Molecular dynamics simulations on commodity GPUs with CUDA. International Conference on High‐Performance Computing; 2007. p. 185–196.
Liu, W, Schmidt, B, Voss, G, Müller‐Wittig, W. Accelerating molecular dynamics simulations using graphics processing units with CUDA. Comput Phys Commun. 2008;179(9):634–641.
Ercolessi, F. A molecular dynamics primer. 1997 [Online through Internet Archive Wayback Machine; accessed 28 Mar 2019]. Available from: https://web.archive.org/web/20170125072115/http://www.fisica.uniud.it/%7Eercolessi/md/.
Van Meel, JA, Arnold, A, Frenkel, D, Portegies Zwart, S, Belleman, RG. Harvesting graphics power for MD simulations. Mol Simulat. 2008;34(3):259–266.
Stone, JE, Phillips, JC, Freddolino, PL, Hardy, DJ, Trabuco, LG, Schulten, K. Accelerating molecular modeling applications with graphics processors. J Comput Chem. 2007;28(16):2618–2640.
NAMD developers. NAMD—Scalable Molecular Dynamics. n.d. [Online; accessed 29 Mar 2019]. Available from: https://www.ks.uiuc.edu/Research/namd/.
Hardy, DJ. Multilevel summation for the fast evaluation of forces for the simulation of biomolecules (PhD thesis). University of Illinois at Urbana‐Champaign; 2006.
Hardy, DJ, Stone, JE, Schulten, K. Multilevel summation of electrostatic potentials using graphics processing units. Parallel Comput. 2009;35(3):164–177.
Rodrigues, CI, Hardy, DJ, Stone, JE, Schulten, K, Hwu, W‐MW. GPU acceleration of cutoff pair potentials for molecular modeling applications. Proceedings of the 5th Conference on Computing Frontiers; 2008. p. 273–282.
Phillips, JC, Stone, JE, Schulten, K. Adapting a message‐driven parallel application to GPU‐accelerated clusters. Proceedings of the 2008 ACM/IEEE Conference on Supercomputing. 2008. p. 8.
Tanner, DE, Phillips, JC, Schulten, K. GPU/CPU algorithm for generalized born/solvent‐accessible surface area implicit solvent calculations. J Chem Theory Comput. 2012;8(7):2521–2530.
Stone, JE, Hynninen, A‐P, Phillips, JC, Schulten, K. Early experiences porting the NAMD and VMD molecular simulation and analysis software to GPU‐accelerated OpenPOWER platforms. International Conference on High Performance Computing; 2016. p. 188–206.
Anderson, JA, Lorenz, CD, Travesset, A. General purpose molecular dynamics simulations fully implemented on graphics processing units. J Comput Phys. 2008;227(10):5342–5359.
HOOMD‐blue developers. HOOMD‐blue. n.d. [Online; accessed 29 Mar 2019]. Available from: https://glotzerlab.engin.umich.edu/hoomd-blue/.
Jha, PK, Sknepnek, R, Guerrero‐Garcia, GI, Olvera de la Cruz, M. A graphics processing unit implementation of Coulomb interaction in molecular dynamics. J Chem Theory Comput. 2010;6(10):3058–3065.
Nguyen, TD, Phillips, CL, Anderson, JA, Glotzer, SC. Rigid body constraints realized in massively‐parallel molecular dynamics on graphics processing units. Comput Phys Commun. 2011;182(11):2307–2313.
Anderson, JA, Glotzer, SC. The development and expansion of HOOMD‐blue through six years of GPU proliferation. arXiv preprint arXiv:1308.5587. 2013.
Glaser, J, Nguyen, TD, Anderson, JA, et al. Strong scaling of general‐purpose molecular dynamics simulations on GPUs. Comput Phys Commun. 2015;192:97–107.
Friedrichs, MS, Eastman, P, Vaidyanathan, V, et al. Accelerating molecular dynamic simulation on graphics processing units. J Comput Chem. 2009;30(6):864–872.
OpenMM team. OpenMM. n.d. [Online; accessed 29 Mar 2019]. Available from: http://openmm.org/.
Eastman, P, Pande, VS. Efficient nonbonded interactions for molecular dynamics on a graphics processing unit. J Comput Chem. 2010;31(6):1268–1272.
Pande, V, Eastman, P. OpenMM: A hardware‐independent framework for molecular simulations. Comput Sci Eng. 2010;12:34–39.
Ponder, JW, Wu, C, Ren, P, et al. Current status of the AMOEBA polarizable force field. J Phys Chem B. 2010;114(8):2549–2564.
Ren, P, Ponder, JW. Consistent treatment of inter‐ and intramolecular polarization in molecular mechanics calculations. J Comput Chem. 2002;23(16):1497–1506.
Ren, P, Ponder, JW. Polarizable atomic multipole water model for molecular mechanics simulation. J Phys Chem B. 2003;107(24):5933–5947.
Shi, Y, Xia, Z, Zhang, J, et al. The polarizable atomic multipole‐based AMOEBA force field for proteins. J Chem Theory Comput. 2013;9(9):4046–4063.
Eastman, P, Friedrichs, MS, Chodera, JD, et al. OpenMM 4: A reusable, extensible, hardware independent library for high performance molecular simulation. J Chem Theory Comput. 2012;9(1):461–469.
Albaugh, A, Boateng, HA, Bradshaw, RT, et al. Advanced potential energy surfaces for molecular simulation. J Phys Chem B. 2016;120(37):9811–9832.
Lamoureux, G, MacKerell, AD Jr, Roux, B. A simple polarizable model of water based on classical Drude oscillators. J Chem Phys. 2003;119(10):5185–5197.
Lemkul, JA, Huang, J, Roux, B, MacKerell, AD Jr. An empirical polarizable force field based on the classical Drude oscillator model: Development history and recent applications. Chem Rev. 2016;116(9):4983–5013.
Lopes, PE, Huang, J, Shim, J, et al. Polarizable force field for peptides and proteins based on the classical Drude oscillator. J Chem Theory Comput. 2013;9(12):5430–5449.
Huang, J, Lemkul, JA, Eastman, PK, MacKerell, AD Jr. Molecular dynamics simulations using the Drude polarizable force field on GPUs with OpenMM: Implementation, validation, and benchmarks. J Comput Chem. 2018;39(21):1682–1689.
Chowdhary, J, Harder, E, Lopes, PE, Huang, L, MacKerell, AD Jr, Roux, B. A polarizable force field of dipalmitoylphosphatidylcholine based on the classical Drude model for molecular dynamics simulations of lipids. J Phys Chem B. 2013;117(31):9142–9160.
Eastman, P, Swails, J, Chodera, JD, et al. OpenMM 7: Rapid development of high performance algorithms for molecular dynamics. PLoS Comput Biol. 2017;13(7):e1005659.
Eastman, P, Pande, V. Accelerating development and execution speed with just‐in‐time GPU code generation. GPU Computing Gems Jade Edition. Amsterdam, NL: Elsevier, 2012; p. 399–407.
Harvey, M, Giupponi, G, De Fabritiis, G. ACEMD: Accelerating bio‐molecular dynamics in the microsecond time‐scale. J Chem Theory Comput. 2009;5:1632–1639.
Acellera. ACEMD MD Engine. n.d. [Online; accessed 29 Mar 2019]. Available from: https://www.acellera.com/products/molecular-dynamics-software-gpu-acemd/.
Harvey, MJ, De Fabritiis, G. An implementation of the smooth particle mesh Ewald method on GPU hardware. J Chem Theory Comput. 2009;5(9):2371–2377.
Buch, I, Harvey, MJ, Giorgino, T, Anderson, DP, De Fabritiis, G. High‐throughput all‐atom molecular dynamics simulations using distributed computing. J Chem Inf Model. 2010;50(3):397–403.
Schmid, N, Bötschi, M, Van Gunsteren, WF. A GPU solvent‐solvent interaction calculation accelerator for biomolecular simulations using the GROMOS software. J Comput Chem. 2010;31(8):1636–1643.
GROMOS developers. Biomolecular Simulation—The GROMOS Software. n.d. [Online; accessed 29 Mar 2019]. Available from: http://gromos.net/.
Schmid, N, Christ, CD, Christen, M, Eichenberger, AP, van Gunsteren, WF. Architecture, implementation and parallelisation of the GROMOS software for biomolecular simulation. Comput Phys Commun. 2012;183(4):890–903.
Trott, CR, Winterfeld, L, Crozier, PS. General‐purpose molecular dynamics simulations on GPU‐based clusters. arXiv preprint arXiv:1009.4330. 2010.
LAMMPS developers. LAMMPS. n.d. [Online; accessed 29 Mar 2019]. Available from: https://lammps.sandia.gov/.
Brown, WM, Wang, P, Plimpton, SJ, Tharrington, AN. Implementing molecular dynamics on hybrid high performance computers—Short range forces. Comput Phys Commun. 2011;182(4):898–911.
Gay, JG, Berne, BJ. Modification of the overlap potential to mimic a linear site–site potential. J Chem Phys. 1981;74(6):3316–3319.
Brown, WM, Kohlmeyer, A, Plimpton, SJ, Tharrington, AN. Implementing molecular dynamics on hybrid high performance computers—Particle‐particle particle‐mesh. Comput Phys Commun. 2012;183(3):449–459.
Götz, AW, Williamson, MJ, Xu, D, Poole, D, Le Grand, S, Walker, RC. Routine microsecond molecular dynamics simulations with AMBER on GPUs. 1. Generalized Born. J Chem Theory Comput. 2012;8:1542–1555.
AMBER developers. The Amber Molecular Dynamics Package. n.d. [Online; accessed 29 Mar 2019]. Available from: http://ambermd.org/.
Case, DA, Babin, V, Berryman, J, et al. Amber 14. Oakland, California: University of California, 2014.
Salomon‐Ferrer, R, Case, DA, Walker, RC. An overview of the Amber biomolecular simulation package. Wiley Interdiscip Rev Comput Mol Sci. 2013a;3(2):198–210.
Le Grand, S, Götz, AW, Walker, RC. SPFP: Speed without compromise—A mixed precision model for GPU accelerated molecular dynamics simulations. Comput Phys Commun. 2013;184(2):374–380.
Salomon‐Ferrer, R, Götz, AW, Poole, D, Grand, SL, Walker, RC. Routine microsecond molecular dynamics simulations with AMBER on GPUs. 2. Particle mesh Ewald. J Chem Theory Comput. 2013b;9:3878–3888.
Betz, RM, DeBardeleben, NA, Walker, RC. An investigation of the effects of hard and soft errors on graphics processing unit‐accelerated molecular dynamics simulations. Concurr Comp Pract E. 2014;26(13):2134–2140.
Lee, T‐S, Cerutti, DS, Mermelstein, D, et al. GPU‐accelerated molecular dynamics and free energy methods in Amber18: Performance enhancements and new features. J Chem Inf Model. 2018;58(10):2043–2050.
Straatsma, T, Berendsen, H. Free energy of ionic hydration: Analysis of a thermodynamic integration technique to evaluate free energy differences by molecular dynamics simulations. J Chem Phys. 1988;89(9):5876–5886.
Lee, T‐S, Hu, Y, Sherborne, B, Guo, Z, York, DM. Toward fast and accurate binding affinity prediction with pmemdGTI: An efficient implementation of GPU‐accelerated thermodynamic integration. J Chem Theory Comput. 2017;13(7):3077–3084.
van der Spoel, D, Hess, B. GROMACS—The road ahead. Wiley Interdiscip Rev Comput Mol Sci. 2011;1(5):710–715.
Páll, S, Hess, B. A flexible algorithm for calculating pair interactions on SIMD architectures. Comput Phys Commun. 2013;184(12):2641–2650.
GROMACS developers. GROMACS. n.d. [Online; accessed 29 Mar 2019]. Available from: http://www.gromacs.org/.
Páll, S, Abraham, MJ, Kutzner, C, Hess, B, Lindahl, E. Tackling exascale software challenges in molecular dynamics simulations with GROMACS. In: Markidis, S, Laure, E, editors. Solving software challenges for exascale. New York, NY: Springer International Publishing, 2015; p. 3–27.
Abraham, MJ, Murtola, T, Schulz, R, et al. GROMACS: High performance molecular simulations through multi‐level parallelism from laptops to supercomputers. SoftwareX. 2015;1‐2:19–25.
Lemkul, JA, Roux, B, van der Spoel, D, MacKerell, AD Jr. Implementation of extended Lagrangian dynamics in GROMACS for polarizable simulations using the classical Drude oscillator model. J Comput Chem. 2015;36(19):1473–1479.
Kutzner, C, Páll, S, Fechner, M, Esztermann, A, de Groot, BL, Grubmüller, H. Best bang for your buck: GPU nodes for GROMACS biomolecular simulations. J Comput Chem. 2015;36(26):1990–2008.
Kutzner, C, Páll, S, Fechner, M, Esztermann, A, de Groot, BL, Grubmüller, H. More bang for your buck: Improved use of GPU nodes for GROMACS 2018. arXiv preprint arXiv:1903.05918. 2019.
Davis, JE, Ozsoy, A, Patel, S, Taufer, M. Towards large‐scale molecular dynamics simulations on graphics processors. Bioinformatics and Computational Biology. Berlin, Germany: Springer, 2009; p. 176–186.
Bauer, BA, Davis, JE, Taufer, M, Patel, S. Molecular dynamics simulations of aqueous ions at the liquid‐vapor interface accelerated using graphics processors. J Comput Chem. 2011;32(3):375–385.
Pratas, F, Mata, RA, Sousa, L. Iterative induced dipoles computation for molecular mechanics on GPUs. Proceedings of the 3rd Workshop on General‐Purpose Computation on Graphics Processing Units; 2010. p. 111–120.
Zhmurov, A, Dima, R, Kholodov, Y, Barsegov, V. SOP‐GPU: Accelerating biomolecular simulations in the centisecond timescale using graphics processors. Proteins. 2010;78(14):2984–2999.
Hyeon, C, Dima, RI, Thirumalai, D. Pathways and kinetic barriers in mechanical unfolding and refolding of RNA and proteins. Structure. 2006;14(11):1633–1645.
SOP‐GPU developers. SOP‐GPU. n.d. [Online; accessed 29 Mar 2019]. Available from: https://faculty.uml.edu/vbarsegov/gpu/sop/sop.html.
Xu, J, Ren, Y, Ge, W, Yu, X, Yang, X, Li, J. Molecular dynamics simulation of macromolecules using graphics processing unit. Mol Simulat. 2010b;36(14):1131–1140.
Myung, HJ, Sakamaki, R, Oh, KJ, Narumi, T, Yasuoka, K, Lee, S. Accelerating molecular dynamics simulation using graphics processing unit. Bull Korean Chem Soc. 2010;31(12):3639–3643.
Rapaport, D. Enhanced molecular dynamics performance with a programmable graphics processor. Comput Phys Commun. 2011;182(4):926–934.
Ruymgaart, AP, Cardenas, AE, Elber, R. MOIL‐opt: Energy‐conserving molecular dynamics on a GPU/CPU system. J Chem Theory Comput. 2011;7(10):3072–3082.
Blom, T, Majek, P, Kirmizialtin, S, Elber, R. MOIL. n.d. [Online; accessed 29 Mar 2019]. Available from: https://biohpc.cornell.edu/software/moil/moil.html.
Ganesan, N, Taufer, M, Bauer, B, Patel, S. FENZI: GPU‐enabled molecular dynamics simulations of large membrane regions based on the CHARMM force field and PME. 2011 IEEE International Symposium on Parallel and Distributed Processing Workshops and PhD Forum; 2011. p. 472–480.
FEN ZI developers. FEN ZI. n.d. [Online; accessed 29 Mar 2019]. Available from: https://gcl.cis.udel.edu/projects/fenzi/.
Taufer, M, Ganesan, N, Patel, S. GPU‐enabled macromolecular simulation: Challenges and opportunities. Comput Sci Eng. 2013;15(1):56–65.
Morozov, IV, Kazennov, A, Bystryi, R, Norman, GE, Pisarev, V, Stegailov, VV. Molecular dynamics simulations of the relaxation processes in the condensed matter on GPUs. Comput Phys Commun. 2011;182(9):1974–1978.
Daw, MS, Baskes, MI. Embedded‐atom method: Derivation and application to impurities, surfaces, and other defects in metals. Phys Rev B. 1984;29(12):6443–6453.
Daw, MS, Foiles, SM, Baskes, MI. The embedded‐atom method: A review of theory and applications. Mater Sci Rep. 1993;9(7–8):251–310.
Foiles, S, Baskes, M, Daw, MS. Embedded‐atom‐method functions for the fcc metals Cu, Ag, Au, Ni, Pd, Pt, and their alloys. Phys Rev B. 1986;33(12):7983–7991.
Hou, C, Ge, W. GPU‐accelerated molecular dynamics simulation of solid covalent crystals. Mol Simulat. 2012;38(1):8–15.
Tersoff, J. Empirical interatomic potential for carbon, with applications to amorphous carbon. Phys Rev Lett. 1988a;61(25):2879–2882.
Tersoff, J. Empirical interatomic potential for silicon with improved elastic properties. Phys Rev B. 1988b;38(14):9902–9905.
Tersoff, J. New empirical approach for the structure and energy of covalent systems. Phys Rev B. 1988c;37(12):6991.
Tersoff, J. Modeling solid‐state chemistry: Interatomic potentials for multicomponent systems. Phys Rev B. 1989;39(8):5566–5568.
Stillinger, FH, Weber, TA. Computer simulation of local order in condensed phases of silicon. Phys Rev B. 1985;31(8):5262–5271.
Hou, C, Xu, J, Wang, P, Huang, W, Wang, X. Efficient GPU‐accelerated molecular dynamics simulation of solid covalent crystals. Comput Phys Commun. 2013a;184(5):1364–1371.
Fan, Z, Siro, T, Harju, A. Accelerated molecular dynamics force evaluation on graphics processing units for thermal conductivity calculations. Comput Phys Commun. 2013;184(5):1414–1425.
Buckingham, RA, Lennard‐Jones, JE. The classical equation of state of gaseous helium, neon and argon. Proc Roy Soc Lond A. 1938;168(933):264–283.
Wolf, D, Keblinski, P, Phillpot, SR, Eggebrecht, J. Exact method for the simulation of Coulombic systems by spherically truncated, pairwise r⁻¹ summation. J Chem Phys. 1999;110(17):8254–8282.
Van Duin, AC, Dasgupta, S, Lorant, F, Goddard, WA. ReaxFF: A reactive force field for hydrocarbons. J Phys Chem A. 2001;105(41):9396–9409.
Senftle, TP, Hong, S, Islam, MM, et al. The ReaxFF reactive force‐field: Development, applications and future directions. NPJ Comput Mater. 2016;2:15011.
Aktulga, HM, Fogarty, JC, Pandit, SA, Grama, AY. Parallel reactive molecular dynamics: Numerical methods and algorithmic techniques. Parallel Comput. 2012;38(4–5):245–259.
Aktulga, HM, Knight, C, Coffman, P, et al. Optimizing the performance of reactive molecular dynamics simulations for multi‐core architectures. arXiv preprint arXiv:1706.07772. 2017.
Nakano, A, Kalia, RK, Nomura, K‐i, et al. De novo ultrascale atomistic simulations on high‐end parallel supercomputers. Int J High Perform Comput Appl. 2008;22(1):113–128.
Zheng, M, Li, X, Guo, L. Algorithms of GPU‐enabled reactive force field (ReaxFF) molecular dynamics. J Mol Graph Model. 2013;41:1–11.
Hou, Q, Li, M, Zhou, Y, Cui, J, Cui, Z, Wang, J. Molecular dynamics simulations with many‐body potentials on multiple GPUs—The implementation, package and performance. Comput Phys Commun. 2013b;184(9):2091–2101.
Zhu, Y‐L, Liu, H, Li, Z‐W, Qian, H‐J, Milano, G, Lu, Z‐Y. GALAMOST: GPU‐accelerated large‐scale molecular simulation toolkit. J Comput Chem. 2013;34(25):2197–2211.
GALAMOST developers. GALAMOST. n.d. [Online; accessed 29 Mar 2019]. Available from: http://galamost.ciac.jl.cn/.
Zhu, Y‐L, Pan, D, Li, Z‐W, et al. Employing multi‐GPU power for molecular dynamics simulation: An extension of GALAMOST. Mol Phys. 2018;116(7–8):1065–1077.
Anthopoulos, A, Grimstead, I, Brancale, A. GPU‐accelerated molecular mechanics computations. J Comput Chem. 2013;34(26):2249–2260.
Halgren, TA. MMFF VI. MMFF94s option for energy minimization studies. J Comput Chem. 1999;20(7):720–729.
Brown, WM, Yamada, M. Implementing molecular dynamics on hybrid high performance computers—Three‐body potentials. Comput Phys Commun. 2013;184(12):2785–2793.
Kylasa, SB, Aktulga, HM, Grama, AY. PuReMD‐GPU: A reactive molecular dynamics simulation package for GPUs. J Comput Phys. 2014;272:343–359.
PuReMD developers. PuReMD. n.d. [Online; accessed 29 Mar 2019]. Available from: https://www.cs.purdue.edu/puremd.
Kylasa, SB, Aktulga, HM, Grama, AY. PG‐PuReMD: A parallel‐GPU reactive molecular dynamics package. Department of Computer Science Technical Reports, Paper 1768; 2013. https://docs.lib.purdue.edu/cstech/1768.
Kylasa, SB, Aktulga, HM, Grama, AY. Reactive molecular dynamics on massively parallel heterogeneous architectures. IEEE Trans Parallel Distrib Systems. 2017;28(1):202–214.
Edwards, HC, Trott, CR, Sunderland, D. Kokkos: Enabling manycore performance portability through polymorphic memory access patterns. J Parallel Distrib Comput. 2014;74(12):3202–3216.
Heroux, MA, Doerfler, DW, Crozier, PS, et al. Improving performance via mini‐applications. Sandia National Laboratories, Tech. Rep. SAND2009‐5574, 3; 2009.
Rovigatti, L, Šulc, P, Reguly, IZ, Romano, F. A comparison between parallelization approaches in molecular dynamics simulations on GPUs. J Comput Chem. 2015;36(1):1–8.
OxDNA developers. OxDNA. n.d. [Online; accessed 29 Mar 2019]. Available from: https://dna.physics.ox.ac.uk/index.php/Main_Page.
Šulc, P, Romano, F, Ouldridge, TE, Rovigatti, L, Doye, JPK, Louis, AA. Sequence‐dependent thermodynamics of a coarse‐grained DNA model. J Chem Phys. 2012;137(13):135101.
Minkin, AS, Knizhnik, AA, Potapkin, BV. GPU implementations of some many‐body potentials for molecular dynamics simulations. Adv Eng Softw. 2017;111:43–51.
Minkin, AS, Teslyuk, AB, Knizhnik, AA, Potapkin, BV. GPGPU performance evaluation of some basic molecular dynamics algorithms. 2015 International Conference on High Performance Computing & Simulation (HPCS); 2015. p. 629–634.
Howard, MP, Anderson, JA, Nikoubashman, A, Glotzer, SC, Panagiotopoulos, AZ. Efficient neighbor list calculation for molecular simulation of colloidal systems using graphics processing units. Comput Phys Commun. 2016;203:45–52.
Trȩdak, P, Rudnicki, WR, Majewski, JA. Efficient implementation of the many‐body reactive bond order (REBO) potential on GPU. J Comput Phys. 2016;321:556–570.
Höhnerbach, M, Ismail, AE, Bientinesi, P. The vectorization of the Tersoff multi‐body potential: An exercise in performance portability. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis; 2016. p. 7.
Jung, J, Naurse, A, Kobayashi, C, Sugita, Y. Graphics processing unit acceleration and parallelization of GENESIS for large‐scale molecular dynamics simulations. J Chem Theory Comput. 2016;12(10):4947–4958.
Kobayashi, C, Jung, J, Matsunaga, Y, et al. GENESIS 1.1: A hybrid‐parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms. J Comput Chem. 2017;38(25):2193–2206.
Jung, J, Mori, T, Sugita, Y. Midpoint cell method for hybrid (MPI+OpenMP) parallelization of molecular dynamics simulations. J Comput Chem. 2014;35(14):1064–1072.
Nguyen, TD. GPU‐accelerated Tersoff potentials for massively parallel molecular dynamics simulations. Comput Phys Commun. 2017;212:113–122.
Fan, Z, Chen, W, Vierimaa, V, Harju, A. Efficient molecular dynamics simulations with many‐body potentials on graphics processing units. Comput Phys Commun. 2017;218:10–16.
GPUMD developers. GPUMD. n.d. [Online; accessed 29 Mar 2019]. Available from: https://github.com/brucefan1983/GPUMD.
Bailey, N, Ingebrigtsen, T, Hansen, JS, et al. RUMD: A general purpose molecular dynamics package optimized to utilize GPU hardware down to a few thousand particles. SciPost Phys. 2017;3(6):038.
RUMD developers. RUMD. n.d. [Online; accessed 29 Mar 2019]. Available from: http://rumd.org/.
Yang, L, Zhang, F, Wang, C‐Z, Ho, K‐M, Travesset, A. Implementation of metal‐friendly EAM/FS‐type semi‐empirical potentials in HOOMD‐blue: A GPU‐accelerated molecular dynamics software. J Comput Phys. 2018;359:352–360.
Finnis, M, Sinclair, J. A simple empirical N‐body potential for transition metals. Philos Mag A. 1984;50(1):45–55.
Xiao, G, Ren, M, Hong, H. 50 Million atoms scale molecular dynamics modelling on a single consumer graphics card. Adv Eng Softw. 2018;124:66–72.
Jász, Á, Rák, Á, Ladjánszki, I, Cserey, G. Optimized GPU implementation of Merck molecular force field and universal force field. J Mol Struct. 2019;1188:227–233.
Halgren, TA. Merck molecular force field. I. Basis, form, scope, parameterization, and performance of MMFF94. J Comput Chem. 1996a;17(5–6):490–519.
Halgren, TA. Merck molecular force field. III. Molecular geometries and vibrational frequencies for MMFF94. J Comput Chem. 1996b;17(5–6):553–586.
Halgren, TA. Merck molecular force field. II. MMFF94 van der Waals and electrostatic parameters for intermolecular interactions. J Comput Chem. 1996c;17(5–6):520–552.
Halgren, TA. Merck molecular force field. V. Extension of MMFF94 using experimental data, additional computational data, and empirical rules. J Comput Chem. 1996d;17(5–6):616–641.
Halgren, TA, Nachbar, RB. Merck molecular force field. IV. Conformational energies and geometries for MMFF94. J Comput Chem. 1996;17(5–6):587–615.
Rappe, AK, Casewit, CJ, Colwell, KS, Goddard, WA, Skiff, WM. UFF, a full periodic table force field for molecular mechanics and molecular dynamics simulations. J Am Chem Soc. 1992;114(25):10024–10035.
Turner, D, Andresen, D, Hutson, K, Tygart, A. Application performance on the newest processors and GPUs. Proceedings of the Practice and Experience on Advanced Research Computing; 2018. p. 37.
Biagini, T, Chillemi, G, Mazzoccoli, G, et al. Molecular dynamics recipes for genome research. Brief Bioinform. 2017;19(5):853–862.
Biagini, T, Petrizzelli, F, Truglio, M, et al. Are gaming‐enabled graphic processing unit cards convenient for molecular dynamics simulation? Evol Bioinform. 2019;15:1176934319850144.
Whitehead, N, Fit‐Florea, A. Precision & performance: Floating point and IEEE 754 compliance for NVIDIA GPUs. NVIDIA white paper; 2011.
Colberg, PH, Höfling, F. Highly accelerated simulations of glassy dynamics using GPUs: Caveats on limited floating‐point precision. Comput Phys Commun. 2011;182(5):1120–1129.
Höfling, F, Colberg, P, Höft, N, Kirchner, D, Kopp, M. HAL's MD package. n.d. [Online; accessed 29 Mar 2019]. Available from: https://halmd.org/.
Welton, B, Miller, B. Exposing hidden performance opportunities in high performance GPU applications. 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID); 2018. p. 301–310.