DOI: 10.1145/3219104.3219134

Public Access

What You Should Know About NAMD and Charm++ But Were Hoping to Ignore

Published: 22 July 2018

Abstract

The biomolecular simulation program NAMD is used heavily at many HPC centers. Supporting NAMD users requires knowledge of the Charm++ parallel runtime system on which NAMD is built. Introduced in 1993, Charm++ supports message-driven, task-based, and other programming models and has demonstrated its portability across generations of architectures, interconnects, and operating systems. While Charm++ can use MPI as a portable communication layer, specialized high-performance layers are preferred for Cray, IBM, and InfiniBand networks, and a new OFI layer supports Omni-Path. NAMD binaries using some specialized layers can be launched directly with mpiexec or its equivalent, or mpiexec can be called by the charmrun program to leverage system job-launch mechanisms. Charm++ supports multi-threaded parallelism within each process, with a single thread dedicated to communication and the rest to computation. The optimal balance between thread and process parallelism depends on the size of the simulation, the features used, memory limitations, node count, and the core count and NUMA structure of each node. It is also important to enable the Charm++ built-in CPU affinity settings to bind worker and communication threads appropriately to processor cores. Appropriate execution configuration and CPU affinity settings are particularly non-intuitive on Intel KNL processors due to their high core counts and flat NUMA hierarchy. Rules and heuristics for default settings provide good performance in most cases and dramatically reduce the search space when optimizing for a specific simulation on a particular machine. Upcoming Charm++ and NAMD releases will simplify and automate launch configuration and affinity settings.
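To make the thread, process, and affinity trade-offs above concrete, the following minimal sketch (a hypothetical helper, not code from the paper) computes one plausible set of Charm++ SMP options for a node. It assumes one communication thread per process pinned to its own core, contiguous core blocks per process, and no use of hardware threads; +ppn, +pemap, +commap, and +setcpuaffinity are standard Charm++/NAMD SMP flags, but the core maps shown are illustrative only, and the best values still depend on the simulation and machine, as noted above.

# Hedged sketch (not from the paper): derive Charm++ SMP launch options
# (+ppn, +pemap, +commap, +setcpuaffinity) for one node, reserving one
# core per process for the communication thread and binding the
# remaining cores to worker threads in contiguous blocks.

def charm_smp_options(cores_per_node, procs_per_node):
    cores_per_proc = cores_per_node // procs_per_node
    ppn = cores_per_proc - 1          # worker threads per process
    pemap, commap = [], []            # core ranges for workers / comm threads
    for p in range(procs_per_node):
        base = p * cores_per_proc
        pemap.append(f"{base}-{base + ppn - 1}")
        commap.append(str(base + ppn))
    return (f"+ppn {ppn} +pemap {','.join(pemap)} "
            f"+commap {','.join(commap)} +setcpuaffinity")

# Example: a 68-core KNL node split into 4 processes yields
#   +ppn 16 +pemap 0-15,17-32,34-49,51-66 +commap 16,33,50,67 +setcpuaffinity
print(charm_smp_options(68, 4))

The resulting options would typically be appended to the namd2 command line, whether that line is launched through charmrun or directly with mpiexec or its equivalent, depending on how the Charm++ network layer was built.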


Published In

PEARC '18: Proceedings of the Practice and Experience on Advanced Research Computing: Seamless Creativity
July 2018
652 pages
ISBN: 9781450364461
DOI: 10.1145/3219104
© 2018 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the United States Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Charm++
  2. NAMD
  3. high-performance computing
  4. molecular dynamics
  5. scientific software tuning
  6. structural biology
  7. user support

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

PEARC '18

Acceptance Rates

PEARC '18 paper acceptance rate: 79 of 123 submissions (64%)
Overall acceptance rate: 133 of 202 submissions (66%)
