DOI: 10.1109/HPTCDL.2014.7

Petascale Tcl with NAMD, VMD, and Swift/T

Published: 16 November 2014

Abstract

Tcl is the original embeddable dynamic language. Introduced in 1990, Tcl has been the foundation of the scripting interface of the popular biomolecular visualization and analysis program VMD since 1995 and was extended to the parallel molecular dynamics program NAMD in 1999. The two programs together have over 200,000 users who have enjoyed for nearly two decades the stability and flexibility provided by Tcl. VMD users can implement or extend parallel trajectory analysis and movie rendering on thousands of nodes of Blue Waters. NAMD users can implement or extend simulation protocols and multiple-copy algorithms that execute unmodified on any supercomputer without the need to recompile NAMD. We now demonstrate the integration of the Swift/T high-performance parallel scripting language to enable high-level data flow programming in NAMD and VMD. This integration is achieved without modifying or recompiling either program since the Turbine execution engine is itself based on Tcl and is dynamically loaded by the interpreter, as is the platform-specific MPI library on which it depends.
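
To make the integration mechanism concrete, the indented Tcl sketch below illustrates the loading pattern the abstract describes: because Turbine is itself a Tcl package, it can enter an unmodified VMD or NAMD process through the interpreter's own package and library loaders. This is a minimal sketch, not code from the paper; the library path, the analyze_frame procedure, and the commented driver calls are illustrative assumptions.

    # Minimal sketch of the loading pattern described in the abstract.
    # The path below and analyze_frame are hypothetical, and the
    # commented Turbine driver calls are sketched from its Tcl API,
    # whose exact names and signatures vary by version.

    # Turbine is itself a Tcl package, so an unmodified VMD or NAMD
    # binary can pull it into its embedded interpreter at run time:
    package require turbine

    # The platform-specific MPI layer is resolved the same way, via
    # Tcl's dynamic loader (hypothetical path):
    # load /opt/mpi/lib/libtclmpi[info sharedlibextension]

    # A per-frame analysis in ordinary Tcl; inside VMD this body
    # would use built-in commands such as atomselect and measure:
    proc analyze_frame {frame} {
        puts "analyzing frame $frame"
    }

    # Turbine then evaluates dataflow rules as their inputs become
    # available, spreading independent analyze_frame calls across
    # MPI ranks, e.g. (assumed driver sequence):
    # turbine::defaults
    # turbine::init $servers
    # turbine::start main
    # turbine::finalize

The point the sketch makes is the one the abstract emphasizes: every step uses facilities the embedded Tcl interpreter already provides, so neither NAMD nor VMD needs to be modified or recompiled.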


Cited By

  • (2018) What You Should Know About NAMD and Charm++ But Were Hoping to Ignore. Proceedings of the Practice and Experience on Advanced Research Computing, pages 1-6. DOI: 10.1145/3219104.3219134. Online publication date: 22 July 2018.
  • (2016) An accelerated framework for the classification of biological targets from solid-state micropore data. Computer Methods and Programs in Biomedicine, 134:53-67. DOI: 10.1016/j.cmpb.2016.06.001. Online publication date: 1 October 2016.
  • (2015) Interlanguage parallel scripting for distributed-memory scientific computing. Proceedings of the 10th Workshop on Workflows in Support of Large-Scale Science, pages 1-11. DOI: 10.1145/2822332.2822338. Online publication date: 15 November 2015.


Published In

HPTCDL '14: Proceedings of the First Workshop for High Performance Technical Computing in Dynamic Languages
November 2014
67 pages
ISBN: 9781479970209

Publisher

IEEE Press


Author Tags

  1. GPU
  2. many-core
  3. molecular simulation
  4. molecular visualization
  5. parallel rendering
  6. scripting

Qualifiers

  • Research-article

Conference

SC '14
