DOI: 10.1145/2792745.2792788

CDD: computational discovery desktop

Published: 26 July 2015

Abstract

Computational Discovery Desktop (CDD) is application-independent middleware that assists researchers in their daily interaction with distributed HPC resources. It provides computational project management at the personal, research-group, and inter-institutional collaboration levels. The CDD framework integrates typical operations, including multi-user access to project data, application deployment, input preparation, job scheduling, state gathering, and data archival, into the researcher's daily work by offering a common interface to frequently performed tasks. The implemented scheduling options support extensive logic for job dependencies and for submitting and cancelling jobs at specified times or under specified conditions. The job state monitor tracks job progress and can terminate a job when a predefined condition is met, e.g. when the job runs too slowly because of system issues. The interface can easily be extended with additional operations. Users are free to choose the level at which their day-to-day operations are integrated into the CDD framework, based on their individual habits and experience with HPC resources. CDD relies on a relational MySQL database back-end. It uses the cron utility to perform scheduled off-line tasks and to create periodic reports on job and project states. CDD uses XML to manage input files and to store post-processed metadata. It comes with an extensive set of command-line tools for user interaction, written in Perl to simplify modification and avoid compilation. CDD employs the Globus Toolkit for certificate-based authentication and data transfer. Currently configured applications include NAMD, Amber, GROMACS, LAMMPS, and NWChem. The applications were tested on NCSA Blue Waters and TACC Stampede.
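
To illustrate the kind of condition-based job monitoring described above, the following is a minimal sketch in Perl (the language of the CDD command-line tools) of a cron-driven check that cancels a stalled job. The log-staleness criterion, the threshold value, and the use of SLURM's scancel are illustrative assumptions for this sketch and do not reflect the actual CDD interface.

    #!/usr/bin/perl
    # Hypothetical sketch: cancel a batch job whose application log has gone
    # stale, in the spirit of CDD's condition-based job state monitor.
    use strict;
    use warnings;

    my ($job_id, $log_file) = @ARGV;      # batch job id and application log file
    my $max_idle_minutes = 30;            # assumed threshold for "no progress"

    die "Usage: $0 <job_id> <log_file>\n" unless defined $job_id and defined $log_file;
    die "No such log file: $log_file\n" unless -e $log_file;

    # Minutes since the application last wrote to its log.
    my $idle_minutes = (time() - (stat $log_file)[9]) / 60;

    # If the job has produced no output for too long (e.g. it is stalled due to
    # system issues), cancel it through the batch scheduler (SLURM assumed here).
    if ($idle_minutes > $max_idle_minutes) {
        system('scancel', $job_id) == 0
            or warn "scancel failed for job $job_id\n";
    }

In a CDD-like setup, such a check would run periodically from a cron entry, e.g. every 15 minutes, against each active job recorded in the project database.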


Published In

XSEDE '15: Proceedings of the 2015 XSEDE Conference: Scientific Advancements Enabled by Enhanced Cyberinfrastructure
July 2015
296 pages
ISBN:9781450337205
DOI:10.1145/2792745
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Sponsors

  • San Diego Supercomputer Center
  • HPCWire
  • Omnibond Systems, LLC
  • SGI
  • Internet2
  • Indiana University
  • Coalition for Academic Scientific Computation (CASC)
  • National Institute for Computational Sciences (NICS)
  • Intel
  • DataDirect Networks, Inc (DDN)
  • Dell
  • CORSA Technology
  • Allinea Software
  • Cray
  • Renaissance Computing Institute (RENCI)

Publisher

Association for Computing Machinery

New York, NY, United States

Qualifiers

  • Research-article

Conference

XSEDE '15

Acceptance Rates

XSEDE '15 paper acceptance rate: 49 of 70 submissions, 70%
Overall acceptance rate: 129 of 190 submissions, 68%

Bibliometrics & Citations

Article Metrics

  • Total Citations: 0
  • Total Downloads: 50
  • Downloads (last 12 months): 1
  • Downloads (last 6 weeks): 0

Reflects downloads up to 28 Jan 2025
