- Sponsor: SIGHPC
Proceeding Downloads
Deep and wide metrics for HPC resource capability and project usage
This paper defines possible quantitative metrics for the qualitative notions of "deep" and "wide" HPC system use, along with the related concepts of capability and capacity computing, and demonstrates their application. By summarizing HPC workloads according ...
How to measure useful, sustained performance
Sustained performance is the amount of useful work a system can produce in a given amount of time on a regular basis. How much useful work a system can achieve is difficult to assess in a simple, general manner because different communities have their ...
Sustained systems performance monitoring at the U.S. Department of Defense high performance computing modernization program
The U.S. Department of Defense High Performance Computing Modernization Program (HPCMP) has implemented sustained systems performance testing on high performance computing systems in use at DoD Supercomputing Resource Centers. The intent is to monitor ...
Performance evaluations of gyrokinetic Eulerian code GT5D on massively parallel multi-core platforms
A gyrokinetic toroidal five-dimensional Eulerian code, GT5D [Y. Idomura et al., Comput. Phys. Commun. 179, 391 (2008)], is ported to five advanced massively parallel platforms and comprehensive benchmark tests are performed. Sustained performances of the ...
System-level monitoring of floating-point performance to improve effective system utilization
NCAR's Bluefire supercomputer is instrumented with a set of low-overhead processes that continually monitor the floating-point counters of its 3,840 batch-compute cores. We extract performance numbers for each batch job by correlating the data from ...
Performance modeling for systematic performance tuning
The performance of parallel scientific applications depends on many factors which are determined by the execution environment and the parallel application. Especially on large parallel systems, it is too expensive to explore the solution space with ...
The NWSC benchmark suite using scientific throughput to measure supercomputer performance
The NCAR-Wyoming Supercomputing Center (NWSC) will begin operating in June 2012, and will house NCAR's next generation HPC system. The NWSC will support a broad spectrum of Earth Science research drawn from a user community with diverse requirements for ...
Integrating multi-touch in high-resolution display environments
High-resolution display environments consisting of many individual displays arrayed to form a single visible surface are commonly used to present large scale data. Using these displays often involves a control paradigm where interactions become ...
Best practices for the deployment and management of production HPC clusters
Commodity-based Linux HPC clusters dominate the scientific computing landscape in both academia and industry, ranging from small research clusters to petascale supercomputers supporting thousands of users. To support broad user communities and manage a ...
Wallaby: a scalable semantic configuration service for grids and clouds
Job schedulers for grids and clouds can offer great generality and configurability, but they typically do so at the cost of increased administrator complexity. In this paper, we present Wallaby, an open-source, scalable configuration service for compute ...
Cloud versus in-house cluster: evaluating Amazon cluster compute instances for running MPI applications
The emergence of cloud services brings new possibilities for constructing and using HPC platforms. However, while cloud services provide the flexibility and convenience of customized, pay-as-you-go parallel computing, multiple previous studies in the ...
Qserv: a distributed shared-nothing database for the LSST catalog
The LSST project will provide public access to a database catalog that, in its final year, is estimated to include 26 billion stars and galaxies and dozens of trillions of detections, spanning multiple petabytes. Because we are not aware of an existing open-source ...
Advanced Institute for Computational Science (AICS): Japanese National High-Performance Computing Research Institute and its 10-petaflops supercomputer "K"
The Advanced Institute for Computational Science (AICS) was created in July 2010 at RIKEN under the supervision of the Japanese Ministry of Education, Culture, Sports, Science, and Technology (MEXT) in order to establish the national center of excellence (COE) ...
Intrusion detection at 100G
Driven by the growing data transfer needs of the scientific community and the standardization of the 100 Gbps Ethernet Specification, 100 Gbps is now becoming a reality for many HPC sites. This tenfold increase in bandwidth creates a number of ...
A long-distance InfiniBand interconnection between two clusters in production use
We discuss operational and organizational issues of an InfiniBand interconnection between two clusters over a distance of 28 km in day-to-day production use. We describe the setup of hardware and networking components, and the solution of technical ...
Janus: co-designing HPC systems and facilities
- Henry M. Tufo,
- Michael K. Patterson,
- Michael Oberg,
- Matthew Woitaszek,
- Guy Cobb,
- Robert Strong,
- Jim Gutowski
The design and procurement of supercomputers may require months, but the construction of a facility to house a supercomputer can extend to years. This paper describes the design and construction of a Top-50 supercomputer system and a fully-customized ...
"Hot" for warm water cooling
Liquid cooling is key to reducing energy consumption for this generation of supercomputers and remains on the roadmap for the foreseeable future. This is because the heat capacity of liquids is orders of magnitude larger than that of air and once heat ...
Challenges in the management of high-performance computing centers: an organizational perspective
To deliver consistently, high performance computing (HPC) centers must execute in both technical and organizational domains. While the technical domain certainly involves formidable challenges, scholars have indicated that organizational challenges in ...
A survey of the practice of computational science
- Prakash Prabhu,
- Thomas B. Jablin,
- Arun Raman,
- Yun Zhang,
- Jialu Huang,
- Hanjun Kim,
- Nick P. Johnson,
- Feng Liu,
- Soumyadeep Ghosh,
- Stephen Beard,
- Taewook Oh,
- Matthew Zoufaly,
- David Walker,
- David I. August
Computing plays an indispensable role in scientific research. Presently, researchers in science have different problems, needs, and beliefs about computation from those of professional programmers. In order to accelerate the progress of science, computer ...
Adaptive simulation of turbulent flow past a full car model
The massive computational cost for resolving all turbulent scales makes a direct numerical simulation of the underlying Navier-Stokes equations impossible in most engineering applications. We present recent advances in parallel adaptive finite element ...
World-highest resolution global atmospheric model and its performance on the Earth Simulator
- Keiko Takahashi,
- Akira Azami,
- Yuki Tochihara,
- Yoshiyuki Kubo,
- Ken'ichi Itakura,
- Koji Goto,
- Kenryo Kataumi,
- Hiroshi Takahara,
- Yoko Isobe,
- Satoru Okura,
- Hiromitsu Fuchigami,
- Jun-ichi Yamamoto,
- Toshifumi Takei,
- Yoshinori Tsuda,
- Kunihiko Watanabe
Mechanisms of interaction among phenomena at different scales play important roles in the forecasting of weather and climate. Multi-scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-...
Challenges of HPC monitoring
At a recent meeting of monitoring experts from nine large supercomputing centers, there was a broad divergence of opinion on what monitoring in our environment actually is, what ought to be monitored, what technology should be used, etc. Broad consensus ...
LOGJAM: a scalable unified log file archiver
Log files are a necessary record of events on any system. However, as systems scale, so does the volume of data captured. To complicate matters, this data can be distributed across all nodes within the system. This creates challenges in how to obtain ...
A toolkit for event analysis and logging
This report describes the Toolkit for Event Analysis and Logging that came out of IBM's effort to converge its support for low level HPC event analysis. The toolkit is designed as a pluggable processing pipeline that allows different components to use ...
SPOTlight on testing: stability, performance and operational testing of LANL HPC clusters
Testing is sometimes a forgotten component of system management, but it becomes very important in the realm of High Performance Computing (HPC) clusters. Many large-scale HPC cluster installations are one of a kind, with unknown issues and unexpected ...
One stop high performance computing user support at SNL
To improve the quality of user support for scientific, engineering, and high performance computing customers, the HPC OneStop Team unified the customer support activities of ten separate groups at Sandia National Laboratories (SNL). To our user ...
Acceptance Rates
| Year | Submitted | Accepted | Rate |
|---|---|---|---|
| SC '17 | 327 | 61 | 19% |
| SC '16 | 442 | 81 | 18% |
| SC '15 | 358 | 79 | 22% |
| SC '14 | 394 | 83 | 21% |
| SC '13 | 449 | 91 | 20% |
| SC '12 | 461 | 100 | 22% |
| SC '11 | 352 | 74 | 21% |
| SC '10 | 253 | 51 | 20% |
| SC '09 | 261 | 59 | 23% |
| SC '08 | 277 | 59 | 21% |
| SC '07 | 268 | 54 | 20% |
| SC '06 | 239 | 54 | 23% |
| SC '05 | 260 | 62 | 24% |
| SC '04 | 200 | 60 | 30% |
| SC '03 | 207 | 60 | 29% |
| SC '02 | 230 | 67 | 29% |
| SC '01 | 240 | 60 | 25% |
| SC '00 | 179 | 62 | 35% |
| Supercomputing '95 | 241 | 69 | 29% |
| Supercomputing '93 | 300 | 72 | 24% |
| Supercomputing '92 | 220 | 75 | 34% |
| Supercomputing '91 | 215 | 83 | 39% |
| Overall | 6,373 | 1,516 | 24% |