EuroMPI '19: Proceedings of the 26th European MPI Users' Group Meeting
ACM 2019 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
EuroMPI 2019: 26th European MPI Users' Group Meeting, Zürich, Switzerland, September 11–13, 2019
ISBN:
978-1-4503-7175-9
DOI:
10.1145/3343211
Published:
11 September 2019

Abstract

Since 1994, the EuroMPI conference has been the preeminent meeting for users, developers, and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). This includes parallel programming interfaces, libraries and languages, architectures, networks, algorithms, tools, applications, and High Performance Computing, with a particular focus on quality, portability, performance, and scalability. EuroMPI is not exclusively dedicated to MPI: it also welcomes contributions on extensions or alternative interfaces for high-performance homogeneous/heterogeneous/hybrid systems, benchmarks, tools, parallel I/O, fault tolerance, and parallel applications using MPI and other interfaces. Through contributed papers, posters, and invited talks, the EuroMPI meeting provides opportunities for attendees to interact, share ideas and experiences, and contribute to the improvement and furthering of message-passing and related parallel programming paradigms.

abstract
Foreword EuroMPI 2019
Article No.: 1, Pages 1–2, https://doi.org/10.1145/3343211.3343212
SESSION: Message-passing and PGAS interfaces
research-article
Public Access
Exposition, clarification, and expansion of MPI semantic terms and conventions: is a nonblocking MPI function permitted to block?
Article No.: 2, Pages 1–10, https://doi.org/10.1145/3343211.3343213

This paper offers a timely study and proposed clarifications, revisions, and enhancements to the Message Passing Interface's (MPI's) Semantic Terms and Conventions. To enhance MPI, a clearer understanding of the meaning of the key terminology has proven ...
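
For readers outside the MPI Forum's discussions, the terminological subtlety is easy to demonstrate with standard MPI-3 calls: a "nonblocking" function such as MPI_Isend is only guaranteed to return before the operation completes, which is a weaker promise than never stalling. A minimal sketch (run with at least two ranks):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, payload = 42, recvbuf = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* "Nonblocking" means MPI_Isend returns without waiting for the
             * send to complete; the standard does not obviously forbid the
             * call itself from stalling internally, which is the ambiguity
             * the paper examines. */
            MPI_Isend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            /* ... overlap useful computation here ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);   /* completion happens here */
        } else if (rank == 1) {
            MPI_Irecv(&recvbuf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", recvbuf);
        }

        MPI_Finalize();
        return 0;
    }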

research-article
Persistent coarrays: integrating MPI storage windows in coarray fortran
Article No.: 3, Pages 1–8, https://doi.org/10.1145/3343211.3343214

The integration of novel hardware and software components in HPC is expected to considerably degrade the Mean Time Between Failures (MTBF) experienced by scientific applications, while simultaneously increasing the programming complexity of these clusters. ...

research-article
QMPI: a next generation MPI profiling interface for modern HPC platforms
Article No.: 4, Pages 1–10, https://doi.org/10.1145/3343211.3343215

As we approach exascale and start planning for beyond, the rising complexity of systems and applications demands new monitoring, analysis, and optimization approaches. This requires close coordination with the parallel programming system used, which for ...
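
QMPI is positioned as a successor to MPI's existing PMPI profiling layer, in which a tool intercepts an MPI call by redefining it and forwarding to the corresponding PMPI_ entry point; one well-known limitation is that only a single such tool can be linked at a time. A minimal sketch of the classic PMPI pattern (the timing logic is illustrative, not the paper's tool):

    #include <mpi.h>
    #include <stdio.h>

    /* Classic PMPI interception: the tool redefines MPI_Send and forwards
     * to the PMPI_ symbol. Only one library can occupy this hook, which is
     * one of the limitations a next-generation interface must lift. */
    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        double t0 = MPI_Wtime();
        int rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
        fprintf(stderr, "MPI_Send to %d took %.6f s\n", dest, MPI_Wtime() - t0);
        return rc;
    }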

research-article
Efficient notifications for MPI one-sided applications
Article No.: 5, Pages 1–10, https://doi.org/10.1145/3343211.3343216

MPI one-sided communication has the potential to increase application performance by reducing noise on remote processors. It consists of Remote Memory Accesses, roughly orchestrated as two types of operations: memory synchronizations and actual ...
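
As a concrete reminder of the two operation classes named above, the sketch below pairs an actual RMA access (MPI_Put) with the simplest memory synchronization (MPI_Win_fence); notified accesses aim to avoid exactly this kind of heavyweight synchronization. Standard MPI-3 calls, illustrative only:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, local = 0, value;
        MPI_Win win;

        MPI_Init(&argc, &argv);   /* run with at least two ranks */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Expose one int of local memory for remote access. */
        MPI_Win_create(&local, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);               /* memory synchronization */
        if (rank == 0) {
            value = 42;
            /* Actual RMA access: write into rank 1 without its involvement. */
            MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);               /* target learns the data arrived */

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }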

research-article
An MPI interface for application and hardware aware Cartesian topology optimization
Article No.: 6, Pages 1–8, https://doi.org/10.1145/3343211.3343217

Many scientific applications perform computations on a Cartesian grid. The common approach for the parallelization of these applications with MPI is domain decomposition. To help developers with the mapping of MPI processes to subdomains, the MPI ...
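
The proposal builds on MPI's existing Cartesian topology routines, which factor the process count and map ranks to subdomains with no knowledge of the application's communication pattern or the hardware; that baseline, using only standard calls, looks like this:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int size, rank, dims[2] = {0, 0}, periods[2] = {0, 0}, coords[2];
        MPI_Comm cart;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Factor the process count into a 2-D grid; this is application-
         * and hardware-oblivious, which is what the paper aims to improve. */
        MPI_Dims_create(size, 2, dims);
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, /*reorder=*/1, &cart);

        MPI_Comm_rank(cart, &rank);
        MPI_Cart_coords(cart, rank, 2, coords);
        printf("rank %d -> subdomain (%d, %d) of %d x %d\n",
               rank, coords[0], coords[1], dims[0], dims[1]);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }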

SESSION: Applications of and for MPI
research-article
Analysis of model parallelism for distributed neural networks
Article No.: 7, Pages 1–10, https://doi.org/10.1145/3343211.3343218

We analyze the performance of model parallelism applied to the training of deep neural networks on clusters. For this study, we develop a parameterized analytical performance model that captures the main computational and communication stages in ...
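
The listing truncates the model itself, so as orientation only: analytical models of this kind are typically composed from per-stage terms such as the standard latency-bandwidth (alpha-beta) cost of moving n bytes, a generic sketch rather than the paper's actual formulation:

    % Generic per-stage cost term (illustrative only, not the paper's model):
    %   alpha = per-message latency, beta = per-byte transfer time,
    %   gamma = per-unit compute time.
    T_{\text{stage}}(n, f) \approx \alpha + \beta\, n + \gamma\, f,
    \qquad
    T_{\text{total}} = \sum_{s \in \text{stages}} T_s(n_s, f_s)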

research-article
Implementing efficient message logging protocols as MPI application extensions
Article No.: 8, Pages 1–11, https://doi.org/10.1145/3343211.3343219

Message logging protocols are enablers of local rollback, a more efficient alternative to global rollback, for fault-tolerant MPI applications. Until now, message logging MPI implementations have incurred the overheads of a redesign and redeployment of ...

research-article
A performance analysis and optimization of PMIx-based HPC software stacks
Article No.: 9, Pages 1–10, https://doi.org/10.1145/3343211.3343220

Process management libraries and runtime environments serve an important role in the HPC application lifecycle. This work provides a roadmap for implementing a high-performance PMIx-based software stack and targets four performance-critical areas ...
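
For context, PMIx is the interface through which a launcher or resource manager wires parallel processes up at startup. The hedged sketch below shows typical client-side calls whose cost such a study would target; it follows my reading of the PMIx client API, and details may differ across PMIx versions:

    #include <pmix.h>
    #include <stdio.h>

    int main(void)
    {
        pmix_proc_t myproc, wildcard;
        pmix_value_t *val;

        /* Connect to the local PMIx server (e.g., the resource manager). */
        if (PMIx_Init(&myproc, NULL, 0) != PMIX_SUCCESS)
            return 1;

        /* Job-level lookup: how many processes are in this namespace? */
        PMIX_LOAD_PROCID(&wildcard, myproc.nspace, PMIX_RANK_WILDCARD);
        if (PMIx_Get(&wildcard, PMIX_JOB_SIZE, NULL, 0, &val) == PMIX_SUCCESS)
            printf("rank %u of %u\n", myproc.rank, val->data.uint32);

        /* Barrier-like exchange across the job; a hot spot at scale. */
        PMIx_Fence(NULL, 0, NULL, 0);

        PMIx_Finalize(NULL, 0);
        return 0;
    }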

SESSION: Performance, implementations, hardware
research-article
Mixing ranks, tasks, progress and nonblocking collectives
Article No.: 10, Pages 1–10, https://doi.org/10.1145/3343211.3343221

Since the beginning, MPI has defined the rank as an implicit attribute associated with the MPI process's environment. In particular, each MPI process generally runs inside a given UNIX process and is associated with a fixed identifier in its WORLD ...
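
The nonblocking collectives in the title are the MPI-3 MPI_I* variants, which return a request so computation can proceed while the collective progresses, provided something actually drives that progress; this is where the interplay of ranks, tasks, and progress becomes delicate. A minimal sketch:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, local_sum = 1, global_sum = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Start the collective; every rank in the WORLD communicator
         * participates under its fixed rank identifier. */
        MPI_Iallreduce(&local_sum, &global_sum, 1, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* ... independent computation here; the reduction only advances
         * if the implementation makes progress on it, which is the
         * progress question the paper examines ... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Finalize();
        return 0;
    }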

research-article
Minimizing the usage of hardware counters for collective communication using triggered operations
Article No.: 11, Pages 1–10, https://doi.org/10.1145/3343211.3343222

Triggered operations and counting events or counters are building blocks that can be used by communication libraries, such as MPI, to offload collective operations to the Host Fabric Interface (HFI) or Network Interface Card (NIC). Triggered operations ...
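
The building block is easiest to see in a toy model: a triggered operation is posted up front together with a counter and a threshold, and the NIC fires it without host involvement once the counter reaches the threshold; because hardware counters are scarce, minimizing their use matters. The sketch below is a host-side simulation with invented names, not any vendor's API:

    #include <stdio.h>

    /* Host-side toy model of triggered-operation semantics (the real
     * mechanism runs on the HFI/NIC; all names here are invented for
     * illustration). A "triggered put" is registered with a counter and a
     * threshold and fires when the counter reaches the threshold. */

    #define MAX_TRIGGERED 8

    typedef struct { int threshold; const char *dest; int fired; } trig_put_t;

    typedef struct {
        int value;                        /* counting event ("counter") */
        trig_put_t pending[MAX_TRIGGERED];
        int npending;
    } counter_t;

    static void triggered_put(counter_t *c, const char *dest, int threshold)
    {
        c->pending[c->npending++] = (trig_put_t){ threshold, dest, 0 };
    }

    static void counter_bump(counter_t *c)   /* e.g., a message arrived */
    {
        c->value++;
        for (int i = 0; i < c->npending; i++)
            if (!c->pending[i].fired && c->value >= c->pending[i].threshold) {
                c->pending[i].fired = 1;
                printf("put to %s fired at count %d\n",
                       c->pending[i].dest, c->value);
            }
    }

    int main(void)
    {
        counter_t c = {0};
        /* Pre-post the forwarding puts of a small broadcast tree: they
         * fire as soon as the parent's message has arrived (count >= 1). */
        triggered_put(&c, "child A", 1);
        triggered_put(&c, "child B", 1);
        counter_bump(&c);   /* arrival of the parent's message */
        return 0;
    }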

research-article
Evaluating tradeoffs between MPI message matching offload hardware capacity and performance
Article No.: 12, Pages 1–11, https://doi.org/10.1145/3343211.3343223

Although its demise has been frequently predicted, the Message Passing Interface (MPI) remains the dominant programming model for scientific applications running on high-performance computing (HPC) systems. MPI specifies powerful semantics for ...
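
The matching semantics at issue are visible in any receive: MPI must pair each incoming message with a posted receive by (source, tag, communicator), honoring wildcards and posting order, which is what makes fixed-capacity hardware offload tricky. A minimal illustration with standard calls (run with at least two ranks):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, a = 1, b = 2, buf;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Send(&a, 1, MPI_INT, 1, /*tag=*/10, MPI_COMM_WORLD);
            MPI_Send(&b, 1, MPI_INT, 1, /*tag=*/20, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Wildcards force the implementation to search its matching
             * structures (posted-receive and unexpected-message queues)
             * rather than match on an exact (source, tag) key. */
            for (int i = 0; i < 2; i++) {
                MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &status);
                printf("got %d from rank %d with tag %d\n",
                       buf, status.MPI_SOURCE, status.MPI_TAG);
            }
        }

        MPI_Finalize();
        return 0;
    }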

research-article
MPI tag matching performance on ConnectX and ARM
Article No.: 13, Pages 1–10, https://doi.org/10.1145/3343211.3343224

As we approach Exascale, message matching has increasingly become a significant factor in HPC application performance. To address this, network vendors have placed higher precedence on improving MPI message matching performance. ConnectX-5, Mellanox's ...

research-article
Runtime level failure detection and propagation in HPC systems
Article No.: 14, Pages 1–11, https://doi.org/10.1145/3343211.3343225

As the scale of high-performance computing (HPC) systems continues to grow, the mean-time-to-failure (MTTF) of these systems decreases. In order to efficiently run long computing jobs on these systems, handling system ...
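
At the MPI interface, the building block an application sees for any such scheme is the error handler: errors abort the job by default (MPI_ERRORS_ARE_FATAL), and switching to MPI_ERRORS_RETURN lets a program observe failures as return codes, while detection and propagation across the runtime, this paper's subject, must happen underneath. A standard-MPI sketch:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rc, buf = 0;

        MPI_Init(&argc, &argv);

        /* Default is MPI_ERRORS_ARE_FATAL; opt in to seeing error codes. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        rc = MPI_Bcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (rc != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING]; int len;
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "broadcast failed: %s\n", msg);
            /* What to do next (who else knows? which ranks survive?) is
             * exactly the runtime-level detection/propagation problem. */
        }

        MPI_Finalize();
        return 0;
    }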

Contributors
  • Swiss Federal Institute of Technology, Zurich
  • Vienna University of Technology


Acceptance Rates

Year          Submitted  Accepted  Rate
EuroMPI '19          26        13   50%
EuroMPI '17          37        17   46%
EuroMPI '15          29        14   48%
EuroMPI '13          47        22   47%
Overall             139        66   47%