Since 1994, the EuroMPI conference has been the preeminent meeting for users, developers, and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). This includes parallel programming interfaces, libraries and languages, architectures, networks, algorithms, tools, applications, and high-performance computing, with a particular focus on quality, portability, performance, and scalability. EuroMPI is not exclusively dedicated to MPI: it also welcomes contributions on extensions or alternative interfaces for high-performance homogeneous, heterogeneous, and hybrid systems, benchmarks, tools, parallel I/O, fault tolerance, and parallel applications using MPI and other interfaces. Through contributed papers, posters, and invited talks, the meeting provides opportunities for attendees to interact, share ideas and experiences, and contribute to the improvement and furthering of message-passing and related parallel programming paradigms.
Proceedings
Exposition, clarification, and expansion of MPI semantic terms and conventions: is a nonblocking MPI function permitted to block?
- Purushotham V. Bangalore,
- Rolf Rabenseifner,
- Daniel J. Holmes,
- Julien Jaeger,
- Guillaume Mercier,
- Claudia Blaas-Schenner,
- Anthony Skjellum
This paper offers a timely study of, and proposes clarifications, revisions, and enhancements to, the Message Passing Interface's (MPI's) Semantic Terms and Conventions. To enhance MPI, a clearer understanding of the meaning of its key terminology has proven ...
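As background for the question in the title, here is a minimal sketch of standard nonblocking point-to-point usage in C: MPI_Isend only has to initiate the operation, and completion is established separately, so how long the initiation call itself may delay the caller is exactly the kind of semantic term at issue.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, value = 42;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Request req;
        /* Nonblocking initiation: the call may return before the data
           is transferred; the semantic terms studied in the paper
           govern how much, if at all, it may delay the caller. */
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* Completion is a separate, possibly blocking step. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
```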
Persistent coarrays: integrating MPI storage windows in Coarray Fortran
The integration of novel hardware and software components in HPC systems is expected to considerably worsen the Mean Time Between Failures (MTBF) experienced by scientific applications, while simultaneously increasing the programming complexity of these clusters. ...
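A minimal sketch of the window-allocation side of this idea in C, assuming a hypothetical `alloc_type=storage` info hint (not a standard MPI key; conforming implementations simply ignore unknown keys):

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    /* Hypothetical hint: ask the implementation to back the window
       with storage instead of DRAM, so its contents could persist
       across runs. Not part of standard MPI. */
    MPI_Info_set(info, "alloc_type", "storage");

    double *base;
    MPI_Win win;
    /* Allocate a 1024-element window subject to the hint above. */
    MPI_Win_allocate(1024 * sizeof(double), sizeof(double),
                     info, MPI_COMM_WORLD, &base, &win);

    MPI_Win_free(&win);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```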
QMPI: a next generation MPI profiling interface for modern HPC platforms
As we approach exascale and start planning for beyond, the rising complexity of systems and applications demands new monitoring, analysis, and optimization approaches. This requires close coordination with the parallel programming system used, which for ...
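QMPI's own design is beyond this excerpt, but the PMPI name-shifted profiling interface it is meant to modernize can be illustrated with a minimal interposition tool; linking this object ahead of the MPI library intercepts every MPI_Send.

```c
#include <mpi.h>
#include <stdio.h>

/* A minimal PMPI-style tool: the wrapper shadows MPI_Send and
   forwards to the name-shifted PMPI entry point. QMPI generalizes
   this scheme to multiple concurrently loaded tools. */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm) {
    int rank;
    PMPI_Comm_rank(comm, &rank);
    fprintf(stderr, "[tool] rank %d: MPI_Send to %d, tag %d\n",
            rank, dest, tag);
    /* Forward to the real implementation. */
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
```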
Efficient notifications for MPI one-sided applications
MPI one-sided communication has the potential to increase application performance by reducing noise on remote processors. It consists of Remote Memory Accesses, roughly orchestrated through two types of operations: memory synchronizations and actual ...
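For context, the portable software pattern that hardware-assisted notifications aim to improve on is writing the payload and then a flag, with a flush in between; a minimal passive-target sketch, assuming at least two ranks:

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *base;           /* base[0] = payload, base[1] = flag */
    MPI_Win win;
    MPI_Win_allocate(2 * sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    base[0] = 0; base[1] = 0;
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Win_lock_all(0, win);
    if (rank == 0) {
        int payload = 123, flag = 1;
        MPI_Put(&payload, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_flush(1, win);   /* make the payload visible first */
        MPI_Put(&flag, 1, MPI_INT, 1, 1, 1, MPI_INT, win);
        MPI_Win_flush(1, win);
    } else if (rank == 1) {
        int done = 0;
        while (!done) {          /* poll the software notification */
            MPI_Win_sync(win);
            done = base[1];
        }
    }
    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```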
An MPI interface for application and hardware aware cartesian topology optimization
Many scientific applications perform computations on a Cartesian grid. The common approach for the parallelization of these applications with MPI is domain decomposition. To help developers with the mapping of MPI processes to subdomains, the MPI ...
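The standard machinery being extended here is MPI's Cartesian topology interface; in the minimal sketch below, `reorder = 1` already permits the library to remap ranks onto the grid, and the proposed interface would make that remapping application- and hardware-aware.

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Let MPI factor the process count into a 2-D grid. */
    int dims[2] = {0, 0}, periods[2] = {0, 0};
    MPI_Dims_create(nprocs, 2, dims);

    /* reorder = 1 allows rank remapping during creation. */
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    int left, right, down, up;
    MPI_Cart_shift(cart, 0, 1, &left, &right);  /* neighbors, dim 0 */
    MPI_Cart_shift(cart, 1, 1, &down, &up);     /* neighbors, dim 1 */

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```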
Analysis of model parallelism for distributed neural networks
We analyze the performance of model parallelism applied to the training of deep neural networks on clusters. For this study, we develop a parameterized analytical performance model that captures the main computational and communication stages in ...
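The excerpt does not reproduce the model itself; purely as an illustration of the genre, analytical models of this kind typically sum per-layer compute time with latency-bandwidth communication terms, e.g.

$$ T_{\text{layer}} \approx \frac{F}{P \cdot R} + \alpha + \beta m $$

where $F$ is the layer's floating-point work, $P$ the number of devices, $R$ the per-device compute rate, $\alpha$ the network latency, $\beta$ the inverse bandwidth, and $m$ the volume of activations or gradients exchanged. These symbols are generic assumptions, not the paper's notation.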
Implementing efficient message logging protocols as MPI application extensions
Message logging protocols are enablers of local rollback, a more efficient alternative to global rollback, for fault tolerant MPI applications. Until now, message logging MPI implementations have incurred the overheads of a redesign and redeployment of ...
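As an illustration only (not the paper's implementation), sender-based message logging can be sketched as a PMPI wrapper that copies each outgoing payload into a local log before sending; bounds checks and datatype-packing concerns are elided.

```c
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Toy in-memory log for sender-based message logging: replaying
   these payloads lets a restarted peer roll back locally instead of
   forcing a global rollback. */
struct log_entry { void *payload; int count; int dest; int tag; };

static struct log_entry log_buf[1 << 16];
static int log_len = 0;

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm) {
    int size;
    PMPI_Type_size(datatype, &size);

    /* Log the payload before it leaves the sender. */
    struct log_entry *e = &log_buf[log_len++];
    e->payload = malloc((size_t)count * size);
    memcpy(e->payload, buf, (size_t)count * size);
    e->count = count; e->dest = dest; e->tag = tag;

    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
```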
A performance analysis and optimization of PMIx-based HPC software stacks
Process management libraries and runtime environments serve an important role in the HPC application lifecycle. This work provides a roadmap for implementing high-performance PMIx-based software stacks and targets four performance-critical areas ...
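For context, a minimal PMIx client sketch (written against the PMIx v3-era API; error handling mostly elided) that initializes and queries the job size:

```c
#include <pmix.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Connect to the local PMIx server (typically the resource
       manager daemon) and learn our namespace and rank. */
    pmix_proc_t me;
    PMIx_Init(&me, NULL, 0);

    /* Query a job-level attribute: the number of processes. */
    pmix_proc_t job;
    PMIX_PROC_CONSTRUCT(&job);
    strncpy(job.nspace, me.nspace, PMIX_MAX_NSLEN);
    job.rank = PMIX_RANK_WILDCARD;

    pmix_value_t *val;
    if (PMIx_Get(&job, PMIX_JOB_SIZE, NULL, 0, &val) == PMIX_SUCCESS) {
        printf("rank %u of %u\n", me.rank, val->data.uint32);
        PMIX_VALUE_RELEASE(val);
    }

    PMIx_Finalize(NULL, 0);
    return 0;
}
```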
Mixing ranks, tasks, progress and nonblocking collectives
- Jean-Baptiste Besnard,
- Julien Jaeger,
- Allen D. Malony,
- Sameer Shende,
- Hugo Taboada,
- Marc Pérache,
- Patrick Carribault
Since the beginning, MPI has defined the rank as an implicit attribute associated with the MPI process's environment. In particular, each MPI process generally runs inside a given UNIX process and is associated with a fixed identifier in its WORLD ...
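The progress question raised here shows up in the standard overlap idiom for nonblocking collectives, sketched below: computation proceeds while the application polls MPI_Test to drive the MPI progress engine. Whether progress can instead happen asynchronously, in helper threads or tasks, is the design space such work explores.

```c
#include <mpi.h>

/* Placeholder for application work done while the collective is in
   flight; a real code would compute on independent data. */
static void do_some_work(void) { /* ... */ }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    double local = 1.0, sum = 0.0;
    MPI_Request req;
    MPI_Iallreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* Overlap: keep computing, poking the progress engine via
       MPI_Test until the collective completes. */
    int done = 0;
    while (!done) {
        do_some_work();
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```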
Minimizing the usage of hardware counters for collective communication using triggered operations
Triggered operations and counting events or counters are building blocks that can be used by communication libraries, such as MPI, to offload collective operations to the Host Fabric Interface (HFI) or Network Interface Card (NIC). Triggered operations ...
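The chaining idea can be illustrated with a toy host-side simulation; the names and structures below are illustrative stand-ins for hardware counters, not a real HFI or NIC API.

```c
#include <stdio.h>

/* Toy simulation of triggered operations: a real HFI/NIC would keep
   these counters in hardware. An operation fires once its trigger
   counter reaches a threshold, and firing bumps another counter. */
#define MAX_OPS 16

typedef struct { int value; } counter_t;

typedef struct {
    counter_t *trigger;   /* fire when trigger->value >= threshold */
    int threshold;
    counter_t *completes; /* bumped when the operation "executes"  */
    const char *what;
    int fired;
} triggered_op_t;

static triggered_op_t ops[MAX_OPS];
static int nops = 0;

static void arm(counter_t *trig, int thresh, counter_t *done,
                const char *what) {
    ops[nops++] = (triggered_op_t){trig, thresh, done, what, 0};
}

/* Fire every armed operation whose threshold has been met; firing
   one op may bump a counter that releases the next in the chain. */
static void progress(void) {
    int again = 1;
    while (again) {
        again = 0;
        for (int i = 0; i < nops; i++)
            if (!ops[i].fired && ops[i].trigger->value >= ops[i].threshold) {
                printf("fired: %s\n", ops[i].what);
                ops[i].fired = 1;
                ops[i].completes->value++;
                again = 1;
            }
    }
}

int main(void) {
    counter_t recv_done = {0}, step1 = {0}, step2 = {0};

    /* Chain two "sends": step 2 may only fire after step 1 completes.
       Minimizing how many distinct counters such chains consume is
       the subject of the paper. */
    arm(&recv_done, 1, &step1, "send of step 1");
    arm(&step1, 1, &step2, "send of step 2");

    recv_done.value = 1;   /* pretend the receive landed */
    progress();
    return 0;
}
```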
Evaluating tradeoffs between MPI message matching offload hardware capacity and performance
Although its demise has been frequently predicted, the Message Passing Interface (MPI) remains the dominant programming model for scientific applications running on high-performance computing (HPC) systems. MPI specifies powerful semantics for ...
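The semantics that offload hardware must preserve are MPI's in-posting-order matching on (source, tag) with wildcards; below is a toy sketch of the posted-receive list a NIC must hold and search, which is where the capacity/performance tradeoff arises.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model of MPI's posted-receive queue. Offload hardware must
   hold this list in on-NIC memory. ANY mirrors MPI_ANY_SOURCE and
   MPI_ANY_TAG. */
#define ANY (-1)

typedef struct { int source, tag; bool active; } posted_recv_t;

static bool matches(const posted_recv_t *r, int src, int tag) {
    return (r->source == ANY || r->source == src) &&
           (r->tag    == ANY || r->tag    == tag);
}

/* MPI requires matching in posting order: scan from the head and
   take the first hit; on a miss the message becomes "unexpected". */
static int match_incoming(posted_recv_t *q, int n, int src, int tag) {
    for (int i = 0; i < n; i++)
        if (q[i].active && matches(&q[i], src, tag)) {
            q[i].active = false;
            return i;
        }
    return -1; /* unexpected message */
}

int main(void) {
    posted_recv_t q[] = {{3, 7, true}, {ANY, 7, true}, {5, ANY, true}};
    printf("msg(src=5, tag=7) matched entry %d\n",
           match_incoming(q, 3, 5, 7)); /* entry 1: ANY source, tag 7 */
    return 0;
}
```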
MPI tag matching performance on ConnectX and ARM
As we approach Exascale, message matching has increasingly become a significant factor in HPC application performance. To address this, network vendors have placed a higher priority on improving MPI message matching performance. ConnectX-5, Mellanox's ...
Runtime level failure detection and propagation in HPC systems
As the scale of high-performance computing (HPC) systems continues to grow, the mean time to failure (MTTF) of these systems tends to decrease. In order to efficiently run long computing jobs on these systems, handling system ...
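At the application-facing MPI level, a runtime that detects and propagates failures ultimately surfaces them through error handlers; a minimal sketch of replacing the default abort-on-error behavior (resilient extensions in the ULFM style build on this hook):

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Replace MPI_ERRORS_ARE_FATAL so failures are reported to the
   application instead of aborting the job outright. */
static void on_error(MPI_Comm *comm, int *errcode, ...) {
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(*errcode, msg, &len);
    fprintf(stderr, "MPI error reported: %s\n", msg);
    /* A resilient runtime would now propagate the failure and let
       the survivors reconstruct state; here we just exit. */
    exit(EXIT_FAILURE);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Errhandler eh;
    MPI_Comm_create_errhandler(on_error, &eh);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, eh);

    /* ... application work; communication errors now reach on_error ... */

    MPI_Errhandler_free(&eh);
    MPI_Finalize();
    return 0;
}
```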
Acceptance Rates
| Year | Submitted | Accepted | Rate |
|---|---|---|---|
| EuroMPI '19 | 26 | 13 | 50% |
| EuroMPI '17 | 37 | 17 | 46% |
| EuroMPI '15 | 29 | 14 | 48% |
| EuroMPI '13 | 47 | 22 | 47% |
| Overall | 139 | 66 | 47% |