29th HPDC 2020: Stockholm, Sweden
- Manish Parashar, Vladimir Vlassov, David E. Irwin, Kathryn M. Mohror:
HPDC '20: The 29th International Symposium on High-Performance Parallel and Distributed Computing, Stockholm, Sweden, June 23-26, 2020. ACM 2020, ISBN 978-1-4503-7052-3
Keynote & Invited Talks
- Costas Bekas:
Cognitive Discovery: Accelerating Technical R&D with AI. 1
- Mary W. Hall:
High Performance is All about Minimizing Data Movement. 3-4
Improving Accuracy and Efficiency
- Peter A. Dinda, Alex Bernat, Conor Hetland:
Spying on the Floating Point Behavior of Existing, Unmodified Scientific Applications. 5-16
- Shaoshuai Zhang, Elaheh Baharlouei, Panruo Wu:
High Accuracy Matrix Computations on Neural Engines: A Study of QR Factorization and its Applications. 17-28
- Ivan Simecek, Claudio Kozický, Daniel Langr, Pavel Tvrdík:
Space-Efficient k-d Tree-Based Storage Format for Sparse Tensors. 29-33
- Maurizio Drocco, Vito Giovanni Castellana, Marco Minutoli:
Practical Distributed Programming in C++. 35-39
Up in the Clouds
- J. C. S. Kadupitige, Vikram Jadhao, Prateek Sharma:
Modeling The Temporally Constrained Preemptions of Transient Cloud VMs. 41-52
- Alexander Fuerst, Ahmed Ali-Eldin, Prashant J. Shenoy, Prateek Sharma:
Cloud-scale VM-deflation for Running Interactive Applications On Transient Servers. 53-64
- Ryan Chard, Yadu N. Babuji, Zhuozhao Li, Tyler J. Skluzacek, Anna Woodard, Ben Blaiszik, Ian T. Foster, Kyle Chard:
funcX: A Federated Function Serving Fabric for Science. 65-76
Big Data Management
- Sunggon Kim, Alex Sim, Kesheng Wu, Suren Byna, Yongseok Son, Hyeonsang Eom:
Towards HPC I/O Performance Prediction through Large-scale Log Analysis. 77-88
- Kai Zhao, Sheng Di, Xin Liang, Sihuan Li, Dingwen Tao, Zizhong Chen, Franck Cappello:
Significantly Improving Lossy Compression for HPC Datasets with Second-Order Prediction and Parameter Optimization. 89-100
- Alessio Netti, Micha Müller, Carla Guillén, Michael Ott, Daniele Tafani, Gence Ozer, Martin Schulz:
DCDB Wintermute: Enabling Online and Holistic Operational Data Analytics on HPC Systems. 101-112
Distributed Learning
- Linnan Wang, Wei Wu, Junyu Zhang, Hang Liu, George Bosilca, Maurice Herlihy, Rodrigo Fonseca:
FFT-based Gradient Sparsification for the Distributed Training of Deep Neural Networks. 113-124
- Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, Yue Cheng:
TiFL: A Tier-based Federated Learning System. 125-136
Enabling Adaptivity
- Xi Wang, John D. Leidel, Brody Williams, Yong Chen:
PAC: Paged Adaptive Coalescer for 3D-Stacked Memory. 137-148
- Yuede Ji, H. Howie Huang:
Aquila: Adaptive Parallel Computation of Graph Connectivity Queries. 149-160
- Abel Souza, Kristiaan Pelckmans, Devarshi Ghoshal, Lavanya Ramakrishnan, Johan Tordsson:
ASA - The Adaptive Scheduling Architecture. 161-165
- Subhendu Behera, Lipeng Wan, Frank Mueller, Matthew Wolf, Scott Klasky:
Orchestrating Fault Prediction with Live Migration and Checkpointing. 167-171
Exploiting GPUs
- Ting-An Yeh, Hung-Hsin Chen, Jerry Chou:
KubeShare: A Framework to Manage GPUs as First-Class and Shared Resources in Container Cloud. 173-184
- Donglin Yang, Dazhao Cheng:
Efficient GPU Memory Management for Nonlinear DNNs. 185-196
- Dongjie Tang, Yun Wang, Linsheng Li, Jiacheng Ma, Xue Liu, Zhengwei Qi, Haibing Guan:
gRemote: API-Forwarding Powered Cloud Rendering. 197-201
- Akihiko Kasagi, Akihiro Tabuchi, Masafumi Yamazaki, Takumi Honda, Masahiro Miwa, Naoto Fukumoto, Tsuguchika Tabaru, Atsushi Ike, Kohta Nakashima:
An Efficient Technique for Large Mini-batch Challenge of DNNs Training on Large Scale Cluster. 203-207
High Performance Networking
- Jian Zhao, Shujun Zhuang, Jian Li, Haibing Guan:
RECANS: Low-Latency Network Function Chains with Hierarchical State Sharing. 209-220
- Garegin Grigoryan, Yaoqing Liu, Minseok Kwon:
Boosting FIB Caching Performance with Aggregation. 221-232