Cited By
- Nuriyev E, Manumachu R, Aseeri S, Verma M, Lastovetsky A (2024). SUARA: A scalable universal allreduce communication algorithm for acceleration of parallel deep learning applications. Journal of Parallel and Distributed Computing, 183(104767). https://doi.org/10.1016/j.jpdc.2023.104767. Online publication date: Jan 2024.
- Reisizadeh A, Prakash S, Pedarsani R, Avestimehr A (2022). CodedReduce: A Fast and Robust Framework for Gradient Aggregation in Distributed Learning. IEEE/ACM Transactions on Networking, 30(1), 148–161. https://doi.org/10.1109/TNET.2021.3109097. Online publication date: Feb 2022.
- Castelló A, Quintana-Ortí E, Duato J (2021). Accelerating distributed deep neural network training with pipelined MPI allreduce. Cluster Computing. https://doi.org/10.1007/s10586-021-03370-9. Online publication date: 7 Aug 2021.