UMPIPE: Unequal Microbatches-Based Pipeline Parallelism for Deep Neural Network Training
Recommendations
On-the-Fly Pipeline Parallelism
SPAA '13: Proceedings of the Twenty-Fifth Annual ACM Symposium on Parallelism in Algorithms and Architectures. Pipeline parallelism organizes a parallel program as a linear sequence of stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element before the subsequent stages have ... A minimal code sketch of this stage-pipeline pattern appears after this list.
Toward robustness against label noise in training deep discriminative neural networks
NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems. Collecting large training datasets, annotated with high-quality labels, is costly and time-consuming. This paper proposes a novel framework for training deep convolutional neural networks from noisy labeled datasets that can be obtained cheaply. The ...
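The first recommendation above describes the classic stage-pipeline pattern: a linear sequence of stages, each consuming elements of a data stream from the previous stage and handing results to the next, so a stage can take on a new element before later stages finish the old one. Below is a minimal Python sketch of that pattern using threads and queues; the stage functions, queue layout, and SENTINEL marker are illustrative assumptions, not code from UMPIPE or either recommended paper.

```python
import threading
import queue

SENTINEL = object()  # marks the end of the data stream

def stage(fn, inbox, outbox):
    """Repeatedly take an element, process it, and pass it downstream.

    A stage starts on its next element as soon as it hands the previous
    one off, so different elements occupy different stages concurrently.
    """
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)  # propagate shutdown to the next stage
            return
        outbox.put(fn(item))

# Three illustrative stages of a linear pipeline (functions are made up).
stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

queues = [queue.Queue() for _ in range(len(stages) + 1)]
threads = [
    threading.Thread(target=stage, args=(fn, queues[i], queues[i + 1]))
    for i, fn in enumerate(stages)
]
for t in threads:
    t.start()

for x in range(5):        # feed the data stream into the first stage
    queues[0].put(x)
queues[0].put(SENTINEL)

while (out := queues[-1].get()) is not SENTINEL:
    print(out)            # prints 2*(x+1) - 3 for x = 0..4

for t in threads:
    t.join()
```

Each element flows through the stages in order, while different elements occupy different stages at the same time; that overlap is the parallelism a pipeline exploits.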
Information
Publisher
IEEE Press
Qualifiers
- Research-article