DOI: 10.1145/3386263.3406924

COCOA: Content-Oriented Configurable Architecture Based on Highly-Adaptive Data Transmission Networks

Published: 07 September 2020

Abstract

In the domain of parallel computation, most work focuses on optimizing PE organization or the memory hierarchy to pursue maximum efficiency, while the importance of data content has long been overlooked. For structured data, insight into the data content (i.e., values and their locations within a structured form) can greatly benefit computation performance, since it enables fine-grained data manipulation. In this paper, we claim that by providing a flexible and adaptive data path, an efficient architecture with fine-grained data-manipulation capability can be built. Specifically, we propose COCOA, a novel content-oriented configurable architecture that integrates multi-functional data-reorganization networks into the traditional computing scheme to handle data content along the transmission path, so that data can be processed more efficiently. We evaluate COCOA on two problems: a complex matrix algorithm (matrix inversion) and sparse DNN inference. The results indicate that COCOA is versatile enough to achieve high computation efficiency in both cases.
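The abstract's core idea, that inspecting data content (values and positions) in the transmission path enables fine-grained manipulation, can be illustrated with a minimal sketch. The paper itself gives no code; the functions below are hypothetical stand-ins that model a reorganization stage compacting a sparse vector into (index, value) pairs in flight, so the compute stage touches only useful work, as in the sparse-DNN use case.

```python
# Illustrative sketch (not from the paper): a content-aware "transmission"
# stage that filters data in flight before it reaches the PEs.

def reorganize(vector):
    """Model of a data-reorganization network: emit (index, value)
    pairs for nonzero entries only, in transmission order."""
    return [(i, v) for i, v in enumerate(vector) if v != 0]

def sparse_dot(x, w):
    """Sparse dot product: the compute stage multiplies only the
    nonzero entries delivered by the reorganization stage."""
    return sum(v * w[i] for i, v in reorganize(x))

x = [0, 3, 0, 0, 2, 0, 1, 0]   # mostly-zero activation vector
w = [5, 1, 4, 1, 2, 6, 3, 7]   # dense weight vector
print(sparse_dot(x, w))        # 3*1 + 2*2 + 1*3 = 10
```

In the hardware described by the abstract, this filtering happens in a configurable network on the data path rather than in software, so the PEs never see the ineffectual zeros.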

Supplementary Material

MP4 File (3386263.3406924.mp4)
Presentation video


Cited By

  • (2022) MI2D: Accelerating Matrix Inversion with 2-Dimensional Tile Manipulations. Proceedings of the Great Lakes Symposium on VLSI 2022, 423-429. DOI: 10.1145/3526241.3530314. Online publication date: 6-Jun-2022
  • (2021) PIT: Processing-In-Transmission With Fine-Grained Data Manipulation Networks. IEEE Transactions on Computers 70(6), 877-891. DOI: 10.1109/TC.2020.3048233. Online publication date: 1-Jun-2021
  • (2020) Exploring Better Speculation and Data Locality in Sparse Matrix-Vector Multiplication on Intel Xeon. 2020 IEEE 38th International Conference on Computer Design (ICCD), 601-609. DOI: 10.1109/ICCD50377.2020.00105. Online publication date: Oct-2020


Information

Published In

cover image ACM Other conferences
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI
September 2020
597 pages
ISBN:9781450379441
DOI:10.1145/3386263

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. computing architecture
  2. data reorganization
  3. high-performance computing
  4. transmission network

Qualifiers

  • Research-article

Conference

GLSVLSI '20: Great Lakes Symposium on VLSI 2020
September 7 - 9, 2020
Virtual Event, China

Acceptance Rates

Overall Acceptance Rate 312 of 1,156 submissions, 27%

Article Metrics

  • Downloads (Last 12 months)9
  • Downloads (Last 6 weeks)2
Reflects downloads up to 25 Jan 2025

