
A Deep Neural Network Accelerator using Residue Arithmetic in a Hybrid Optoelectronic System

Published: 13 October 2022

Abstract

The acceleration of Deep Neural Networks (DNNs) has attracted much research attention. Many critical real-time applications would benefit from DNN accelerators but are limited by their compute-intensive nature. This work introduces an accelerator for Convolutional Neural Networks (CNNs) based on a hybrid optoelectronic computing architecture and the residue number system (RNS). The RNS shortens the optical critical path and lowers the power requirements. In addition, wavelength division multiplexing (WDM) enables high-level parallelism and thus high-speed operation at the system level. The proposed RNS compute modules use one-hot encoding, which enables fast switching between the electrical and optical domains. We propose a new architecture that combines residue electrical adders with optical multipliers in the matrix-vector multiplication unit. Moreover, we enhance the implementation of different CNN computational kernels using WDM-enabled, RNS-based integrated photonics. The area and power efficiency of the proposed accelerator are 0.39 TOPS/mm² and 3.22 TOPS/W, respectively. In terms of computation capability, the proposed chip is 12.7× and 4.02× better than an optical implementation and a memristor implementation, respectively. Our experimental evaluation on DNN benchmarks shows that our architecture performs, on average, more than 72× faster than a GPU under the same power budget.
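The key property the abstract relies on is that RNS arithmetic decomposes one wide operation into several narrow, fully independent residue channels, which is what keeps each optical critical path short. The following is a minimal Python sketch of that idea; the moduli and the dot-product example are illustrative assumptions, not the paper's actual parameters.

```python
from math import prod

# Illustrative pairwise-coprime moduli (not necessarily the paper's choice).
MODULI = (7, 11, 13, 15)
M = prod(MODULI)  # dynamic range: values in [0, M) are represented uniquely

def to_rns(x):
    """Encode an integer as one residue per modulus (one channel each)."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Multiply channel-wise: each residue channel is independent, so all
    channels can be computed in parallel with no carries between them."""
    return tuple((ra * rb) % m for ra, rb, m in zip(a, b, MODULI))

def rns_add(a, b):
    """Add channel-wise, again with no inter-channel carry propagation."""
    return tuple((ra + rb) % m for ra, rb, m in zip(a, b, MODULI))

def from_rns(r):
    """Decode back to an integer via the Chinese Remainder Theorem."""
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # modular inverse (Python 3.8+)
    return x % M

# A tiny dot product computed entirely in residue space:
w, v = [3, 5, 2], [4, 6, 9]
acc = to_rns(0)
for wi, vi in zip(w, v):
    acc = rns_add(acc, rns_mul(to_rns(wi), to_rns(vi)))
assert from_rns(acc) == sum(wi * vi for wi, vi in zip(w, v))
```

The result is exact as long as it stays within the dynamic range M; in hardware, each channel here would map to one narrow optical or electrical compute path, selected per wavelength under WDM.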



Published In

ACM Journal on Emerging Technologies in Computing Systems  Volume 18, Issue 4
October 2022
429 pages
ISSN:1550-4832
EISSN:1550-4840
DOI:10.1145/3563906
  • Editor:
  • Ramesh Karri

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 October 2022
Online AM: 21 July 2022
Accepted: 05 July 2022
Revised: 01 July 2022
Received: 28 December 2021
Published in JETC Volume 18, Issue 4


Author Tags

  1. Neural network accelerator
  2. optical computing
  3. residue number system

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • Air Force Office of Scientific Research (AFOSR)

Cited By

  • (2024) Improvement of the Cybersecurity of the Satellite Internet of Vehicles through the Application of an Authentication Protocol Based on a Modular Error-Correction Code. World Electric Vehicle Journal 15:7 (278). DOI: 10.3390/wevj15070278. Online publication date: 21-Jun-2024.
  • (2024) A review of emerging trends in photonic deep learning accelerators. Frontiers in Physics 12. DOI: 10.3389/fphy.2024.1369099. Online publication date: 15-Jul-2024.
  • (2024) Photonic Cryptographic Circuits for All-Photonics Network. IEICE ESS Fundamentals Review 18:2 (158-166). DOI: 10.1587/essfr.18.2_158. Online publication date: 1-Oct-2024.
  • (2024) A blueprint for precise and fault-tolerant analog neural networks. Nature Communications 15:1. DOI: 10.1038/s41467-024-49324-8. Online publication date: 14-Jun-2024.
  • (2024) Photonics Multiply-Accumulation Computations System Based on Residue Arithmetic. ACS Photonics 11:4 (1540-1547). DOI: 10.1021/acsphotonics.3c01704. Online publication date: 6-Apr-2024.
  • (2023) Reconfigurable focusing and defocusing meta lens using Sb2Se3 phase change material. High Contrast Metastructures XII (12). DOI: 10.1117/12.2648377. Online publication date: 15-Mar-2023.
  • (2023) Photonic tensor core machine learning accelerator. AI and Optical Data Sciences IV (30). DOI: 10.1117/12.2647179. Online publication date: 15-Mar-2023.
