research-article

Architecting for Artificial Intelligence with Emerging Nanotechnology

Published: 12 August 2021

Abstract

Artificial intelligence (AI) is becoming ubiquitous in the products and services we use daily. Although AI has improved substantially in recent years, its effectiveness remains limited by the capabilities of current computing technology. Recently, there have been several architectural innovations for AI that use emerging nanotechnology. These architectures implement the mathematical computations of AI with circuits that exploit the physical behavior of nanodevices purpose-built for such computations. This approach is far more efficient than software algorithms running on von Neumann processors, or than CMOS architectures that emulate the same operations with transistor circuits. In this article, we provide a comprehensive survey of these architectural directions and categorize them by their contributions. We then illustrate the potential of each direction with real-world examples and discuss the major challenges and opportunities in this field.
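A canonical example of implementing an AI computation directly in nanodevice physics is the memristor crossbar: input voltages applied to the rows and programmed device conductances yield, by Ohm's and Kirchhoff's laws, column currents equal to a matrix-vector product in a single analog step. The sketch below is a minimal numerical illustration of that idea (the function name, values, and 5% device-variation figure are illustrative assumptions, not taken from any specific architecture surveyed):

```python
import numpy as np

# A memristor crossbar computes a matrix-vector product in one analog step:
# voltages V drive the rows, each cross-point stores a conductance G[i, j],
# and the current summed on column j is I[j] = sum_i V[i] * G[i, j].

rng = np.random.default_rng(0)

def crossbar_mvm(voltages, conductances, variation=0.0):
    """Model a crossbar dot product (hypothetical helper for illustration).

    voltages     -- 1-D array of row input voltages (V)
    conductances -- 2-D array of device conductances (S), shape (rows, cols)
    variation    -- relative std-dev of device-to-device conductance spread
    """
    if variation > 0.0:
        # Real nanodevices vary from die to die; perturb G to mimic that.
        spread = 1.0 + variation * rng.standard_normal(conductances.shape)
        conductances = conductances * spread
    return voltages @ conductances  # column currents (A)

# Encode a small weight matrix as conductances (illustrative values).
weights = np.array([[0.5, 1.0],
                    [2.0, 0.25],
                    [1.5, 0.75]])
inputs = np.array([0.1, 0.2, 0.3])

ideal = crossbar_mvm(inputs, weights)                 # exact product V @ G
noisy = crossbar_mvm(inputs, weights, variation=0.05) # with device variation
print(ideal)
print(noisy)
```

The same physics that makes this one-step multiplication efficient also makes it approximate, which is why many of the surveyed architectures pair crossbars with variation-tolerant training or digital assistance.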

References

[1]
Kyle Hollins Wray, Stefan J. Witwicki, and Shlomo Zilberstein. 2017. Online decision-making for scalable autonomous systems. In Proceedings of the IJCAI International Joint Conference on Artificial Intelligence.
[2]
Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. 2015. DeepDriving: Learning affordance for direct perception in autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision.
[3]
Namhoon Lee, Wongun Choi, Paul Vernaza, Christopher B. Choy, Philip H. S. Torr, and Manmohan Chandraker. 2017. DESIRE: Distant future prediction in dynamic scenes with interacting agents. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17).
[4]
Karen Simonyan, Sander Dieleman, Andrew Senior, and Alex Graves. 2016. WaveNet. Retrieved from https://arXiv1609.03499v2.
[5]
Tomas Mikolov, Wen Tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous spaceword representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT’13).
[6]
Sebastian Ruder, Anders Søgaard, and Ivan Vulić. 2019. Unsupervised cross-lingual representation learning.
[7]
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
[8]
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. MIT Press.
[9]
J. A. Sparano, R. J. Gray, D. F. Makower, K. I. Pritchard, K. S. Albain, D. F. Hayes, C. E. Geyer, E. C. Dees, M. P. Goetz, J. A. Olson, T. Lively, S. S. Badve, T. J. Saphner, L. I. Wagner, T. J. Whelan, M. J. Ellis, S. Paik, W. C. Wood, P. M. Ravdin, M. M. Keane, H. L. Gomez Moreno, P. S. Reddy, T. F. Goggins, I. A. Mayer, A. M. Brufsky, D. L. Toppmeyer, V. G. Kaklamani, J. L. Berenberg, J. Abrams, and G. W. Sledge. 2018. Adjuvant chemotherapy guided by a 21-gene expression assay in breast cancer. N. Engl. J. Med. (2018).
[10]
Mohamed Nooman Ahmed, Andeep S. Toor, Kelsey O'Neil, and Dawson Friedland. 2017. Cognitive computing and the future of healthcare: The cognitive power of IBM Watson has the potential to transform global personalized medicine. IEEE Pulse. 8, 3 (2017), 4--9.
[11]
S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer. 2014. cudnn: Efficient primitives for deep learning. Retrieved from https://arXiv:1410.0759.
[12]
NVIDIA Tesla V100. 2017. Retrieved from https://www.nvidia.com/en-gb/data-center/tesla-v100/.
[13]
NVIDIA Tesla V100. 2017. Retrieved from https://www.nvidia.com/en-gb/data-center/tesla-v100/.
[14]
PyTorch: An open source deep learning platform that provides a seamless path from research prototyping to production deployment. PyTorch team. 2016. Retrieved from https://pytorch.org/.
[15]
Accelerating DNNs with Xilinx Alveo Accelerator Cards. Retrieved from https://www.xilinx.com/.
[16]
Eriko Nurvitadhi, Ganesh Venkatesh, Jaewoong Sim, Debbie Marr, Randy Huang, Jason Gee Hock Ong, Yeong Tat Liew, Krishnan Srivatsan, Duncan Moss, Suchit Subhaschandra, and Guy Boudoukh. 2017. Can FPGAs beat GPUs in accelerating next-generation deep neural networks? In Proceedings of the ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA’17).
[17]
K. Guo, S. Zeng, J. Yu, Y. Wang, and H. Yang. 2017. A survey of FPGA-based neural network accelerator. Retrieved from https://arXiv:1712.08934.
[18]
Mike Davies, Narayan Srinivasa, Tsung Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, Yuyun Liao, Chit Kwan Lin, Andrew Lines, Ruokun Liu, Deepak Mathaikutty, Steven McCoy, Arnab Paul, Jonathan Tse, Guruguhanathan Venkataramanan, Yi HsinWeng, Andreas Wild, Yoonseok Yang, and Hong Wang. 2018. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro. 38, 1 (Jan. 2018), 82--99.
[19]
Andrew S. Cassidy, Jun Sawada, Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Filipp Akopyan, Bryan L. Jackson, and Dharmendra S. Modha. 2016. TrueNorth: A high-performance, low-power neurosynaptic processor for multi-sensory perception, action, and cognition. In Proceedings of the Government Microcircuits Applications & Critical Technology Conference, Orlando, Fl, USA. 14--17.
[20]
Goya Inference Platform and Performance Benchmarks. 2019. Retrieved from https://www.habana.ai/.
[21]
Peiran Gao, Emmett McQuinn, Swadesh Choudhary, Anand R. Chandrasekaran, Jean Marie Bussat, Rodrigo Alvarez-Icaza, John V. Arthur, Paul A. Merolla, and Kwabena Boahen. 2014. Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations. Proc. IEEE. 102, 5 (2014), 699--716.
[22]
Johannes Schemmel, Johannes Fieres, and Karlheinz Meier. 2008. Wafer-scale integration of analog neural networks. In Proceedings of the International Joint Conference on Neural Networks.
[23]
Tesla Full Self Driving Chip design and architecture. 2016. Retrieved from https://en.wikichip.org/wiki/tesla_(car_company)/fsd_chip.
[24]
J. Joshua Yang, Dmitri B. Strukov, and Duncan R. Stewart. 2013. Memristive devices for computing. Nature Nanotechnol. 8, 1 (2013), 13--24.
[25]
Stuart A. Wolf, Almadena Y. Chtchelkanova, and Daryl M. Treger. 2006. Spintronics - A retrospective and perspective. IBM J. Res. Dev.
[26]
Abhronil Sengupta and Kaushik Roy. 2017. Encoding neural and synaptic functionalities in electron spin: A pathway to efficient neuromorphic computing. Appl. Phys. Rev. 4, 4 (2017), 041105.
[27]
Abu Sebastian, Manuel Le Gallo, Riduan Khaddam-Aljameh, and Evangelos Eleftheriou. 2020. Memory devices and applications for in-memory computing. Nature Nanotechnol. 15, 7 (2020), 529--544.
[28]
Shimeng Yu. 2018. Neuro-inspired computing with emerging nonvolatile memorys. Proc. IEEE (2018).
[29]
Navnidhi K. Upadhyay, Hao Jiang, Zhongrui Wang, Shiva Asapu, Qiangfei Xia, and J. Joshua Yang. 2019. Emerging memory devices for neuromorphic computing. Adv. Mater. Technologies. 4, 4 (2019), 1800589.
[30]
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2017. ImageNet classification with deep convolutional neural networks. Commun. ACM. 60, 6 (2017), 84--90.
[31]
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. 1988. Learning representations by back-propagating errors. Cogn. Model. 5, 3 (1988), 1.
[32]
D. O. Hebb. 1949. The Organization of Behavior. A Wiley Book in Clinical Psychology. 62 (1949), 78.
[33]
David E. Rumelhart and David Zipser. 1985. Feature discovery by competitive learning. Cogn. Sci. 9, 1 (1985), 75--112.
[34]
Harel Z. Shouval, Samuel S. H. Wang, and Gayle M. Wittenberg. 2010. Spike timing dependent plasticity: A consequence of more fundamental learning rules. Front. Comput. Neurosci. 4 (2010), 19.
[35]
X. Yang. 2017. Understanding the variational lower bound. Tech. Rep. 1-4.
[36]
David Maxwell Chickering. 1996. Learning Bayesian Networks is NP-Complete.
[37]
S. Kullback and R. A. Leibler. 1951. On information and sufficiency. Ann. Math. Statist. 22 (1951), 1, 79–86. Retrieved from https://projecteuclid.org/euclid.aoms/1177729694.
[38]
Kelin J. Kuhn. 2012. Considerations for ultimate CMOS scaling. IEEE Trans. Electron Devices 59, 7 (2012), 1813--1828.
[39]
Judea Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.
[40]
Stuart Geman and Donald Geman. 1984. Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6, 6 (1984).
[41]
W. K. Hastings. 1970. Monte carlo sampling methods using markov chains and their applications. Biometrika (1970).
[42]
J. Joshua Yang, Matthew D. Pickett, Xuema Li, Douglas A. A. Ohlberg, Duncan R. Stewart, and R. Stanley Williams. 2008. Memristive switching mechanism for metal/oxide/metal nanodevices. Nat. Nanotechnol. 3, 7 (2008), 429--433.
[43]
Zhongrui Wang, Saumil Joshi, Sergey E. Savel'ev, Hao Jiang, Rivu Midya, Peng Lin, Miao Hu, Ning Ge, John Paul Strachan, Zhiyong Li, Qing Wu, Mark Barnell, Geng Lin Li, Huolin L. Xin, R. Stanley Williams, Qiangfei Xia, and J. Joshua Yang. 2017. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nat. Mater. 16, 1 (2017).
[44]
Pi Shuang, Can Li, Hao Jiang, Weiwei Xia, Huolin Xin, J. Joshua Yang, and Qiangfei Xia. “Memristor crossbar arrays with 6-nm half-pitch and 2-nm critical dimension”. Nature Nanotechnol. 14, 1 (2019), 35–39.
[45]
F. Xiong, E. Yalon, A. Behnam, C. M. Neumann, K. L. Grosse, S. Deshmukh, and E. Pop. 2016. Towards ultimate scaling limits of phase-change memory. In Proceedings of the IEEE International Electron Devices Meeting (IEDM’16), 4–1. IEEE, 2016.
[46]
Tsai Meng-Ju, Pin-Jui Chen, Dun-Bao Ruan, Fu-Ju Hou, Po-Yang Peng, Liu-Gu Chen, and Yung-Chun Wu. 2019. Investigation of 5-nm-Thick Hf 0.5 Zr 0.5 O 2 ferroelectric FinFET dimensions for sub-60-mV/decade subthreshold slope. IEEE J. Electron Devices Soc. 7 (2019), 1033–1037.
[47]
H. S. Philip Wong, Simone Raoux, Sangbum Kim, Jiale Liang, John P. Reifenberg, Bipin Rajendran, Mehdi Asheghi, and Kenneth E. Goodson. 2010. Phase change memory. In Proceedings of the IEEE.
[48]
Zhengyang Zhao, Mahdi Jamali, Noel D'Souza, Delin Zhang, Supriyo Bandyopadhyay, Jayasimha Atulasimha, and Jian Ping Wang. 2016. Giant voltage manipulation of MgO-based magnetic tunnel junctions via localized anisotropic strain: A potential pathway to ultra-energy-efficient memory technology. Appl. Phys. Lett. 109, 9 (2016).
[49]
J. Park. 2020. Hybrid non-volatile flip-flops using spin-orbit-torque (SOT) magnetic tunnel junction devices for high integration and low energy power-gating applications. Electronics 9, 9 (2020), 1406.
[50]
R. Buhrman. 2003. Nano-processing and properties of spin transfer device structures. INTERMAG. IEEE. CC-04
[51]
Yue Zhang, Weisheng Zhao, Yahya Lakys, Jacques Olivier Klein, Joo Von Kim, Dafiné Ravelosona, and Claude Chappert. 2012. Compact modeling of perpendicular-anisotropy CoFeB/MgO magnetic tunnel junctions. IEEE Trans. Electron Devices 59, 3 (2012), 819--826.
[52]
Andrei Slavin and Vasil Tiberkevich. 2008. Excitation of spin waves by spin-polarized current in magnetic nano-structures. In IEEE Transactions on Magnetics. 44, 7 (2008), 1916--1927.
[53]
Dmitriy V. Dmitriev, Igor V. Marchishin, Andrey V. Goran, and Alexey A. Bykov. 2011. Microwave-induced giant oscillations of the magnetoconductivity and zero-conductance state in 2D electronic Corbino disks with capacitance contacts. In Proceedings of the 12th International Conference and Seminar on Micro/Nanotechnologies and Electron Devices (EDM’11).
[54]
Yuansu Luo and Konrad Samwer. 2007. Oscillation of low-bias tunnel conductance with applied magnetic field in manganite/alumina tunnel structures. In IEEE Transactions on Magnetics. 43, 6 (2007), 2803--805.
[55]
C. Mitsumata and A. Sakuma. 2011. Generalized model of antiferromagnetic domain wall. In IEEE Transactions on Magnetics. 47, 10 (2011), 3501--3504.
[56]
Wang Kang, Chentian Zheng, Yangqi Huang, Xichao Zhang, Yan Zhou, Weifeng Lv, and Weisheng Zhao. 2016. Complementary Skyrmion Racetrack Memory with Voltage Manipulation. IEEE Electron Device Lett. 37, 7 (2016), 924--927.
[57]
A. Fiore, C. Paranthoen, J. X. Chen, M. Ilegems, L. Mariucci, and M. Rossetti. 2003. Nanoscale quantum-dot light-emitting diodes. In Quantum Electronics and Laser Science Conference. Optical Society of America.
[58]
Alessandro Spinelli and Andrea L. Lacaita. 1997. Physics and numerical simulation of single photon avalanche diodes. IEEE Trans. Electron Devices 44, 11 (1997), 1931--1943.
[59]
O. Hayden, R. Agarwal, and C. M. Lieber. 2006. Nanoscale avalanche photodiodes for highly sensitive and spatially resolved photon detection. Nature Mater. 5, 5 (2006), 352--356. https://doi.org/10.1038/nmat1635
[60]
Amirhasan Nourbakhsh, Ahmad Zubair, Redwan N. Sajjad, K. G. Tavakkoli, Amir Wei Chen, Shiang Fang, Xi Ling, Jing Kong, Mildred S. Dresselhaus, Efthimios Kaxiras, Karl K. Berggren, Dimitri Antoniadis, and Tomás Palacios. 2016. MoS2 Field-Effect Transistor with Sub-10 nm channel length. Nano Lett. 16, 12 (2016), 7798--7806.
[61]
Aaron D. Franklin, Mathieu Luisier, Shu Jen Han, George Tulevski, Chris M. Breslin, Lynne Gignac, Mark S. Lundstrom, and Wilfried Haensch. 2012. Sub-10 nm carbon nanotube transistor. Nano Lett. 12, 2 (2012), 758--762.
[62]
Yunlong Guo, Gui Yu, and Yunqi Liu. 2010. Functional organic field-effect transistors. Adv. Mater. 22, 40 (2010), 4427--4447.
[63]
Jonathan Rivnay, Sahika Inal, Alberto Salleo, Róisín M. Owens, Magnus Berggren, and George G. Malliaras. 2018. Organic electrochemical transistors. Nature Rev. Mater. 3, 2 (2018), 1--14.
[64]
Mengwei Si, Pai Ying Liao, Gang Qiu, Yuqin Duan, and Peide D. Ye. 2018. Ferroelectric field-effect transistors based on MoS2 and CuInP2S6 two-dimensional Van der Waals heterostructure. ACS Nano 12, 7 (2018), 6700--6705.
[65]
M. De Marchi, D. Sacchetto, S. Frache, J. Zhang, P. E. Gaillardon, Y. Leblebici, and G. De Micheli. 2012. Polarity control in double-gate, gate-all-around vertically stacked silicon nanowire FETs. In Proceedings of the International Electron Devices Meeting (IEDM’12).
[66]
Ren Li, Rawan Naous, Hossein Fariborzi, and Khaled Nabil Salama. 2019. Approximate computing with stochastic transistors’ voltage over-scaling. IEEE Access (2019).
[67]
Hyungjun Kim, Taesu Kim, Jinseok Kim, and Jae Joon Kim. 2018. Neural network optimized to resistive memory with nonlinear current-voltage characteristics. IACM Journal on Emerging Technologies in Computing Systems. 14, 2 (2018), 1--17.
[68]
Irina Kataeva, Farnood Merrikh-Bayat, Elham Zamanidoost, and Dmitri Strukov. 2015. Efficient training algorithms for neural networks based on memristive crossbar circuits. In Proceedings of the International Joint Conference on Neural Networks.
[69]
Elham Zamanidoost, Michael Klachko, Dmitri Strukov, and Irina Kataeva. 2015. Low area overhead in situ training approach for memristor-based classifier. In Proceedings of the IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH’15).
[70]
Beiye Liu, Miao Hu, Hai Li, Zhi Hong Mao, Yiran Chen, Tingwen Huang, and Wei Zhang. 2013. Digital-assisted noise-eliminating training for memristor crossbar-based analog neuromorphic computing engine. In Proceedings of the Design Automation Conference.
[71]
Damien Querlioz, Olivier Bichler, Philippe Dollfus, and Christian Gamrat. 2013. Immunity to device variations in a spiking neural network with memristive nanodevices. IEEE Trans. Nanotechnol. 12, 3 (2013).
[72]
Miao Hu, John Paul Strachan, Zhiyong Li, R. Stanley, and Williams. 2016. Dot-product engine as computing memory to accelerate machine learning algorithms. In Proceedings of the International Symposium on Quality Electronic Design (ISQED’16).
[73]
Walt Woods and Christof Teuscher. 2017. Approximate vector matrix multiplication implementations for neuromorphic applications using memristive crossbars. In Proceedings of the IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH’17).
[74]
Miguel Angel Lastras-Montano, Bhaswar Chakrabarti, Dmitri B. Strukov, and Kwang Ting Cheng. 2017. 3D-DPE: A 3D high-bandwidth dot-product engine for high-performance neuromorphic computing. In Proceedings of the Design, Automation, and Test in Europe (DATE’17).
[75]
Miao Hu, Catherine E. Graves, Can Li, Yunning Li, Ning Ge, Eric Montgomery, Noraica Davila, Hao Jiang, R. Stanley Williams, J. Joshua Yang, Qiangfei Xia, and John Paul Strachan. 2018. Memristor-based analog computation and neural network classification with a dot product engine. Adv. Mater. 30, 9 (2018), 1705914.
[76]
Miao Hu, John Paul Strachan, Zhiyong Li, Emmanuelle M. Grafals, Noraica Davila, Catherine Graves, Sity Lam, Ning Ge, Jianhua Joshua Yang, and R. Stanley Williams. 2016. Dot-product engine for neuromorphic computing: Programming 1T1M crossbar to accelerate matrix-vector multiplication. In Proceedings of the Design Automation Conference.
[77]
Hussein Assaf, Yvon Savaria, and Mohamad Sawan. 2019. Memristor emulators for an adaptive DPE Algorithm: Comparative study. In Proceedings of the IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS’19).
[78]
Can Li, Miao Hu, Yunning Li, Hao Jiang, Ning Ge, Eric Montgomery, Jiaming Zhang, Wenhao Song, Noraica Dávila, Catherine E. Graves, Zhiyong Li, John Paul Strachan, Peng Lin, Zhongrui Wang, Mark Barnell, Qing Wu, R. Stanley Williams, J. Joshua Yang, and Qiangfei Xia. 2018. Analogue signal and image processing with large memristor crossbars. Nat. Electron. 1, 1 (2018), 52--59.
[79]
M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, and D. B. Strukov. 2015. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 7550 (2015), 61--64.
[80]
Geoffrey W. Burr, Robert M. Shelby, Severin Sidler, Carmelo Di Nolfo, Junwoo Jang, Irem Boybat, Rohit S. Shenoy, Pritish Narayanan, Kumar Virwani, Emanuele U. Giacometti, Bulent N. Kurdi, and Hyunsang Hwang. 2015. Experimental demonstration and tolerancing of a large-scale neural network (165 000 Synapses) using phase-change memory as the synaptic weight element. IEEE Trans. Electron Devices s 62, 11 (2015), 3498--3507.
[81]
F. Merrikh Bayat, M. Prezioso, B. Chakrabarti, H. Nili, I. Kataeva, and D. Strukov. 2018. Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits. Nat. Commun. 9, 1 (2018), 1--7.
[82]
Can Li, Zhongrui Wang, Mingyi Rao, Daniel Belkin, Wenhao Song, Hao Jiang, Peng Yan, Yunning Li, Peng Lin, Miao Hu, Ning Ge, John Paul Strachan, Mark Barnell, Qing Wu, R. Stanley Williams, J. Joshua Yang, and Qiangfei Xia. 2019. Long short-term memory networks in memristor crossbar arrays. Nat. Mach. Intell. 1, 1 (2019), 49--57.
[83]
Xinjie Guo, Farnood Merrikh-Bayat, Ligang Gao, Brian D. Hoskins, Fabien Alibart, Bernabe Linares-Barranco, Luke Theogarajan, Christof Teuscher, and Dmitri B. Strukov. 2015. Modeling and experimental demonstration of a hopfield network analog-to-digital converter with hybrid CMOS/memristor circuits. Front. Neurosci. 9 (2015), 488.
[84]
Can Li, Daniel Belkin, Yunning Li, Peng Yan, Miao Hu, Ning Ge, Hao Jiang, Eric Montgomery, Peng Lin, Zhongrui Wang, Wenhao Song, John Paul Strachan, Mark Barnell, Qing Wu, R. Stanley Williams, J. Joshua Yang, and Qiangfei Xia. 2018. Efficient and self-adaptive in situ learning in multilayer memristor neural networks. Nat. Commun. 9, 1 (2018), 1--8.
[85]
Chris Yakopcic, Raqibul Hasan, and Tarek M. Taha. 2015. Memristor based neuromorphic circuit for ex situ training of multi-layer neural network algorithms. In Proceedings of the International Joint Conference on Neural Networks.
[86]
Manan Suri, Vivek Parmar, Ashwani Kumar, Damien Querlioz, and Fabien Alibart. 2016. Neuromorphic hybrid RRAM-CMOS RBM architecture. In Proceedings of the 15th Non-Volatile Memory Technology Symposium (NVMTS’15).
[87]
Amitesh Kumar, Mangal Das, Vivek Garg, Brajendra S. Sengar, Myo Than Htay, Shailendra Kumar, Abhinav Kranti, and Shaibal Mukherjee. 2017. Forming-free high-endurance Al/ZnO/Al memristor fabricated by dual ion beam sputtering. Appl. Phys. Lett. 110, 25 (2017), 253509.
[88]
D. Garbin, E. Vianello, O. Bichler, M. Azzaz, Q. Rafhay, P. Candelier, C. Gamrat, G. Ghibaudo, B. Desalvo, and L. Perniola. 2015. On the impact of OxRAM-based synapses variability on convolutional neural networks performance. In Proceedings of the IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH’15).
[89]
Jeyavijayan Rajendran, Harika Maenm, Ramesh Karri, and Garrett S. Rose. 2011. An approach to tolerate process related variations in memristor-based applications. In Proceedings of the IEEE International Conference on VLSI Design.
[90]
Beiye Liu, Hai Li, Yiran Chen, Xin Li, Qing Wu, and Tingwen Huang. 2015. Vortex: Variation-aware training for memristor X-bar. In Proceedings of the Design Automation Conference.
[91]
Jeyavijayan Rajendran, Ramesh Karri, and Garrett S. Rose. 2015. Improving tolerance to variations in memristor-based applications using parallel memristors. IEEE Trans. Comput. 64, 3 (2015), 733--746.
[92]
Damien Querlioz, Olivier Bichler, and Christian Gamrat. 2011. Simulation of a memristor-based spiking neural network immune to device variations. In Proceedings of the International Joint Conference on Neural Networks.
[93]
Sheng-Yang Sun, Zhiwei Li, Jiwei Li, Husheng Liu, Haijun Liu, and Qingjiang Li. 2019. A memristor-based convolutional neural network with full parallelization architecture. IEICE Electronics Express (2019) 16-20181034.
[94]
J. Joshua Yang, M. X. Zhang, Matthew D. Pickett, Feng Miao, John Paul Strachan, Wen Di Li, Wei Yi, Douglas A.A. Ohlberg, Byung Joon Choi, Wei Wu, Janice H. Nickel, Gilberto Medeiros-Ribeiro, and R. Stanley Williams. 2012. Engineering nonlinearity into memristors for passive crossbar applications. Appl. Phys. Lett. 100, 11 (2012), 113501.
[95]
Charles Augustine, Arijit Raychowdhury, Dinesh Somasekhar, James Tschanz, Vivek De, and Kaushik Roy. 2011. Design space exploration of typical STT MTJ stacks in memory arrays in the presence of variability and disturbances. IEEE Trans. Electron Devices 58, 12 (2011), 4333--4343.
[96]
Aminul Islam, Mohd Ajmal Kafeel, Tanzeem Iqbal, and Mohd Hasan. 2012. Variability analysis of MTJ-based circuit. In Proceedings of the 3rd International Conference on Computer and Communication Technology (ICCCT’12).
[97]
Jayita Das, Syed M. Alam, and Sanjukta Bhanja. 2012. Non-destructive variability tolerant differential read for non-volatile logic. In Proceedings of the Midwest Symposium on Circuits and Systems.
[98]
Raffaele De Rose, Marco Lanuzza, Felice Crupi, Giulio Siracusano, Riccardo Tomasello, Giovanni Finocchio, and Mario Carpentieri. 2017. Variability-aware analysis of hybrid MTJ/CMOS circuits by a micromagnetic-based simulation framework. IEEE Trans. Nanotechnol. 16, 2 (2017), 160--168.
[99]
Chris Yakopcic and Tarek M. Taha. 2013. Energy efficient perceptron pattern recognition using segmented memristor crossbar arrays. In Proceedings of the International Joint Conference on Neural Networks.
[100]
Chris Yakopcic and Tarek M. Taha. 2013. Energy efficient perceptron pattern recognition using segmented memristor crossbar arrays. In Proceedings of the International Joint Conference on Neural Networks.
[101]
Tarek M. Taha, Raqibul Hasan, and Chris Yakopcic. 2014. Memristor crossbar based multicore neuromorphic processors. In Proceedings of the International System on Chip Conference.
[102]
Chris Yakopcic, Md Zahangir Alom, and Tarek M. Taha. 2017. Extremely parallel memristor crossbar architecture for convolutional neural network implementation. In Proceedings of the International Joint Conference on Neural Networks.
[103]
Chris Yakopcic, Md Zahangir Alom, and Tarek M. Taha. 2016. Memristor crossbar deep network implementation based on a Convolutional neural network. In Proceedings of the International Joint Conference on Neural Networks.
[104]
Yong Shim, Abhronil Sengupta, and Kaushik Roy. 2016. Low-power approximate convolution computing unit with domain-wall motion based “spin-memristor” for image processing applications. In Proceedings of the Design Automation Conference.
[105]
Xiaoxiao Liu, Mengjie Mao, Beiye Liu, Hai Li, Yiran Chen, Boxun Li, Yu Wang, Hao Jiang, Mark Barnell, Qing Wu, and Jianhua Yang. 2015. RENO: A high-efficient reconfigurable neuromorphic computing accelerator design. In Proceedings of the Design Automation Conference.
[106]
Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li et al. 2014. Dadiannao: A machine-learning supercomputer. In Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture. 609–622.
[107]
Ali Shafiee, Anirban Nag, Naveen Muralimanohar, Rajeev Balasubramonian, John Paul Strachan, Miao Hu, R. Stanley Williams, and Vivek Srikumar. 2016. ISAAC: A Convolutional Neural Network Accelerator with In Situ Analog Arithmetic in Crossbars. In Proceedings of the 43rd International Symposium on Computer Architecture, ISCA 2016.
[108]
Roman Kaplan, Leonid Yavits, and Ran Ginosar. 2018. PRINS: Processing-in-storage acceleration of machine learning. IEEE Trans. Nanotechnol. 17, 5 (2018), 889--896.
[109]
Ping Chi, Shuangchen Li, Cong Xu, Tao Zhang, Jishen Zhao, Yongpan Liu, Yu Wang, and Yuan Xie. 2016. PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. In Proceedings of the 43rd International Symposium on Computer Architecture, ISCA 2016.
[110]
Cheng Xin Xue, Wei Hao Chen, Je Syu Liu, Jia Fang Li, Wei Yu Lin, Wei En Lin, Jing Hong Wang, Wei Chen Wei, Ting Wei Chang, Tung Cheng Chang, Tsung Yuan Huang, Hui Yao Kao, Shih Ying Wei, Yen Cheng Chiu, Chun Ying Lee, Chung Chuan Lo, Ya Chin King, Chorng Jung Lin, Ren Shuo Liu, Chih Cheng Hsieh, Kea Tiong Tang, and Meng Fan Chang. 2019. 24.1 A 1Mb Multibit ReRAM Computing-in-memory macro with 14.6ns parallel MAC computing time for CNN based ai edge processors. In Proceedings of the IEEE International Solid-State Circuits Conference.
[111]
Qi Liu, Bin Gao, Peng Yao, Dong Wu, Junren Chen, Yachuan Pang, Wenqiang Zhang, Yan Liao, Cheng Xin Xue, Wei Hao Chen, Jianshi Tang, Yu Wang, Meng Fan Chang, He Qian, and Huaqiang Wu. 2020. A fully integrated analog ReRAM Based 78.4TOPS/W compute-in-memory chip with fully parallel MAC computing. In Proceedings of the IEEE International Solid-State Circuits Conference.
[112]
Lixue Xia, Boxun Li, Tianqi Tang, Peng Gu, Pai Yu Chen, Shimeng Yu, Yu Cao, Yu Wang, Yuan Xie, and Huazhong Yang. 2018. MNSIM: Simulation platform for memristor-based neuromorphic computing system. IEEE Trans. Comput. Des. Integr. Circuits Syst. 37, 5 (2018), 1009--1022.
[113]
Xiaochen Peng, Shanshi Huang, Yandong Luo, Xiaoyu Sun, and Shimeng Yu. 2019. DNN+NeuroSim: An end-to-end benchmarking framework for compute-in-memory accelerators with versatile device technologies. In Proceedings of the International Electron Devices Meeting, IEDM.
[114]
Mazad S. Zaveri and Dan Hammerstrom. 2010. CMOL/CMOS Implementations of bayesian polytree inference: Digital and mixed-signal architectures and performance/price. IEEE Trans. Nanotechnol. 9, 2 (2010), 194--211.
[115]
Santosh Khasanvis, Mingyu Li, Mostafizur Rahman, Ayan K. Biswas, Mohammad Salehi-Fashami, Jayasimha Atulasimha, Supriyo Bandyopadhyay, and Csaba Andras Moritz. 2015. Architecting for causal intelligence at nanoscale. Computer 48, 12 (2015), 54--64.
[116]
Santosh Khasanvis, Mingyu Li, Mostafizur Rahman, Mohammad Salehi-Fashami, Ayan K. Biswas, Jayasimha Atulasimha, Supriyo Bandyopadhyay, and Csaba Andras Moritz. 2015. Self-similar magneto-electric nanocircuit technology for probabilistic inference engines. IEEE Trans. Nanotechnol. 14, 6 (2015), 980--991.
[117]
Sourabh Kulkarni, Sachin Bhat, Santosh Khasanvis, and Csaba Andras Moritz. 2017. Magneto-electric approximate computational circuits for Bayesian inference. In Proceedings of the IEEE International Conference on Rebooting Computing (ICRC’17).
[118]
Xiaotao Jia, Jianlei Yang, Zhaohao Wang, Yiran Chen, Hai Helen Li, and Weisheng Zhao. 2018. Spintronics based stochastic computing for efficient Bayesian inference system. In Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC’18).
[119]
Sourabh Kulkarni, Sachin Bhat, and Csaba Andras Moritz. 2019. Reconfigurable probabilistic AI architecture for personalized cancer treatment. In Proceedings of the 4th IEEE International Conference on Rebooting Computing (ICRC’19).
[120]
Z. Kulesza and W. Tylman. 2006. Implementation of Bayesian network in FPGA circuit. In Proceedings of the International Conference Mixed Design of Integrated Circuits and System (MIXDES'06).
[121]
Mingjie Lin, Ilia Lebedev, and John Wawrzynek. 2010. High-throughput bayesian computing machine with reconfigurable hardware. In Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays (FPGA’10).
[122]
The Power Consumption of NVIDIA. 2021. Retrieved from https://video-nvidia.com/en-gb/energy-nvidia-geforce.
[123]
Raqibul Hasan and Tarek M. Taha. 2014. Enabling back propagation training of memristor crossbar neuromorphic processors. In Proceedings of the International Joint Conference on Neural Networks.
[124]
Djaafar Chabi, Zhaohao Wang, Christopher Bennett, Jacques Olivier Klein, and Weisheng Zhao. 2015. Ultrahigh Density Memristor Neural Crossbar for On-Chip Supervised Learning. IEEE Trans. Nanotechnol. (2015).
[125]
Manu V. Nair and Piotr Dudek. 2015. Gradient-descent-based learning in memristive crossbar arrays. In Proceedings of the International Joint Conference on Neural Networks.
[126]
Daniel Soudry, Dotan Di Castro, Asaf Gal, Avinoam Kolodny, and Shahar Kvatinsky. 2015. Memristor-based multilayer neural networks with online gradient descent training. IEEE Trans. Neural Networks Learn. Syst. 26, 10 (2015), 2408--2421.
[127]
Cory Merkel and Dhireesha Kudithipudi. 2014. Neuromemristive extreme learning machines for pattern classification. In Proceedings of IEEE Computer Society Annual Symposium on VLSI (ISVLSI’14).
[128]
Stefano Ambrogio, Pritish Narayanan, Hsinyu Tsai, Robert M. Shelby, Irem Boybat, Carmelo Di Nolfo, Severin Sidler, Massimo Giordano, Martina Bodini, Nathan C. P. Farinha, Benjamin Killeen, Christina Cheng, Yassine Jaoudi, and Geoffrey W. Burr. 2018. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 558, 7708 (2018), 60--67.
[129]
Pai Yu Chen, Ligang Gao, and Shimeng Yu. 2016. Design of resistive synaptic array for implementing on-chip sparse learning. IEEE Trans. Multi-Scale Comput. Syst. 2, 4 (2016), 257--264.
[130]
Boxun Li, Yuzhi Wang, Yu Weng, Yiran Chen, and Huazhong Yang. 2014. Training itself: Mixed-signal training acceleration for memristor-based neural network. In Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC’14).
[131]
Abhronil Sengupta, Aparajita Banerjee, and Kaushik Roy. 2016. Hybrid Spintronic-CMOS Spiking Neural Network with On-Chip Learning: Devices, Circuits, and Systems. Phys. Rev. Appl. 6, 6 (2016), 64003.
[132]
Yongtae Kim, Yong Zhang, and Peng Li. 2015. A reconfigurable digital neuromorphic processor with memristive synaptic crossbar for cognitive computing. ACM J. Emerg. Technol. Comput. Syst. 11, 4 (2015), 1--25.
[133]
Christopher H. Bennett, Naimul Hassan, Xuan Hu, Jean Anne C. Incorvia, Joseph S. Friedman, and Matthew M. Marinella. 2019. Semi-supervised learning and inference in domain-wall magnetic tunnel junction (DW-MTJ) neural networks. In Spintronics XII, Vol. 11090, p. 110903I.
[134]
Mahdi Nazm Bojnordi and Engin Ipek. 2016. Memristive Boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning. In Proceedings of the International Symposium on High-Performance Computer Architecture.
[135]
Sukru Burc Eryilmaz, Emre Neftci, Siddharth Joshi, Sangbum Kim, Matthew Brightsky, Hsiang Lan Lung, Chung Lam, Gert Cauwenberghs, and Hon Sum Philip Wong. 2016. Training a probabilistic graphical model with resistive switching electronic synapses. IEEE Trans. Electron Devices 63, 12 (2016), 5004--5011.
[136]
Shamma Nasrin, Justine L. Drobitch, Supriyo Bandyopadhyay, and Amit Ranjan Trivedi. 2019. Low power restricted Boltzmann machine using mixed-mode magneto-tunneling junctions. IEEE Electron Device Lett. 40, 2 (2019), 345--348.
[137]
Johannes Bill and Robert Legenstein. 2014. A compound memristive synapse model for statistical learning through STDP in spiking neural networks. Front. Neurosci. 8 (2014), 412.
[138]
Bernhard Nessler, Michael Pfeiffer, Lars Buesing, and Wolfgang Maass. 2013. Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity. PLoS Comput. Biol. 9, 4 (2013), e1003037.
[139]
Behtash Behin-Aein, Vinh Diep, and Supriyo Datta. 2016. A building block for hardware belief networks. Sci. Rep. 6, 1 (2016), 1--10.
[140]
Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR'14).
[141]
Radford M. Neal. 1994. Bayesian learning for neural networks. Ph.D. Dissertation. Department of Computer Science, University of Toronto.
[142]
Yoshiki Kuramoto. 1975. Self-entrainment of a population of coupled non-linear oscillators. In Proceedings of the International Symposium on Mathematical Problems in Theoretical Physics (Lecture Notes in Physics, Vol. 39), H. Araki (Ed.). Springer-Verlag, New York, 420--422.
[143]
Dmitri E. Nikonov, Gyorgy Csaba, Wolfgang Porod, Tadashi Shibata, Danny Voils, Dan Hammerstrom, Ian A. Young, and George I. Bourianoff. 2015. Coupled-oscillator associative memory array operation for pattern recognition. IEEE J. Explor. Solid-State Comput. Devices Circuits 1 (2015), 85--93.
[144]
Jacob Torrejon, Mathieu Riou, Flavio Abreu Araujo, Sumito Tsunegi, Guru Khalsa, Damien Querlioz, Paolo Bortolotti, Vincent Cros, Kay Yakushiji, Akio Fukushima, Hitoshi Kubota, Shinji Yuasa, Mark D. Stiles, and Julie Grollier. 2017. Neuromorphic computing with nanoscale spintronic oscillators. Nature 547, 7664 (2017), 428--431.
[145]
Angeliki Pantazi, Stanisław Woźniak, Tomas Tuma, and Evangelos Eleftheriou. 2016. All-memristive neuromorphic computing with level-tuned neurons. Nanotechnology 27, 35 (2016), 355205.
[146]
Abhronil Sengupta and Kaushik Roy. 2016. A vision for all-spin neural networks: A device to system perspective. IEEE Trans. Circuits Syst. I Regul. Pap. 63, 12 (2016), 2267--2277.
[147]
Indranil Chakraborty, Gobinda Saha, Abhronil Sengupta, and Kaushik Roy. 2018. Toward fast neural computing using all-photonic phase change spiking neurons. Sci. Rep. 8, 1 (2018), 1--9.
[148]
Miguel Romera, Philippe Talatchian, Sumito Tsunegi, Flavio Abreu Araujo, Vincent Cros, Paolo Bortolotti, Juan Trastoy, Kay Yakushiji, Akio Fukushima, Hitoshi Kubota, Shinji Yuasa, Maxence Ernoult, Damir Vodenicarevic, Tifenn Hirtzlin, Nicolas Locatelli, Damien Querlioz, and Julie Grollier. 2018. Vowel recognition with four coupled spin-torque nano-oscillators. Nature 563, 7730 (2018), 230--234.
[149]
Alexander Khitun, Guanxiong Liu, and Alexander A. Balandin. 2017. Two-dimensional oscillatory neural network based on room-temperature charge-density-wave devices. IEEE Trans. Nanotechnol. 16, 5 (2017), 860--867.
[150]
Sourabh Kulkarni, Sachin Bhat, and Csaba Andras Moritz. 2017. Structure discovery for gene expression networks with emerging stochastic hardware. In Proceedings of the IEEE International Conference on Rebooting Computing (ICRC’17).
[151]
Brian Sutton, Kerem Yunus Camsari, Behtash Behin-Aein, and Supriyo Datta. 2017. Intrinsic optimization using stochastic nanomagnets. Sci. Rep. 7, 1 (2017), 1--9.
[152]
Naoya Onizawa, Daisaku Katagiri, Warren J. Gross, and Takahiro Hanyu. 2016. Analog-to-stochastic converter using magnetic tunnel junction devices for vision chips. IEEE Trans. Nanotechnol. 15, 5 (2016), 705--714.
[153]
Rafatul Faria, Kerem Y. Camsari, and Supriyo Datta. 2018. Implementing Bayesian networks with embedded stochastic MRAM. AIP Adv. 8, 4 (2018), 45101.
[154]
Ramtin Zand, Kerem Yunus Camsari, Steven D. Pyle, Ibrahim Ahmed, Chris H. Kim, and Ronald F. DeMara. 2018. Low-energy deep belief networks using intrinsic sigmoidal spintronic-based probabilistic neurons. In Proceedings of the ACM Great Lakes Symposium on VLSI (GLSVLSI'18).
[155]
Siyang Wang, Alvin R. Lebeck, and Chris Dwyer. 2015. Nanoscale resonance energy transfer-based devices for probabilistic computing. IEEE Micro (2015).
[156]
Siyang Wang, Xiangyu Zhang, Yuxuan Li, Ramin Bashizade, Song Yang, Chris Dwyer, and Alvin R. Lebeck. 2016. Accelerating Markov random field inference using molecular optical Gibbs sampling units. In Proceedings of the 43rd International Symposium on Computer Architecture (ISCA'16).
[157]
Xiangyu Zhang, Ramin Bashizade, Craig LaBoda, Chris Dwyer, and Alvin R. Lebeck. 2018. Architecting a stochastic computing unit with molecular optical devices. In Proceedings of the International Symposium on Computer Architecture.
[158]
Pierre Alexandre Blanche, Masoud Babaeian, Madeleine Glick, John Wissinger, Robert Norwood, Nasser Peyghambarian, Mark Neifeld, and Ratchaneekorn Thamvichai. 2016. Optical implementation of probabilistic graphical models. In Proceedings of the IEEE International Conference on Rebooting Computing (ICRC’16).
[159]
Alice Mizrahi, Tifenn Hirtzlin, Akio Fukushima, Hitoshi Kubota, Shinji Yuasa, Julie Grollier, and Damien Querlioz. 2018. Neural-like computing with populations of superparamagnetic basis functions. Nat. Commun. 9, 1 (2018), 1--11.
[160]
Sung Hyun Jo, Ting Chang, Idongesit Ebong, Bhavitavya B. Bhadviya, Pinaki Mazumder, and Wei Lu. 2010. Nanoscale memristor device as synapse in neuromorphic systems. Nano Lett. 10, 4 (2010), 1297--1301.
[161]
Zhongrui Wang, Saumil Joshi, Sergey E. Savel'ev, Hao Jiang, Rivu Midya, Peng Lin, Miao Hu, Ning Ge, John Paul Strachan, Zhiyong Li, Qing Wu, Mark Barnell, Geng Lin Li, Huolin L. Xin, R. Stanley Williams, Qiangfei Xia, and J. Joshua Yang. 2017. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nat. Mater. 16, 1 (2017), 101--108.
[162]
Duygu Kuzum, Rakesh G. D. Jeyasingh, Byoungil Lee, and H. S. Philip Wong. 2012. Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing. Nano Lett. 12, 5 (2012), 2179--2186.
[163]
Bryan L. Jackson, Bipin Rajendran, Gregory S. Corrado, Matthew Breitwisch, Geoffrey W. Burr, Roger Cheek, Kailash Gopalakrishnan, Simone Raoux, Charles T. Rettner, Alvaro Padilla, Alex G. Schrott, Rohit S. Shenoy, Bülent N. Kurdi, Chung H. Lam, and Dharmendra S. Modha. 2013. Nanoscale electronic synapses using phase change devices. ACM J. Emerg. Technol. Comput. Syst. 9, 2 (2013), 1--20.
[164]
Michael L. Schneider, Christine A. Donnelly, Stephen E. Russek, Burm Baek, Matthew R. Pufall, Peter F. Hopkins, Paul D. Dresselhaus, Samuel P. Benz, and William H. Rippard. 2018. Ultralow power artificial synapses using nanotextured magnetic Josephson junctions. Sci. Adv. 4, 1 (2018), e1701329.
[165]
Takeo Ohno, Tsuyoshi Hasegawa, Tohru Tsuruoka, Kazuya Terabe, James K. Gimzewski, and Masakazu Aono. 2011. Short-term plasticity and long-term potentiation mimicked in single inorganic synapses. Nat. Mater. 10, 8 (2011), 591--595.
[166]
Abhronil Sengupta, Zubair Al Azim, Xuanyao Fong, and Kaushik Roy. 2015. Spin-orbit torque induced spike-timing dependent plasticity. Appl. Phys. Lett. 106, 9 (2015), 93704.
[167]
Matthew Jerry, Pai Yu Chen, Jianchi Zhang, Pankaj Sharma, Kai Ni, Shimeng Yu, and Suman Datta. 2018. Ferroelectric FET analog synapse for acceleration of deep neural network training. In Proceedings of the International Electron Devices Meeting (IEDM’18).
[168]
James M. Bower, David Beeman, Mark Nelson, and John Rinzel. 1995. The Hodgkin-Huxley model. In The Book of GENESIS.
[169]
Doron Tal and Eric L. Schwartz. 1997. Computing with the leaky integrate-and-fire neuron: Logarithmic computation and multiplication. Neural Comput. 9, 2 (1997), 305--318.
[170]
Arun V. Holden and Yin Shui Fan. 1992. From simple to simple bursting oscillatory behaviour via chaos in the Rose-Hindmarsh model for neuronal activity. Chaos, Solitons & Fractals 2, 3 (1992), 221--236.
[171]
Jack D. Cowan. 1990. Discussion: McCulloch-Pitts and related neural nets from 1943 to 1989. Bull. Math. Biol. 52, 1 (1990), 73--97.
[172]
Matthew D. Pickett, Gilberto Medeiros-Ribeiro, and R. Stanley Williams. 2013. A scalable neuristor built with Mott memristors. Nat. Mater. (2013).
[173]
Tomas Tuma, Angeliki Pantazi, Manuel Le Gallo, Abu Sebastian, and Evangelos Eleftheriou. 2016. Stochastic phase-change neurons. Nat. Nanotechnol. 11, 8 (2016), 693.
[174]
Zhongrui Wang, Mingyi Rao, Jin Woo Han, Jiaming Zhang, Peng Lin, Yunning Li, Can Li, Wenhao Song, Shiva Asapu, Rivu Midya, Ye Zhuo, Hao Jiang, Jung Ho Yoon, Navnidhi Kumar Upadhyay, Saumil Joshi, Miao Hu, John Paul Strachan, Mark Barnell, Qing Wu, Huaqiang Wu, Qinru Qiu, R. Stanley Williams, Qiangfei Xia, and J. Joshua Yang. 2018. Capacitive neural network with neuro-transistors. Nat. Commun. 9, 1 (2018), 1--10.
[175]
Mrigank Sharad, Deliang Fan, and Kaushik Roy. 2013. Spin-neurons: A possible path to energy-efficient neuromorphic computers. J. Appl. Phys. 114, 23 (2013), 234906.
[176]
Wesley H. Brigner, Naimul Hassan, Xuan Hu, Lucian Jiang-Wei, Otitoaleke G. Akinola, Felipe Garcia-Sanchez, Massimo Pasquale, Christopher H. Bennett, Jean Anne C. Incorvia, and Joseph S. Friedman. 2019. Magnetic domain wall neuron with intrinsic leaking and lateral inhibition capability. Spintronics XII. Vol. 11090, p. 110903K.
[177]
Angeliki Pantazi, Stanisław Woźniak, Tomas Tuma, and Evangelos Eleftheriou. 2016. All-memristive neuromorphic computing with level-tuned neurons. Nanotechnology 27, 35 (2016), 355205.
[178]
Abhronil Sengupta and Kaushik Roy. 2016. A vision for all-spin neural networks: A device to system perspective. IEEE Trans. Circuits Syst. I Regul. Pap. 63, 12 (2016), 2267--2277.
[179]
Indranil Chakraborty, Gobinda Saha, Abhronil Sengupta, and Kaushik Roy. 2018. Toward fast neural computing using all-photonic phase change spiking neurons. Sci. Rep. 8, 1 (2018), 1--9.
[180]
D. B. Strukov, D. R. Stewart, J. Borghetti, X. Li, M. Pickett, G. Medeiros Ribeiro, W. Robinett, G. Snider, J. P. Strachan, W. Wu, Q. Xia, J. Joshua Yang, and R. S. Williams. 2010. Hybrid CMOS/memristor circuits. In Proceedings of the IEEE International Symposium on Circuits and Systems: Nano-Bio Circuit Fabrics and Systems.
[181]
Kuk Hwan Kim, Siddharth Gaba, Dana Wheeler, Jose M. Cruz-Albrecht, Tahir Hussain, Narayan Srinivasa, and Wei Lu. 2012. A functional hybrid memristor crossbar-array/CMOS system for data storage and neuromorphic applications. Nano Lett. 12, 1 (2012), 389--395.
[182]
Sachin Bhat, Sourabh Kulkarni, Jiajun Shi, Mingyu Li, and Csaba Andras Moritz. 2017. SkyNet: Memristor-based 3D IC for artificial neural networks. In Proceedings of the IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH’17).
[183]
Shankar Ganesh Ramasubramanian, Rangharajan Venkatesan, Mrigank Sharad, Kaushik Roy, and Anand Raghunathan. 2015. SPINDLE: SPINtronic Deep Learning Engine for large-scale neuromorphic computing. In Proceedings of the International Symposium on Low Power Electronics and Design.
[184]
E. Covi, R. George, J. Frascaroli, S. Brivio, C. Mayr, H. Mostafa, G. Indiveri, and S. Spiga. 2018. Spike-driven threshold-based learning with memristive synapses and neuromorphic silicon neurons. J. Phys. D. Appl. Phys. 51, 34 (2018), 344003.
[185]
Ramtin Zand and Ronald F. DeMara. 2019. SNRA: A spintronic neuromorphic reconfigurable array for in-circuit training and evaluation of deep belief networks. In Proceedings of the IEEE International Conference on Rebooting Computing (ICRC'18).
[186]
Jacob Torrejon, Mathieu Riou, Flavio Abreu Araujo, Sumito Tsunegi, Guru Khalsa, Damien Querlioz, Paolo Bortolotti, Vincent Cros, Kay Yakushiji, Akio Fukushima, Hitoshi Kubota, Shinji Yuasa, Mark D. Stiles, and Julie Grollier. 2017. Neuromorphic computing with nanoscale spintronic oscillators. Nature 547, 7664 (2017), 428--431.
[187]
Mohammed A. Zidan, Yeon Joo Jeong, and Wei D. Lu. 2017. Temporal learning using second-order memristors. IEEE Trans. Nanotechnol. 16, 4 (2017), 721--723.
[188]
Mahyar Shahsavari, Pierre Falez, and Pierre Boulet. 2016. Combining a volatile and nonvolatile memristor in artificial synapse to improve learning in spiking neural networks. In Proceedings of the IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH’16).
[189]
Himanshu Thapliyal, Fazel Sharifi, and S. Dinesh Kumar. 2018. Energy-efficient design of hybrid MTJ/CMOS and MTJ/Nanoelectronics circuits. IEEE Trans. Magn. 54, 7 (2018), 1--8.
[190]
Pi Feng Chiu, Meng Fan Chang, Che Wei Wu, Ching Hao Chuang, Shyh Shyuan Sheu, Yu Sheng Chen, and Ming Jinn Tsai. 2012. Low store energy, low VDDmin, 8T2R nonvolatile latch and SRAM with vertical-stacked resistive memory (memristor) devices for low power mobile applications. IEEE J. Solid-State Circ. 47, 6 (2012), 1483--1496.
[191]
Said Hamdioui, Lei Xie, Hoang Anh Du Nguyen, Mottaqiallah Taouil, Koen Bertels, Henk Corporaal, Hailong Jiao, Francky Catthoor, Dirk Wouters, Linn Eike, and Jan Van Lunteren. 2015. Memristor-based computation-in-memory architecture for data-intensive applications. In Proceedings of the Design, Automation and Test in Europe (DATE’15).
[192]
Alina Frolova and Bartek Wilczyński. 2018. Distributed bayesian networks reconstruction on the whole genome scale. PeerJ (2018).
[193]
I. Pournara, C. S. Bouganis, and G. A. Constantinides. 2005. FPGA-accelerated Bayesian learning for reconstruction of gene regulatory networks. In Proceedings of the International Conference on Field Programmable Logic and Applications (FPL'05).
[194]
R. Ferreira and J. C. G. Vendramini. 2010. FPGA-accelerated attractor computation of scale-free gene regulatory networks. In Proceedings of the International Conference on Field Programmable Logic and Applications (FPL'10).

Cited By

View all
  • (2022) Quantum Information Technologies Applied to Nature and Society. In Assessment Methods and Success Factors for Digital Education and New Media, 180--201. DOI: 10.4018/978-1-7998-8721-8.ch007. Online publication date: 16-Dec-2022.
  • (2022) A Conditional Generative Adversarial Network and Transfer Learning-Oriented Anomaly Classification System for Electrospun Nanofibers. International Journal of Neural Systems 32, 12. DOI: 10.1142/S012906572250054X. Online publication date: 13-Oct-2022.
  • (2022) Nanotechnology and Computer Science: Trends and advances. Memories - Materials, Devices, Circuits and Systems 2, 100011. DOI: 10.1016/j.memori.2022.100011. Online publication date: Oct-2022.

      Published In

      ACM Journal on Emerging Technologies in Computing Systems, Volume 17, Issue 3, July 2021, 483 pages.
      ISSN: 1550-4832; EISSN: 1550-4840; DOI: 10.1145/3464978
      Editor: Ramesh Karri
      Publisher

      Association for Computing Machinery, New York, NY, United States
      Publication History

      Published: 12 August 2021
      Accepted: 01 December 2020
      Revised: 01 September 2020
      Received: 01 February 2020
      Published in JETC Volume 17, Issue 3

      Author Tags

      1. Artificial Intelligence
      2. Bayesian Networks
      3. Computer Architecture
      4. Emerging technology
      5. Nanodevices
      6. Nanoscale Architectures
      7. Neural Networks
      8. Neuromorphic Computing
      9. Probabilistic Graphical Models

      Qualifiers

      • Research-article
      • Refereed
