
An Instruction Set Architecture for Machine Learning

Published: 13 August 2019
    Abstract

    Machine Learning (ML) is a family of models for learning from data to improve performance on a given task. ML techniques, especially the recently revived neural networks (deep neural networks), have proven effective for a broad range of applications. ML techniques are conventionally executed on general-purpose processors (such as CPUs and GPGPUs), which are usually not energy efficient, since they invest excessive hardware resources to flexibly support various workloads. Consequently, application-specific hardware accelerators have recently been proposed to improve energy efficiency. However, such accelerators were designed for a small set of ML techniques sharing similar computational patterns, and they adopt complex and informative instructions (control signals) that correspond directly to the high-level functional blocks of an ML technique (such as layers in neural networks) or even to an ML technique as a whole. Although straightforward and easy to implement for a limited set of similar ML techniques, the resulting lack of agility in the instruction set prevents such accelerator designs from supporting a variety of different ML techniques with sufficient flexibility and efficiency.
    In this article, we first propose a novel domain-specific Instruction Set Architecture (ISA) for NN accelerators, called Cambricon, which is a load-store architecture that integrates scalar, vector, matrix, logical, data transfer, and control instructions, based on a comprehensive analysis of existing NN techniques. We then extend the application scope of Cambricon from NN to ML techniques. We also propose an assembly language, an assembler, and a runtime to support programming with Cambricon, especially targeting large-scale ML problems. Our evaluation over a total of 16 representative yet distinct ML techniques demonstrates that Cambricon exhibits strong descriptive capacity over a broad range of ML techniques and provides higher code density than general-purpose ISAs such as x86, MIPS, and GPGPU. Compared to the latest state-of-the-art NN accelerator design DaDianNao [7] (which can only accommodate three types of NN techniques), our Cambricon-based accelerator prototype implemented in TSMC 65nm technology incurs only negligible latency/power/area overheads, with versatile coverage of 10 different NN benchmarks and 7 other ML benchmarks. Compared to the recent prevalent ML accelerator PuDianNao, our Cambricon-based accelerator is able to support all the ML techniques as well as the 10 NNs, but with only an approximately 5.1% performance loss.
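    To make the instruction-set contrast concrete, the sketch below lowers a single fully connected layer, y = sigmoid(Wx + b), into a sequence of fine-grained data-transfer, matrix, vector, and activation primitives of the kind the abstract attributes to Cambricon, instead of one monolithic layer-level instruction. This is a minimal illustrative model, not the actual Cambricon encoding or assembly syntax: the opcode names (LOAD, MMV, VAV, VSIG, STORE) and the tiny interpreter are assumptions made for exposition only.

```python
# Hypothetical lowering of one fully connected layer onto a load-store
# ISA built from fine-grained primitives (illustrative opcode names,
# not the real Cambricon encoding).
import numpy as np

def run(program, mem):
    """Interpret (opcode, dest, *sources) tuples over a register file."""
    regs = {}
    for op, dst, *srcs in program:
        if op == "LOAD":      # data transfer: memory -> register
            regs[dst] = mem[srcs[0]]
        elif op == "MMV":     # matrix-multiply-vector
            regs[dst] = regs[srcs[0]] @ regs[srcs[1]]
        elif op == "VAV":     # vector-add-vector
            regs[dst] = regs[srcs[0]] + regs[srcs[1]]
        elif op == "VSIG":    # element-wise logistic activation
            regs[dst] = 1.0 / (1.0 + np.exp(-regs[srcs[0]]))
        elif op == "STORE":   # data transfer: register -> memory
            mem[srcs[0]] = regs[dst]
    return mem

mem = {"W": np.random.randn(4, 8),   # layer weights
       "x": np.random.randn(8),      # input activations
       "b": np.random.randn(4)}      # biases

# One layer expressed as several primitive instructions rather than a
# single high-level "fully-connected layer" control signal.
program = [
    ("LOAD", "m0", "W"), ("LOAD", "v0", "x"), ("LOAD", "v1", "b"),
    ("MMV", "v2", "m0", "v0"),
    ("VAV", "v3", "v2", "v1"),
    ("VSIG", "v4", "v3"),
    ("STORE", "v4", "y"),
]
print(run(program, mem)["y"])        # 4-element output vector
```

    The point of the decomposition is flexibility: the same handful of primitives can be recombined to express many different NN and ML techniques, which is the abstract's argument against layer-level control signals.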

    References

    [1]
    N. S. Altman. 1992. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat. 46, 3 (1992), 175--185.
    [2]
    L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. 1984. Classification and Regression Trees. Wadsworth International Group, Belmont, CA.
    [3]
    Srimat Chakradhar, Murugan Sankaradas, Venkata Jakkula, and Srihari Cadambi. 2010. A dynamically configurable coprocessor for convolutional neural networks. In Proceedings of the 37th Annual International Symposium on Computer Architecture.
    [4]
    Yun-Fan Chang, P. Lin, Shao-Hua Cheng, Kai-Hsuan Chan, Yi-Chong Zeng, Chia-Wei Liao, Wen-Tsung Chang, Yu-Chiang Wang, and Yu Tsao. 2014. Robust anchorperson detection based on audio streams using a hybrid I-vector and DNN system. In Proceedings of the 2014 Annual Summit and Conference on Asia-Pacific Signal and Information Processing Association.
    [5]
    Tianshi Chen, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. 2014. DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems.
    [6]
    Tianshi Chen, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. 2015. A high-throughput neural network accelerator. IEEE Micro 35, 3 (2015), 24--32.
    [7]
    Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, and Olivier Temam. 2014. DaDianNao: A machine-learning supercomputer. In Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture.
    [8]
    Ping Chi, Wang-Chien Lee, and Yuan Xie. 2016. Adapting B-plus tree for emerging non-volatile memory based main memory. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD’16).
    [9]
    Ping Chi, Shuangchen Li, Cong Xu, Tao Zhang, Jishen Zhao, Yongpan Liu, Yu Wang, and Yuan Xie. 2016. A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. In Proceedings of the 43rd International Symposium on Computer Architecture (ISCA’16).
    [10]
    A. Coates, B. Huval, T. Wang, D. J. Wu, and A. Y. Ng. 2013. Deep learning with COTS HPC systems. In Proceedings of the 30th International Conference on Machine Learning.
    [11]
    Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Mach. Learn. 20, 3 (1995), 273--297.
    [12]
    G. E. Dahl, T. N. Sainath, and G. E. Hinton. 2013. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
    [13]
    A. L. Edwards. 1984. An introduction to linear regression and correlation. Math. Gaz. 69, 2 (1984), 1--17.
    [14]
    V. Eijkhout. 2011. Introduction to High Performance Scientific Computing. Retrieved from www.lulu.com.
    [15]
    H. Esmaeilzadeh, P. Saeedi, B. N. Araabi, C. Lucas, and Sied Mehdi Fakhraie. 2006. Neural network stream processing core (NnSP) for embedded systems. In Proceedings of the 2006 IEEE International Symposium on Circuits and Systems.
    [16]
    Hadi Esmaeilzadeh, Adrian Sampson, Luis Ceze, and Doug Burger. 2012. Neural acceleration for general-purpose approximate programs. In Proceedings of the 2012 IEEE/ACM International Symposium on Microarchitecture.
    [17]
    C. Farabet, B. Martini, B. Corda, P. Akselrod, E. Culurciello, and Y. LeCun. 2011. NeuFlow: A runtime reconfigurable dataflow processor for vision. In Proceedings of the 2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.
    [18]
    C. Farabet, C. Poulet, J.Y. Han, and Y. LeCun. 2009. CNP: An FPGA-based processor for convolutional networks. In Proceedings of the 2009 International Conference on Field Programmable Logic and Applications.
    [19]
    E. W. Forgy. 1965. Cluster analysis of multivariate data: Efficiency versus interpretability of classifications. Biometrics 21, 3 (1965), 41--52.
    [20]
    V. Gokhale, Jonghoon Jin, A. Dundar, B. Martini, and E. Culurciello. 2014. A 240 G-ops/s mobile coprocessor for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops.
    [21]
    A. Graves and J. Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM networks. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks.
    [22]
    Atif Hashmi, Andrew Nere, James Jamal Thomas, and Mikko Lipasti. 2011. A case for neuromorphic ISAs. In Proceedings of the 16th International Conference on Architectural Support for Programming Languages and Operating Systems.
    [23]
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the International Conference on Computer Vision. 1026--1034.
    [24]
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16). 770--778.
    [25]
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16). 770--778.
    [26]
    Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Conference on Information and Knowledge Management.
    [27]
    INTEL. [n.d.]. AVX-512. Retrieved from https://software.intel.com/en-us/blogs/2013/avx-512-instructions.
    [28]
    INTEL. [n.d.]. MKL. Retrieved from https://software.intel.com/en-us/intel-mkl.
    [29]
    Fernando J. Pineda. 1987. Generalization of back-propagation to recurrent neural networks. Phys. Rev. Lett. (1987), 602--611.
    [30]
    K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. 2009. What is the best multi-stage architecture for object recognition? In Proceedings of the 12th IEEE International Conference on Computer Vision.
    [31]
    Norman P. Jouppi, Cliff Young, Nishant Patil, David A. Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley, Matt Dau, Jeffrey Dean, Ben Gelb, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Hagmann, C. Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan, Harshit Khaitan, Daniel Killebrew, Andy Koch, Naveen Kumar, Steve Lacy, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, Jonathan Ross, Matt Ross, Amir Salek, Emad Samadiani, Chris Severn, Gregory Sizikov, Matthew Snelham, Jed Souter, Dan Steinberg, Andy Swing, Mercedes Tan, Gregory Thorson, Bo Tian, Horia Toma, Erick Tuttle, Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, and Doe Hyun Yoon. 2017. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA’17). 1--12.
    [32]
    V. Kantabutra. 1996. On hardware for computing exponential and trigonometric functions. IEEE Trans. Comput. 45, 3 (1996), 328--339.
    [33]
    A. Krizhevsky. [n.d.]. cuda-convnet: High-performance C++/CUDA implementation of convolutional neural networks.
    [34]
    Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25.
    [35]
    Pat Langley, Wayne Iba, and Kevin Thompson. 1992. An analysis of Bayesian classifiers. In Proceedings of the 10th National Conference on Artificial Intelligence. 223--228.
    [36]
    Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, and Yoshua Bengio. 2007. An empirical evaluation of deep architectures on problems with many factors of variation. In Proceedings of the 24th International Conference on Machine Learning.
    [37]
    Q.V. Le. 2013. Building high-level features using large scale unsupervised learning. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
    [38]
    Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 11 (1998), 2278--2324.
    [39]
    Daofu Liu, Tianshi Chen, Shaoli Liu, Jinhong Zhou, Shengyuan Zhou, Olivier Teman, Xiaobing Feng, Xuehai Zhou, and Yunji Chen. 2015. PuDianNao: A polyvalent machine learning accelerator. In Proceedings of the 20th International Conference on Architectural Support for Programming Languages and Operating Systems.
    [40]
    Shaoli Liu, Zidong Du, Jinhua Tao, Dong Han, Tao Luo, Yuan Xie, Yunji Chen, and Tianshi Chen. 2016. Cambricon: An instruction set architecture for neural networks. In Proceedings of the 43rd ACM/IEEE Annual International Symposium on Computer Architecture (ISCA’16). 393--405.
    [41]
    A. A. Maashri, M. DeBole, M. Cotter, N. Chandramoorthy, Yang Xiao, V. Narayanan, and C. Chakrabarti. 2012. Accelerating neuromorphic vision algorithms for recognition. In Proceedings of the 49th ACM/EDAC/IEEE Design Automation Conference.
    [42]
    Divya Mahajan, Jongse Park, Emmanuel Amaro, Hardik Sharma, Amir Yazdanbakhsh, Joon Kyung Kim, and Hadi Esmaeilzadeh. 2016. TABLA: A unified template-based framework for accelerating statistical machine learning. In Proceedings of the 2016 IEEE International Symposium on High Performance Computer Architecture (HPCA’16). 14--26.
    [43]
    G. Marsaglia and W. W. Tsang. 2000. The ziggurat method for generating random variables. J. Stat. Softw. 5, 8 (2000). https://EconPapers.repec.org/RePEc:jss:jstsof:v:005:i08.
    [44]
    Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha. 2014. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 6197 (2014), 668--673.
    [45]
    Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529--533.
    [46]
    M. A. Motter. 1999. Control of the NASA Langley 16-foot transonic tunnel with the self-organizing map. In Proceedings of the 1999 American Control Conference.
    [47]
    NVIDIA. [n.d.]. CUBLAS. Retrieved from https://developer.nvidia.com/cublas.
    [48]
    C. S. Oliveira and E. Del Hernandez. 2004. Forms of adapting patterns to Hopfield neural networks with larger number of nodes and higher storage capacity. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks.
    [49]
    Jongse Park, Hardik Sharma, Divya Mahajan, Joon Kyung Kim, Preston Olds, and Hadi Esmaeilzadeh. 2017. Scale-out acceleration for machine learning. In Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO’17). 367--381.
    [50]
    David A. Patterson and Carlo H. Sequin. 1981. RISC I: A reduced instruction set VLSI computer. In Proceedings of the 8th Annual Symposium on Computer Architecture.
    [51]
    M. Peemen, A. A. A. Setio, B. Mesman, and H. Corporaal. 2013. Memory-centric accelerator design for convolutional neural networks. In Proceedings of the 31st IEEE International Conference on Computer Design.
    [52]
    John C. Platt, Nello Cristianini, and John Shawe-Taylor. 1999. Large margin DAGs for multiclass classification. In Proceedings of the Advances in Neural Information Processing Systems 12 (NIPS’99). 547--553. http://papers.nips.cc/paper/1773-large-margin-dags-for-multiclass-classification
    [53]
    Matt Poremba, Tao Zhang, and Yuan Xie. 2016. Fine-granularity tile-level parallelism in non-volatile memory architecture with two-dimensional bank subdivision. In Proceedings of the IEEE/ACM Design Automation Conference (DAC’16).
    [54]
    J. R. Quinlan. 1986. Induction of decision trees. Mach. Learn. 1, 1 (1986), 81--106. Kluwer Academic Publishers.
    [55]
    J. Ross Quinlan. 1996. Bagging, boosting, and C4.5. In Proceedings of the 13th National Conference on Artificial Intelligence and 8th Innovative Applications of Artificial Intelligence Conference (AAAI’96/IAAI’96). 725--730.
    [56]
    R. Salakhutdinov and G. E. Hinton. 2012. An efficient learning procedure for deep Boltzmann machines. Neural Comput. 24, 8 (2012), 1967--2006.
    [57]
    M. Sankaradas, V. Jakkula, S. Cadambi, S. Chakradhar, I. Durdanovic, E. Cosatto, and H. P. Graf. 2009. A massively parallel coprocessor for convolutional neural networks. In Proceedings of the 20th IEEE International Conference on Application-specific Systems, Architectures and Processors.
    [58]
    R. Sarikaya, G. E. Hinton, and A. Deoras. 2014. Application of deep belief networks for natural language understanding. IEEE Trans. Audio Speech Lang. Process. 22, 4 (2014), 778--784.
    [59]
    P. Sermanet and Y. LeCun. 2011. Traffic sign recognition with multi-scale convolutional networks. In Proceedings of the 2011 International Joint Conference on Neural Networks.
    [60]
    Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations. http://arxiv.org/abs/1409.1556
    [61]
    Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1--9.
    [62]
    O. Temam. 2012. A defect-tolerant accelerator for emerging high-performance applications. In Proceedings of the 39th Annual International Symposium on Computer Architecture.
    [63]
    V. Vanhoucke, A. Senior, and M. Z. Mao. 2011. Improving the speed of neural networks on CPUs. In Proceedings of the Deep Learning and Unsupervised Feature Learning Workshop (NIPS’11).
    [64]
    Yu Wang, Tianqi Tang, Lixue Xia, Boxun Li, Peng Gu, Huazhong Yang, Hai Li, and Yuan Xie. 2015. Energy efficient RRAM spiking neural network for real time classification. In Proceedings of the 25th Edition of the Great Lakes Symposium on VLSI.
    [65]
    Cong Xu, Dimin Niu, Naveen Muralimanohar, Rajeev Balasubramonian, Tao Zhang, Shimeng Yu, and Yuan Xie. 2015. Overcoming the challenges of cross-point resistive memory architectures. In Proceedings of the 21st International Symposium on High Performance Computer Architecture.
    [66]
    Tao Xu, Jieping Zhou, Jianhua Gong, Wenyi Sun, Liqun Fang, and Yanli Li. 2012. Improved SOM based data mining of seasonal flu in mainland China. In Proceedings of the 2012 8th International Conference on Natural Computation.
    [67]
    Xian-Hua Zeng, Si-Wei Luo, and Jiao Wang. 2007. Auto-associative neural network system for recognition. In Proceedings of the 2007 International Conference on Machine Learning and Cybernetics.
    [68]
    Zhengyou Zhang, M. Lyons, M. Schuster, and S. Akamatsu. 1998. Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron. In Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition.
    [69]
    Jishen Zhao, Guangyu Sun, Gabriel H. Loh, and Yuan Xie. 2013. Optimizing GPU energy efficiency with 3D die-stacking graphics memory and reconfigurable memory interface. ACM Trans. Archit. Code Optim. 10, 4, Article 24 (Dec. 2013), 25 pages.



    Published In

    ACM Transactions on Computer Systems, Volume 36, Issue 3
    August 2018
    99 pages
    ISSN:0734-2071
    EISSN:1557-7333
    DOI:10.1145/3341160

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 13 August 2019
    Accepted: 01 May 2019
    Revised: 01 April 2019
    Received: 01 May 2018
    Published in TOCS Volume 36, Issue 3

    Permissions

    Request permissions for this article.


    Author Tags

    1. Instruction set architecture
    2. machine learning
    3. machine-learning accelerator

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Funding Sources

    • National Key Research and Development Program of China
    • 973 Program of China
    • Strategic Priority Research Program of Chinese Academy of Science
    • Key Research Projects in Frontier Science of Chinese Academy of Sciences
    • Transformation and Transfer of Scientific and Technological Achievements of Chinese Academy of Sciences
    • Beijing Natural Science Foundation
    • CAS Center for Excellence in Brain Science and Intelligence Technology (CEBSIT)
    • NSF
    • National Science and Technology


    Article Metrics

    • Downloads (Last 12 months)1,613
    • Downloads (Last 6 weeks)118
    Reflects downloads up to 26 Jul 2024


    Cited By

    • (2023) The Design of Optimized RISC Processor for Edge Artificial Intelligence Based on Custom Instruction Set Extension. IEEE Access 11, 49409--49421. https://doi.org/10.1109/ACCESS.2023.3276411
    • (2022) SmartVM: A Smart Contract Virtual Machine for Fast On-Chain DNN Computations. IEEE Transactions on Parallel and Distributed Systems 33, 12, 4100--4116. https://doi.org/10.1109/TPDS.2022.3177405. Online: 1-Dec-2022
    • (2021) Design and Application of English Grammar Intelligent Question Bank System. Scientific Programming 2021. https://doi.org/10.1155/2021/9483719. Online: 29-Oct-2021
    • (2021) Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights. Proceedings of the IEEE 109, 10, 1706--1752. https://doi.org/10.1109/JPROC.2021.3098483. Online: Oct-2021
    • (2021) Comprehensive Study: Machine Learning & Deep Learning Algorithms for Paddy Crops. In 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), 1--7. https://doi.org/10.1109/ICRITO51393.2021.9596189. Online: 3-Sep-2021
    • (2020) Assessment of Machine Learning Algorithms for Near-Sensor Computing under Radiation Soft Errors. In 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS), 1--4. https://doi.org/10.1109/ICECS49266.2020.9294938. Online: 23-Nov-2020
    • (2020) Towards Unified INT8 Training for Convolutional Neural Network. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1966--1976. https://doi.org/10.1109/CVPR42600.2020.00204. Online: Jul-2020
    • (2020) Secure, privacy-preserving and federated machine learning in medical imaging. Nature Machine Intelligence 2, 6, 305--311. https://doi.org/10.1038/s42256-020-0186-1. Online: 8-Jun-2020
