DOI: 10.1109/ISCA52012.2021.00066

NASGuard: a novel accelerator architecture for robust neural architecture search (NAS) networks

Published: 25 November 2021

Abstract

Due to the wide deployment of deep learning applications in safety-critical systems, robust and secure execution of deep learning workloads is imperative. Adversarial examples, inputs carefully crafted to mislead a machine learning model, are among the most challenging attacks to detect and defeat. The dominant approach to defending against adversarial examples is to systematically design a network architecture that is sufficiently robust. Neural Architecture Search (NAS) has become the de facto approach for designing such robust neural network models, using the accuracy of detecting adversarial examples as a key metric of a network's robustness. While NAS has proven effective at improving robustness (and accuracy in general), NAS-generated network models run noticeably slower on typical DNN accelerators than hand-crafted networks, mainly because DNN accelerators are not optimized for robust NAS-generated models. In particular, the inherent multi-branch structure of NAS-generated networks causes unacceptable performance and energy overheads.
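As a concrete illustration of the attack class referred to above, the fast gradient sign method (FGSM) of Goodfellow et al. crafts an adversarial example by perturbing each input element in the direction that increases the model's loss. The following minimal PyTorch sketch shows the idea; it is illustrative only (the paper prescribes no particular attack), and model, loss_fn, and eps are placeholder names.

    import torch

    def fgsm(model, loss_fn, x, y, eps=0.03):
        # Craft an adversarial example: x_adv = x + eps * sign(grad_x loss).
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        # Step each input element by eps in the loss-increasing direction.
        x_adv = x + eps * x.grad.sign()
        # Keep the perturbed input in the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

A robust architecture, in the sense used here, is one whose accuracy degrades gracefully on such perturbed inputs.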
To bridge the gap between the robustness and the performance efficiency of deep learning applications, we need to rethink the design of AI accelerators to enable efficient execution of robust (auto-generated) neural networks. In this paper, we propose a novel hardware architecture, NASGuard, which enables efficient inference of robust NAS networks. NASGuard leverages a heuristic multi-branch mapping model to improve the efficiency of the underlying computing resources. Moreover, NASGuard addresses the load imbalance between the computation and memory-access tasks that arises from multi-branch parallel computing. Finally, we propose a topology-aware performance prediction model for data prefetching, to fully exploit the temporal and spatial locality of robust NAS-generated architectures. We have implemented NASGuard in Verilog RTL. The evaluation results show that NASGuard achieves an average speedup of 1.74× over the baseline DNN accelerator.
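To make the multi-branch mapping idea concrete, the toy sketch below sizes each parallel branch's share of a processing-element (PE) array in proportion to its compute cost, so that branches executing concurrently finish at roughly the same time. This is a hypothetical illustration of the general principle, not NASGuard's actual heuristic mapping model; branch_flops, total_pes, and map_branches are invented names.

    def map_branches(branch_flops, total_pes):
        # Assign PEs to each branch in proportion to its FLOP count,
        # guaranteeing every branch at least one PE.
        total = sum(branch_flops)
        alloc = [max(1, round(total_pes * f / total)) for f in branch_flops]
        # Trim any rounding overshoot from the largest allocations.
        while sum(alloc) > total_pes:
            alloc[alloc.index(max(alloc))] -= 1
        return alloc

    # Example: a three-branch NAS cell mapped onto a 64-PE array.
    print(map_branches([120e6, 40e6, 40e6], 64))  # -> [38, 13, 13]

Balancing completion times across branches is what removes the serialization stalls that make multi-branch NAS networks slow on conventional single-pipeline accelerators.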

Cited By

  • (2024) "A Hybrid Sparse-dense Defensive DNN Accelerator Architecture against Adversarial Example Attacks," ACM Transactions on Embedded Computing Systems 23(5): 1-28. DOI: 10.1145/3677318. Online publication date: 14-Aug-2024.
  • (2023) "AQ2PNN: Enabling Two-party Privacy-Preserving Deep Neural Network Inference with Adaptive Quantization," Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, pp. 628-640. DOI: 10.1145/3613424.3614297. Online publication date: 28-Oct-2023.
  • (2023) "Inter-layer Scheduling Space Definition and Exploration for Tiled Accelerators," Proceedings of the 50th Annual International Symposium on Computer Architecture, pp. 1-17. DOI: 10.1145/3579371.3589048. Online publication date: 17-Jun-2023.

Published In

ISCA '21: Proceedings of the 48th Annual International Symposium on Computer Architecture
June 2021, 1168 pages
ISBN: 9781450390866
In-Cooperation: IEEE
Publisher: IEEE Press

Author Tags

1. DNN accelerator
2. adversarial example
3. robust NAS network

Qualifiers

• Research-article

Conference

ISCA '21

Acceptance Rates

Overall Acceptance Rate: 543 of 3,203 submissions (17%)
