DOI: 10.1145/3582016.3582061

Heron: Automatically Constrained High-Performance Library Generation for Deep Learning Accelerators

Published: 25 March 2023

Abstract

Deep Learning Accelerators (DLAs) effectively improve both the performance and energy efficiency of compute-intensive deep learning algorithms. A flexible and portable means of exploiting DLAs is high-performance software libraries with well-established APIs, which are typically either implemented by hand or generated automatically by exploration-based compilation approaches. Although exploration-based approaches greatly reduce programming effort, they fail to find optimal or near-optimal programs because the massive inherent constraints of DLAs cannot be accurately characterized, yielding a large but low-quality search space.
In this paper, we propose Heron, a novel exploration-based approach that efficiently generates high-performance libraries for DLAs. The key is to automatically (rather than manually) enforce massive, sophisticated, yet accurate constraints throughout program generation, covering both constrained space generation and constrained space exploration. By statically analyzing the compute definition, Heron automatically derives sophisticated constraints that properly characterize the inherent constraints of DLAs, greatly pruning invalid program candidates to produce a high-quality constrained search space. To explore the resulting space efficiently, we further propose a novel constraint-based genetic algorithm whose evolutionary process operates on formulated constraint satisfaction problems rather than concrete solutions, so the sophisticated constraints of the search space are strictly preserved throughout exploration. We conduct extensive experiments on three representative DLAs: NVIDIA TensorCore, Intel DL Boost, and the TVM Versatile Tensor Accelerator. Experimental results demonstrate that Heron achieves a 2.71x average speedup over four state-of-the-art automatic generation approaches and a 2.00x average speedup over vendor-provided hand-tuned libraries.
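The constraint-based genetic algorithm described above can be illustrated with a minimal, self-contained sketch. This is not Heron's actual implementation: the variable names (tile_x, tile_y), domains, constraints, and fitness function are all hypothetical stand-ins. The point it demonstrates is the key idea from the abstract: each individual in the population is a set of constraints (here, pinned variable assignments) rather than a concrete program, and a CSP solve step materializes a concrete, always-feasible candidate for evaluation, so hard constraints are never violated during evolution.

```python
# Illustrative sketch (NOT Heron's implementation): a genetic algorithm
# whose individuals are constraint sets rather than concrete programs.
# A tiny brute-force CSP solver materializes each individual into a
# feasible concrete point, so hard constraints are preserved by design.
import random

VARS = ["tile_x", "tile_y"]      # hypothetical tiling knobs
DOMAIN = [1, 2, 4, 8, 16, 32]    # candidate tile sizes

def feasible(p):
    # Hard constraints (stand-ins for DLA restrictions): the tile
    # footprint must fit a 256-element buffer, and tile_x must divide 32.
    return p["tile_x"] * p["tile_y"] <= 256 and 32 % p["tile_x"] == 0

def solve(pins):
    # Brute-force CSP solve: honor pinned variables, search the rest.
    for x in DOMAIN:
        for y in DOMAIN:
            p = {"tile_x": pins.get("tile_x", x),
                 "tile_y": pins.get("tile_y", y)}
            if feasible(p):
                return p
    return None  # the pinned constraints are unsatisfiable

def fitness(p):
    # Toy objective: prefer large, square-ish tiles.
    return p["tile_x"] * p["tile_y"] - abs(p["tile_x"] - p["tile_y"])

def mutate(pins):
    # Mutation edits the constraint set, not a concrete solution.
    child = dict(pins)
    child[random.choice(VARS)] = random.choice(DOMAIN)
    return child

def evolve(generations=30, pop_size=8, seed=0):
    random.seed(seed)
    pop = [{} for _ in range(pop_size)]  # start fully unconstrained
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        scored = []
        for pins in pop:
            sol = solve(pins)            # concrete point, or None
            if sol is None:
                continue                 # unsatisfiable individuals die
            f = fitness(sol)
            scored.append((f, pins))
            if f > best_fit:
                best, best_fit = sol, f
        scored.sort(key=lambda t: t[0], reverse=True)
        parents = [pins for _, pins in scored[: max(2, pop_size // 2)]]
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return best

if __name__ == "__main__":
    print(evolve())
```

Because selection and mutation act on constraint sets and every evaluated candidate passes through the solver, the search never spends time on invalid programs, which is the property the abstract attributes to Heron's exploration strategy.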


Published In

ASPLOS 2023: Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3. Association for Computing Machinery, New York, NY, United States, March 2023. ISBN: 9781450399180. DOI: 10.1145/3582016.

Author Tags

1. code generation
2. compiler optimization
3. tensor computation