DOI: 10.1145/3607199.3607200
Research article | Open access

Black-box Attacks Against Neural Binary Function Detection

Published: 16 October 2023

Abstract

Binary analyses based on deep neural networks (DNNs), or neural binary analyses (NBAs), have become a hotly researched topic in recent years. DNNs have been wildly successful at pushing the performance and accuracy envelopes in the natural language and image processing domains. Thus, DNNs are highly promising for solving binary analysis problems that are hard due to a lack of complete information resulting from the lossy compilation process. Despite this promise, it is unclear whether the prevailing strategy of repurposing embeddings and model architectures originally developed for other problem domains is sound, given the adversarial contexts in which binary analysis often operates.
In this paper, we empirically demonstrate that the current state of the art in neural function boundary detection is vulnerable to both inadvertent and deliberate adversarial attacks. We proceed from the insight that current-generation NBAs are built upon embeddings and model architectures intended to solve syntactic problems. We devise a simple, reproducible, and scalable black-box methodology for exploring the space of inadvertent attacks (instruction sequences that could be emitted by common compiler toolchains and configurations) that exploits this syntactic design focus. We then show that these inadvertent misclassifications can be exploited by an attacker, serving as the basis for a highly effective black-box adversarial example generation process. We evaluate this methodology against two state-of-the-art neural function boundary detectors: XDA and DeepDi. We conclude with an analysis of the evaluation data and recommendations for how future research might avoid succumbing to similar attacks.
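To make the black-box probing idea concrete, the following is a minimal, hypothetical Python sketch of a harness in the spirit of the methodology described above: it splices compiler-plausible byte sequences into a code buffer and checks whether a function boundary detector's predictions drift. The predict_boundaries() oracle, the candidate byte sequences, and the toy layout are illustrative stand-ins of our own, not the paper's actual corpus or the XDA/DeepDi query interfaces.

```python
# Minimal, hypothetical sketch of a black-box probing harness (not the paper's
# actual tooling). predict_boundaries() is a placeholder oracle standing in for
# a query to a real neural function boundary detector such as XDA or DeepDi.

from typing import Set


def predict_boundaries(code: bytes) -> Set[int]:
    """Placeholder oracle: flag offsets that look like a classic x86-64 prologue
    (push rbp; mov rbp, rsp). A real harness would query the model under test."""
    return {
        i for i in range(len(code) - 3)
        if code[i:i + 4] == b"\x55\x48\x89\xe5"
    }


# Candidate byte sequences a common compiler toolchain could plausibly emit
# between or around functions (illustrative examples, not the paper's corpus).
CANDIDATES = {
    "multi-byte nop": b"\x0f\x1f\x44\x00\x00",
    "int3 padding": b"\xcc\xcc\xcc\xcc",
    "endbr64": b"\xf3\x0f\x1e\xfa",
    "prologue-like bytes": b"\x55\x48\x89\xe5",
}


def probe(baseline: bytes, insert_at: int) -> None:
    """Splice each candidate into the baseline and report any boundary drift."""
    expected = predict_boundaries(baseline)
    for name, seq in CANDIDATES.items():
        mutated = baseline[:insert_at] + seq + baseline[insert_at:]
        got = predict_boundaries(mutated)
        # Shift expected offsets past the insertion point so the comparison is fair.
        shifted = {o + len(seq) if o >= insert_at else o for o in expected}
        if got != shifted:
            print(f"[{name}] predictions drifted: {sorted(shifted)} -> {sorted(got)}")


if __name__ == "__main__":
    # Two toy "functions" (prologue + ret) laid out back to back.
    code = b"\x55\x48\x89\xe5\xc3" * 2
    probe(code, insert_at=5)
```

In the paper's setting, the oracle would be an actual query to the detector under test, and, per the abstract, sequences that perturb its predictions (the inadvertent misclassifications) would then serve as seeds for the black-box adversarial example generation step.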



Published In

RAID '23: Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses
October 2023, 769 pages
ISBN: 9798400707650
DOI: 10.1145/3607199
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. binary analysis
  2. deep neural network
  3. disassembly
  4. function boundary detection


Conference

RAID 2023
