DOI: 10.1145/3597503.3623333
Research article · Open access

Inferring Data Preconditions from Deep Learning Models for Trustworthy Prediction in Deployment

Published: 06 February 2024

Abstract

Deep learning models are trained under certain assumptions about the data during development and are then used for prediction during deployment. It is important to reason about the trustworthiness of the model's predictions on unseen data during deployment. Existing methods for specifying and verifying traditional software are insufficient for this task, as they cannot handle the complexity of DNN model architectures and expected outcomes. In this work, we propose a novel technique that uses rules derived from neural network computations to infer data preconditions for a DNN model and thereby determine the trustworthiness of its predictions. Our approach, DeepInfer, introduces a novel abstraction for a trained DNN model that enables weakest-precondition reasoning using Dijkstra's predicate transformer semantics. By deriving rules over the inductive type of the neural network's abstract representation, we overcome the matrix dimensionality issues that arise in the backward non-linear computation from the output layer to the input layer. Using weakest-precondition rules for each kind of activation function, we compute layer-wise preconditions from a given postcondition on the final output of the deep neural network. We extensively evaluated DeepInfer on 29 real-world DNN models using four datasets collected from five sources, demonstrating its utility, effectiveness, and performance improvement over closely related work. DeepInfer efficiently detects correct and incorrect predictions of high-accuracy models with high recall (0.98) and a high F1 score (0.84), improving significantly over the prior technique, SelfChecker. DeepInfer's average runtime overhead is low: 0.22 seconds across all unseen datasets. We also compared runtime overhead under the same hardware settings and found that DeepInfer is 3.27 times faster than SelfChecker, the state of the art in this area.
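The backward weakest-precondition step described in the abstract can be illustrated on a toy one-neuron network. This is a minimal sketch under simplifying assumptions (a single dense layer with ReLU, a threshold postcondition, and made-up weights), not DeepInfer's actual abstraction or rules:

```python
# Minimal sketch (illustrative only, not DeepInfer's implementation):
# for y = relu(w . x + b) with postcondition Q: y > t where t >= 0,
# ReLU is monotone, so wp(y := relu(w.x + b), y > t) simplifies to
# the linear precondition w.x + b > t over the input x.

def relu(v):
    return v if v > 0 else 0.0

def wp_relu_layer(w, b, t):
    """Return the weakest precondition (as a predicate over inputs)
    of 'relu(w . x + b) > t' for a threshold t >= 0."""
    assert t >= 0
    def pre(x):
        return sum(wi * xi for wi, xi in zip(w, x)) + b > t
    return pre

# Hypothetical single-neuron "model" and output postcondition.
w, b, t = [0.8, -0.5], 0.1, 0.2
pre = wp_relu_layer(w, b, t)

x = [1.0, 0.4]
y = relu(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Soundness check: the precondition holds on the input exactly when
# the postcondition holds on the model's output.
print(pre(x), y > t)  # prints: True True
```

At deployment time, checking `pre(x)` on an unseen input flags whether the output-level postcondition can be trusted to hold, without running anything beyond the derived input-level predicate; a multi-layer version would chain such rules backward layer by layer.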



      Published In

      ICSE '24: Proceedings of the IEEE/ACM 46th International Conference on Software Engineering
      May 2024
      2942 pages
ISBN: 9798400702174
DOI: 10.1145/3597503
      This work is licensed under a Creative Commons Attribution International 4.0 License.


      In-Cooperation

      • Faculty of Engineering of University of Porto

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. deep neural networks
      2. weakest precondition
      3. trustworthiness



      Acceptance Rates

Overall acceptance rate: 276 of 1,856 submissions (15%)
