DOI: 10.1145/3485447.3511998

KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction

Published: 25 April 2022

Abstract

Recently, prompt-tuning has achieved promising results for specific few-shot classification tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and transform a classification task into a masked language modeling problem. However, for relation extraction, determining an appropriate prompt template requires domain expertise, and obtaining suitable label words is cumbersome and time-consuming. Furthermore, the relation labels carry abundant semantic and prior knowledge that cannot be ignored. To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt). Specifically, we inject the latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words. We then synergistically optimize their representations with structured constraints. Extensive experimental results on five datasets under standard and low-resource settings demonstrate the effectiveness of our approach. Our code and datasets are available on GitHub1 for reproducibility.
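To make the core idea concrete, below is a minimal illustrative sketch (ours, not the authors' released implementation) of prompt-tuning for relation extraction as masked language modeling: the input sentence is wrapped in a template that places a [MASK] slot between the two entity mentions, and each candidate relation is scored via the masked LM's prediction for that slot. The model name (roberta-base) and the fixed relation-to-label-word verbalizer are assumptions made purely for illustration; KnowPrompt itself replaces such hand-picked label words with learnable virtual type and answer words optimized jointly under structured constraints.

# Illustrative sketch only: fixed-verbalizer prompt-tuning for relation
# extraction, standing in for KnowPrompt's learnable virtual answer words.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "roberta-base"  # assumption: any masked LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

# Hypothetical verbalizer mapping relations to single label words.
verbalizer = {"founded": "founded", "employs": "employs", "acquired": "acquired"}

def score_relations(sentence: str, subj: str, obj: str) -> dict:
    # Template: "<sentence> <subj> [MASK] <obj>." -- the [MASK] slot is
    # where the masked LM predicts the relation's answer word.
    prompt = f"{sentence} {subj} {tokenizer.mask_token} {obj}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Restrict the vocabulary distribution to the verbalizer's label words.
    scores = {}
    for relation, word in verbalizer.items():
        # The leading space matters for RoBERTa's BPE vocabulary.
        word_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + word)[0])
        scores[relation] = logits[word_id].item()
    return scores

print(score_relations("Steve Jobs started Apple in 1976.", "Steve Jobs", "Apple"))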




            Published In

            WWW '22: Proceedings of the ACM Web Conference 2022
            April 2022
            3764 pages
ISBN: 9781450390965
DOI: 10.1145/3485447


            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            Published: 25 April 2022


            Author Tags

            1. Knowledge-aware
            2. Prompt-tuning
            3. Relation Extraction

            Qualifiers

            • Research-article
            • Research
            • Refereed limited

            Funding Sources

            • NSFC
            • National Key R&D Program of China

            Conference

WWW '22: The ACM Web Conference 2022
April 25 - 29, 2022
Virtual Event / Lyon, France

            Acceptance Rates

            Overall Acceptance Rate 1,899 of 8,196 submissions, 23%

            Article Metrics

• Downloads (last 12 months): 780
• Downloads (last 6 weeks): 57
Reflects downloads up to 06 Feb 2025.


            Cited By

• (2025) Construction of Prompt Verbalizer Based on Dynamic Search Tree for Text Classification. IEEE Access, 13, 16338-16351. DOI: 10.1109/ACCESS.2025.3532458. Online publication date: 2025.
• (2025) LLM-mediated domain-specific voice agents: the case of TextileBot. Behaviour & Information Technology, 1-33. DOI: 10.1080/0144929X.2025.2456667. Online publication date: 3-Feb-2025.
• (2025) A prompt tuning method based on relation graphs for few-shot relation extraction. Neural Networks, 185, 107214. DOI: 10.1016/j.neunet.2025.107214. Online publication date: May-2025.
• (2025) DIRel: Joint relational triple extraction through dual implicit relation. Neurocomputing, 624, 129374. DOI: 10.1016/j.neucom.2025.129374. Online publication date: Apr-2025.
• (2025) Domain adaptation for textual adversarial defense via prompt-tuning. Neurocomputing, 620, 129192. DOI: 10.1016/j.neucom.2024.129192. Online publication date: Mar-2025.
• (2025) MPE: Learning meta-prompt with entity-enhanced semantics for few-shot named entity recognition. Neurocomputing, 620, 129031. DOI: 10.1016/j.neucom.2024.129031. Online publication date: Mar-2025.
• (2025) KG-prompt: Interpretable knowledge graph prompt for pre-trained language models. Knowledge-Based Systems, 113118. DOI: 10.1016/j.knosys.2025.113118. Online publication date: Feb-2025.
• (2025) MPLinker: Multi-template Prompt-tuning with adversarial training for Issue-commit Link recovery. Journal of Systems and Software, 223, 112351. DOI: 10.1016/j.jss.2025.112351. Online publication date: May-2025.
• (2025) Basis is also explanation: Interpretable Legal Judgment Reasoning prompted by multi-source knowledge. Information Processing & Management, 62(3), 103996. DOI: 10.1016/j.ipm.2024.103996. Online publication date: May-2025.
• (2025) Automatically learning linguistic structures for entity relation extraction. Information Processing & Management, 62(1), 103904. DOI: 10.1016/j.ipm.2024.103904. Online publication date: Jan-2025.
