
A Fully Test-time Training Framework for Semi-supervised Node Classification on Out-of-Distribution Graphs

Published: 19 June 2024

Abstract

    Graph neural networks (GNNs) have shown great potential in representation learning for various graph tasks. However, the distribution shift between the training and test sets poses a challenge to the generalization ability of GNNs. To address this challenge, HomoTTT proposes a fully test-time training (TTT) framework for GNNs to enhance the model's generalization capabilities on node classification tasks. Specifically, HomoTTT designs a homophily-based and parameter-free graph contrastive learning task with adaptive augmentation to guide the model's adaptation during test-time training, allowing the model to adapt to the specific target data. In the inference stage, HomoTTT integrates the original GNN model and the adapted model after TTT using a homophily-based model selection method, which prevents potential performance degradation caused by unconstrained model adaptation. Extensive experimental results on six benchmark datasets demonstrate the effectiveness of the proposed framework. Additionally, an exploratory study further validates the rationality of the homophily-based graph contrastive learning task with adaptive augmentation and the homophily-based model selection designed in HomoTTT.
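    The homophily-based model selection described in the abstract admits a compact illustration. The sketch below is a hypothetical rendition, not the paper's actual implementation: the function names and the plain edge-list graph representation are assumptions. It computes the edge homophily ratio (the fraction of edges whose endpoints receive the same pseudo-label) for the predictions of the original model and of the TTT-adapted model, then keeps whichever model yields the more homophilous labeling.

    ```python
    import numpy as np

    def edge_homophily(edges, labels):
        """Edge homophily ratio: fraction of edges whose two endpoints
        carry the same label. `edges` is a list of (src, dst) pairs,
        `labels` a per-node label array."""
        labels = np.asarray(labels)
        src, dst = np.asarray(edges).T
        return float(np.mean(labels[src] == labels[dst]))

    def select_predictions(edges, preds_original, preds_adapted):
        """Homophily-based model selection (illustrative): return the
        prediction set whose pseudo-labels make the graph more homophilous."""
        h_orig = edge_homophily(edges, preds_original)
        h_adapt = edge_homophily(edges, preds_adapted)
        # Ties go to the adapted model; a real system might instead
        # require a margin before overriding the original model.
        return preds_adapted if h_adapt >= h_orig else preds_original
    ```

    On a path graph 0-1-2-3, pseudo-labels [0, 0, 1, 1] give a homophily ratio of 2/3 (two of three edges connect same-label nodes), so a labeling with a higher ratio would be selected. This selection rule only matches the paper's intent on graphs that are homophilous to begin with; on heterophilous graphs the criterion would need to be inverted or replaced.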


    Cited By

    • Efficient algorithms to mine concise representations of frequent high utility occupancy patterns. Applied Intelligence 54, 5 (2024), 4012–4042. DOI: 10.1007/s10489-024-05296-2. Online publication date: 18 March 2024.


      Published In

      ACM Transactions on Knowledge Discovery from Data, Volume 18, Issue 7
      August 2024
      505 pages
      ISSN:1556-4681
      EISSN:1556-472X
      DOI:10.1145/3613689
      Editor: Jian Pei

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 19 June 2024
      Online AM: 26 February 2024
      Accepted: 21 January 2024
      Revised: 14 January 2024
      Received: 15 September 2023
      Published in TKDD Volume 18, Issue 7


      Author Tags

      Out-of-distribution; contrastive learning; graph neural network

      Qualifiers

      • Research-article

      Funding Sources

      • National Key R&D Program of China

      Article Metrics

      • Downloads (Last 12 months): 302
      • Downloads (Last 6 weeks): 71

      Reflects downloads up to 12 Aug 2024
