
Egocentric Early Action Prediction via Adversarial Knowledge Distillation

Published: 06 February 2023

Abstract

Egocentric early action prediction aims to recognize actions from the first-person view by observing only a partial video segment, which is challenging due to the limited context the partial video provides. In this article, we propose a novel multi-modal adversarial knowledge distillation framework to tackle the egocentric early action prediction problem. In particular, our approach involves a teacher network that learns an enhanced representation of the partial video by additionally considering the future, unobserved video segment, and a student network that mimics the teacher to produce a powerful representation of the partial video and, based on it, predicts the action label. To promote knowledge distillation between the teacher and the student network, we seamlessly integrate adversarial learning with latent and discriminative knowledge regularizations, encouraging the learned representations of the partial video to be more informative and more discriminative toward action prediction. Finally, we devise a multi-modal fusion module for comprehensive action label prediction. Extensive experiments on two public egocentric datasets validate the superiority of our method over state-of-the-art methods. We have released the code and involved parameters to benefit other researchers.
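As a rough illustration of the distillation objectives sketched in the abstract, the student's training loss can be viewed as a classification term plus a latent (representation-mimicking) regularizer and a discriminative (soft-label) regularizer. The code below is a simplified numpy sketch under these assumptions, not the authors' released implementation; the function names and the weights `lam_lat` / `lam_dis` are hypothetical, and the adversarial discriminator term is omitted for brevity.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def latent_loss(student_repr, teacher_repr):
    # Latent knowledge regularization: the student's representation of the
    # partial video mimics the teacher's, which also saw the future segment.
    return float(np.mean((student_repr - teacher_repr) ** 2))

def discriminative_loss(student_logits, teacher_logits, T=2.0):
    # Discriminative knowledge regularization: match the teacher's softened
    # class distribution (standard soft-label distillation with temperature T).
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    return float(-np.sum(p_t * np.log(p_s + 1e-12)))

def student_loss(student_repr, teacher_repr, student_logits, teacher_logits,
                 label, lam_lat=1.0, lam_dis=0.5):
    # Total student objective: cross-entropy on the ground-truth label plus
    # the two distillation regularizers (adversarial term not shown).
    ce = -np.log(softmax(student_logits)[label] + 1e-12)
    return ce + lam_lat * latent_loss(student_repr, teacher_repr) \
              + lam_dis * discriminative_loss(student_logits, teacher_logits)
```

When the student exactly matches the teacher, both regularizers reach their minima; in the paper, an adversarial discriminator additionally pushes the student's representations toward the teacher's distribution.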




    Published In

    ACM Transactions on Multimedia Computing, Communications, and Applications  Volume 19, Issue 2
    March 2023
    540 pages
    ISSN:1551-6857
    EISSN:1551-6865
    DOI:10.1145/3572860
    Editor: Abdulmotaleb El Saddik

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 06 February 2023
    Online AM: 16 June 2022
    Accepted: 07 June 2022
    Revised: 04 April 2022
    Received: 09 November 2021
    Published in TOMM Volume 19, Issue 2


    Author Tags

    1. Early action prediction
    2. teacher-student knowledge distillation
    3. egocentric video understanding
    4. generative adversarial networks

    Qualifiers

    • Research-article
    • Refereed

    Funding Sources

    • National Key Research and Development Project of New Generation Artificial Intelligence

    Article Metrics

    • Downloads (Last 12 months)177
    • Downloads (Last 6 weeks)28
    Reflects downloads up to 03 Feb 2025

    Cited By

    • (2025) Uncertain features exploration in temporal moment localization via language by utilizing customized temporal transformer. Knowledge-Based Systems 309, 112667. DOI: 10.1016/j.knosys.2024.112667. Jan 2025.
    • (2025) Multi-modal integrated proposal generation network for weakly supervised video moment retrieval. Expert Systems with Applications 269, 126497. DOI: 10.1016/j.eswa.2025.126497. Apr 2025.
    • (2024) Intentional evolutionary learning for untrimmed videos with long tail distribution. Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 7713–7721. DOI: 10.1609/aaai.v38i7.28605. 20 Feb 2024.
    • (2024) CL2CM. Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 5651–5659. DOI: 10.1609/aaai.v38i6.28376. 20 Feb 2024.
    • (2024) Multimodal LLM Enhanced Cross-lingual Cross-modal Retrieval. Proceedings of the 32nd ACM International Conference on Multimedia, 8296–8305. DOI: 10.1145/3664647.3680886. 28 Oct 2024.
    • (2024) Maskable Retentive Network for Video Moment Retrieval. Proceedings of the 32nd ACM International Conference on Multimedia, 1476–1485. DOI: 10.1145/3664647.3680746. 28 Oct 2024.
    • (2024) Multimodal Score Fusion with Sparse Low-rank Bilinear Pooling for Egocentric Hand Action Recognition. ACM Transactions on Multimedia Computing, Communications, and Applications 20, 7, 1–22. DOI: 10.1145/3656044. 16 May 2024.
    • (2024) Transform-Equivariant Consistency Learning for Temporal Sentence Grounding. ACM Transactions on Multimedia Computing, Communications, and Applications 20, 4, 1–19. DOI: 10.1145/3634749. 11 Jan 2024.
    • (2024) Emotional Video Captioning With Vision-Based Emotion Interpretation Network. IEEE Transactions on Image Processing 33, 1122–1135. DOI: 10.1109/TIP.2024.3359045. 1 Feb 2024.
    • (2024) Multilevel Joint Association Networks for Diverse Human Motion Prediction. IEEE Transactions on Emerging Topics in Computational Intelligence 8, 6, 4165–4178. DOI: 10.1109/TETCI.2024.3386840. Dec 2024.
