
Moment is Important: Language-Based Video Moment Retrieval via Adversarial Learning

Published: 16 February 2022

Abstract

The newly emerging task of language-based video moment retrieval aims to retrieve a target moment from an untrimmed video given a natural language query. Compared with traditional whole-video retrieval, it is more practical in real applications because it accurately localizes a specific moment within a video. In this work, we propose a novel solution that thoroughly investigates language-based video moment retrieval from the perspective of adversarial learning. The key to our solution is to formulate the task as an adversarial learning problem with two tightly connected components. Specifically, a reinforcement learning module serves as the generator, producing a set of candidate video moments. Meanwhile, a multi-task learning module serves as the discriminator, integrating inter-modal and intra-modal relations in a unified framework via a sequential update strategy. Finally, the generator and the discriminator reinforce each other through adversarial learning, jointly optimizing both video moment ranking and video moment localization. Extensive experiments on two challenging benchmarks, the Charades-STA and TACoS datasets, demonstrate the effectiveness and rationality of our solution. Moreover, our framework exhibits excellent robustness on the larger and unbiased ActivityNet Captions and ActivityNet-CD datasets.
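To make the generator-discriminator interplay concrete, the following is a minimal PyTorch sketch of an adversarial ranking-and-localization loop in the spirit of the abstract. The module designs, feature dimensionality, BPR-style pairwise ranking loss, and the differentiable generator update are all illustrative assumptions rather than the paper's actual architecture; in particular, the paper trains its generator with reinforcement learning, whereas this sketch simply backpropagates through the discriminator's score.

import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 512  # assumed dimensionality for both video and query features


class Generator(nn.Module):
    """Proposes a candidate moment as normalized (start, end) boundaries.

    Hypothetical design; the paper's generator is a reinforcement-learning
    policy that produces a set of candidate moments.
    """

    def __init__(self):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(2 * FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, video_feat, query_feat):
        logits = self.policy(torch.cat([video_feat, query_feat], dim=-1))
        return torch.sigmoid(logits)  # (start, end), each in [0, 1]


class Discriminator(nn.Module):
    """Scores how well a candidate moment matches the query.

    Only the inter-modal (cross-modal ranking) head is sketched; the paper's
    multi-task discriminator also models intra-modal relations.
    """

    def __init__(self):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(2 * FEAT_DIM + 2, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, video_feat, query_feat, moment):
        x = torch.cat([video_feat, query_feat, moment], dim=-1)
        return self.score(x).squeeze(-1)


gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)


def train_step(video_feat, query_feat, gt_moment):
    # Discriminator update: rank the ground-truth moment above the generated
    # one with a pairwise, BPR-style ranking loss (an assumed choice).
    fake_moment = gen(video_feat, query_feat).detach()
    d_loss = -F.logsigmoid(
        disc(video_feat, query_feat, gt_moment)
        - disc(video_feat, query_feat, fake_moment)).mean()
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: maximize the discriminator's score on the generated
    # moment. (The paper would instead feed this score back as a reinforcement
    # learning reward; backpropagating through it is a simplification.)
    g_loss = -disc(video_feat, query_feat, gen(video_feat, query_feat)).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()


# Toy usage with random tensors standing in for encoded video/query features.
v, q = torch.randn(8, FEAT_DIM), torch.randn(8, FEAT_DIM)
gt = torch.rand(8, 2).sort(dim=-1).values  # sorted (start, end) pairs
print(train_step(v, q, gt))

Alternating these two updates is the standard minimax recipe for adversarial retrieval (cf. IRGAN): the discriminator's improving ranking signal gradually pushes the generator's proposals toward moments that match the query.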




Published In

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 2
May 2022, 494 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3505207

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 16 February 2022
Accepted: 01 July 2021
Revised: 01 June 2021
Received: 01 January 2021
Published in TOMM Volume 18, Issue 2


Author Tags

  1. Video moment retrieval
  2. cross-modal retrieval
  3. adversarial learning
  4. reinforcement learning
  5. continual multi-task learning

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • National Natural Science Foundation of China
  • Natural Science Foundation of Hunan Province
  • National Key Research and Development Project of China
  • Science and Technology Key Projects of Hunan Province
  • Special Funds for the Construction of Innovative Provinces in Hunan Province of China
  • Science and Technology Project of Changsha City
  • Fundamental Research Funds for the Central Universities


Article Metrics

  • Downloads (last 12 months): 178
  • Downloads (last 6 weeks): 4
Reflects downloads up to 02 Sep 2024

Cited By

  • (2024) Exploiting Instance-level Relationships in Weakly Supervised Text-to-Video Retrieval. ACM Transactions on Multimedia Computing, Communications, and Applications. DOI: 10.1145/3663571. Online publication date: 3-May-2024.
  • (2024) Backdoor Two-Stream Video Models on Federated Learning. ACM Transactions on Multimedia Computing, Communications, and Applications. DOI: 10.1145/3651307. Online publication date: 7-Mar-2024.
  • (2024) Transform-Equivariant Consistency Learning for Temporal Sentence Grounding. ACM Transactions on Multimedia Computing, Communications, and Applications 20(4), 1-19. DOI: 10.1145/3634749. Online publication date: 11-Jan-2024.
  • (2024) Temporally Language Grounding With Multi-Modal Multi-Prompt Tuning. IEEE Transactions on Multimedia 26, 3366-3377. DOI: 10.1109/TMM.2023.3310282. Online publication date: 2024.
  • (2024) Contrastive topic-enhanced network for video captioning. Expert Systems with Applications 237, 121601. DOI: 10.1016/j.eswa.2023.121601. Online publication date: Mar-2024.
  • (2023) Fine-Grained Text-to-Video Temporal Grounding from Coarse Boundary. ACM Transactions on Multimedia Computing, Communications, and Applications 19(5), 1-21. DOI: 10.1145/3579825. Online publication date: 16-Mar-2023.
  • (2023) Keyword-Based Diverse Image Retrieval by Semantics-aware Contrastive Learning and Transformer. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1262-1272. DOI: 10.1145/3539618.3591705. Online publication date: 19-Jul-2023.
  • (2023) Temporal Sentence Grounding in Videos: A Survey and Future Directions. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(8), 10443-10465. DOI: 10.1109/TPAMI.2023.3258628. Online publication date: 1-Aug-2023.
  • (2023) Keyword-Based Diverse Image Retrieval With Variational Multiple Instance Graph. IEEE Transactions on Neural Networks and Learning Systems 34(12), 10528-10537. DOI: 10.1109/TNNLS.2022.3168431. Online publication date: Dec-2023.
  • (2023) Multi-Modal Cross-Domain Alignment Network for Video Moment Retrieval. IEEE Transactions on Multimedia 25, 7517-7532. DOI: 10.1109/TMM.2022.3222965. Online publication date: 1-Jan-2023.
