A Survey on Active Deep Learning: From Model Driven to Data Driven

Published: 13 September 2022

Abstract

    Which samples should be labelled in a large dataset is one of the most important problems in training deep learning models. A variety of active sample selection strategies related to deep learning have been proposed in the literature. We define these as Active Deep Learning (ADL) only if the predictor or the selector is a deep model, where the basic learner is called the predictor and the labelling scheme is called the selector. In this survey, we categorize ADL into model-driven ADL and data-driven ADL according to whether the selector is model driven or data driven, and we introduce the distinct characteristics of each type. We summarize three fundamental factors in the design of a selector and point out that, with the development of deep learning, the selector in ADL is also moving from model driven to data driven. The advantages and disadvantages of data-driven and model-driven ADL are thoroughly analyzed, and the sub-classes of each are summarized and discussed in detail. Finally, we survey the trend of ADL from model driven to data driven.
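    The predictor/selector split described above can be illustrated with a minimal pool-based active learning loop. This is a sketch, not a method from the survey: here the predictor is a simple nearest-centroid classifier and the selector is a model-driven least-confidence rule; in ADL either role would be played by a deep model.

    ```python
    # Illustrative sketch of a pool-based active learning round.
    # Predictor: nearest-centroid classifier (stand-in for a deep model).
    # Selector:  model-driven least-confidence query rule.
    import numpy as np

    def train_predictor(X, y):
        """Fit one centroid per class (stand-in for training the predictor)."""
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def predict_proba(model, X):
        """Class probabilities via a softmax over negative centroid distances."""
        d = np.stack([np.linalg.norm(X - mu, axis=1) for mu in model.values()],
                     axis=1)
        e = np.exp(-d)
        return e / e.sum(axis=1, keepdims=True)

    def least_confidence_select(model, X_pool, k):
        """Selector: query the k pool samples whose top-class probability
        is lowest, i.e. the samples the predictor is least sure about."""
        confidence = predict_proba(model, X_pool).max(axis=1)
        return np.argsort(confidence)[:k]

    # One round on toy data: two labelled clusters and a small unlabelled pool.
    X_lab = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0], [4.0, 1.0]])
    y_lab = np.array([0, 0, 1, 1])
    X_pool = np.array([[0.2, 0.5],   # clearly class 0
                       [3.8, 0.5],   # clearly class 1
                       [2.0, 0.5]])  # midway: most ambiguous

    model = train_predictor(X_lab, y_lab)
    query = least_confidence_select(model, X_pool, k=1)
    print(query)  # the midway point (index 2) is selected for labelling
    ```

    In a full ADL loop, the queried samples would be sent to an annotator, appended to the labelled set, and the predictor retrained; a data-driven selector would instead learn this query rule from data rather than derive it from the predictor's uncertainty.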



      Published In

      ACM Computing Surveys, Volume 54, Issue 10s
      January 2022, 831 pages
      ISSN: 0360-0300
      EISSN: 1557-7341
      DOI: 10.1145/3551649

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 13 September 2022
      Online AM: 23 March 2022
      Accepted: 03 January 2022
      Revised: 27 November 2021
      Received: 29 May 2021
      Published in CSUR Volume 54, Issue 10s


      Author Tags

      1. Active learning
      2. data-driven
      3. model-driven
      4. labelling samples

      Qualifiers

      • Survey
      • Refereed

      Funding Sources

      • NSFC
