
A Survey of Deep Active Learning

Published: 08 October 2021
Abstract

Active learning (AL) attempts to maximize a model’s performance gain while annotating the fewest samples possible. Deep learning (DL), by contrast, is greedy for data: it requires a large supply of data to optimize a massive number of parameters and learn to extract high-quality features. In recent years, the rapid development of internet technology has ushered in an era of information abundance characterized by massive amounts of available data; as a result, DL has attracted significant attention from researchers and has developed rapidly. Compared with DL, however, researchers have shown relatively little interest in AL. This is mainly because, before the rise of DL, traditional machine learning required relatively few labeled samples, meaning that early AL was rarely accorded the value it deserves. Although DL has made breakthroughs in various fields, most of this success is due to large publicly available annotated datasets. Acquiring a large number of high-quality annotations, however, consumes a great deal of manpower, making it infeasible in fields that require high levels of expertise (such as speech recognition, information extraction, and medical imaging). Therefore, AL is gradually coming to receive the attention it is due.
It is therefore natural to investigate whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL. Such investigations have given rise to deep active learning (DeepAL). Although research on this topic is quite abundant, there has not yet been a comprehensive survey of DeepAL-related work; this article aims to fill that gap. We provide a formal classification method for the existing work, along with a comprehensive and systematic overview. In addition, we analyze and summarize the development of DeepAL from an application perspective. Finally, we discuss open questions and problems associated with DeepAL and suggest some possible directions for future development.
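To make the pool-based query loop concrete, the sketch below shows a minimal active-learning round using least-confidence uncertainty sampling. It is an illustrative toy example, not a method proposed in this survey: the classifier, dataset, seed size, and budget of 20 queries are all arbitrary choices for the sketch.

```python
# Minimal pool-based active-learning loop with least-confidence sampling.
# Illustrative sketch only; a DeepAL system would replace the linear model
# with a deep network and the synthetic oracle with human annotators.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

labeled = [int(i) for i in rng.choice(len(X), size=10, replace=False)]  # seed set
pool = [i for i in range(len(X)) if i not in labeled]                   # unlabeled pool

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # annotation budget: 20 queries
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Least confidence: query the sample whose top-class probability is lowest.
    query = pool[int(np.argmin(probs.max(axis=1)))]
    labeled.append(query)  # the "oracle" supplies y[query]
    pool.remove(query)

print(len(labeled))  # 30 labeled samples after the loop
```

Each iteration retrains on the labeled set and spends one annotation on the pool sample the model is least sure about, which is the cost-saving behavior AL aims for.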

    References

    [1]
    Mohammed Abdel-Wahab and Carlos Busso. 2019. Active learning for speech emotion recognition using deep neural network. In Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction. IEEE, 1–7.
    [2]
    Gediminas Adomavicius and Alexander Tuzhilin. 2005. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering 17, 6 (2005), 734–749.
    [3]
    Charu C. Aggarwal, Xiangnan Kong, Quanquan Gu, Jiawei Han, and Philip S. Yu. 2014. Active learning: A survey. In Data Classification: Algorithms and Applications. CRC Press, 571–606.
    [4]
    Hamed Habibi Aghdam, Abel Gonzalez-Garcia, Antonio M. López, and Joost van de Weijer. 2019. Active learning for deep detection neural networks. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. IEEE, 3671–3679.
    [5]
    Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, 3874–3884.
    [6]
    Saeed S. Alahmari, Dmitry B. Goldgof, Lawrence O. Hall, and Peter R. Mouton. 2019. Automatic cell counting using active deep learning and unbiased stereology. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics. IEEE, 1708–1713.
    [7]
    Bang An, Wenjun Wu, and Huimin Han. 2018. Deep active learning for text classification. In Proceedings of the 2nd International Conference on Vision, Image and Signal Processing. ACM, 22:1–22:6.
    [8]
    Olov Andersson, Mariusz Wzorek, and Patrick Doherty. 2017. Deep learning quadcopter control via risk-aware active learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California. AAAI Press, 3812–3818.
    [9]
    Ralph G. Andrzejak, Klaus Lehnertz, Florian Mormann, Christoph Rieke, Peter David, and Christian E. Elger. 2001. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Physical Review E 64, 6 (2001), 061907.
    [10]
    Dana Angluin. 1988. Queries and concept learning. Machine Learning 2, 4 (1988), 319–342.
    [11]
    Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In Proceedings of the 2015 IEEE International Conference on Computer Vision. IEEE Computer Society, 2425–2433.
    [12]
    Samuel G. Armato III, Geoffrey McLennan, Luc Bidaut, Michael F. McNitt-Gray, Charles R. Meyer, Anthony P. Reeves, Binsheng Zhao, Denise R. Aberle, Claudia I. Henschke, Eric A. Hoffman, Ella A. Kazerooni, Heber MacMahon, Edwin J. R. Van Beeke, David Yankelevitz, Alberto M. Biancardi, Peyton H. Bland, Matthew S. Brown, Roger M. Engelmann, Gary E. Laderach, Daniel Max, Richard C. Pais, David P. Y. Qing, Rachael Y. Roberts, Amanda R. Smith, Adam Starkey, Poonam Batrah, Philip Caligiuri, Ali Farooqi, Gregory W. Gladish, C. Matilda Jude, Reginald F. Munden, Iva Petkovska, Leslie E. Quint, Lawrence H. Schwartz, Baskaran Sundaram, Lori E. Dodd, Charles Fenimore, David Gur, Nicholas Petrick, John Freymann, Justin Kirby, Brian Hughes, Alessi Vande Casteele, Sangeeta Gupte, Maha Sallamm, Michael D. Heath, Michael H. Kuhn, Ekta Dharaiya, Richard Burns, David S. Fryd, Marcos Salganicoff, Vikram Anand, Uri Shreter, Stephen Vastagh, and Barbara Y. Croft. 2011. The lung image database consortium (LIDC) and image database resource initiative (IDRI): A completed reference database of lung nodules on CT scans. Medical Physics 38, 2 (2011), 915–931.
    [13]
    Nabiha Asghar, Pascal Poupart, Xin Jiang, and Hang Li. 2017. Deep active learning for dialogue generation. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics, 78–83.
    [14]
    Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gradient lower bounds. In Proceedings of the 8th International Conference on Learning Representations.
    [15]
    Marc Bachlin, Meir Plotnik, Daniel Roggen, Inbal Maidan, Jeffrey M. Hausdorff, Nir Giladi, and Gerhard Troster. 2009. Wearable assistant for Parkinson’s disease patients with the freezing of gait symptom. IEEE Transactions on Information Technology in Biomedicine 14, 2 (2009), 436–446.
    [16]
    Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. 2017. Designing neural network architectures using reinforcement learning. In Proceedings of the 5th International Conference on Learning Representations.
    [17]
    Mariaflorina Balcan, Alina Beygelzimer, and John Langford. 2009. Agnostic active learning. Journal of Computer and System Sciences 75, 1 (2009), 78–89.
    [18]
    Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James R. Glass. 2019. Identifying and controlling important neurons in neural machine translation. In Proceedings of the 7th International Conference on Learning Representations.
    [19]
    William H. Beluch, Tim Genewein, Andreas Nürnberger, and Jan M. Köhler. 2018. The power of ensembles for active learning in image classification. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 9368–9377.
    [20]
    Shai Bendavid, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning 79, 1 (2010), 151–175.
    [21]
    Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2006. Greedy layer-wise training of deep networks. In Proceedings of the 19th International Conference on Neural Information Processing Systems (2006), 153–160.
    [22]
    Sreyasee Das Bhattacharjee, Ashit Talukder, and Bala Venkatram Balantrapu. 2017. Active learning based news veracity detection with feature weighting and deep-shallow fusion. In Proceedings of the IEEE International Conference on Big Data (2017), 556–565.
    [23]
    Mustafa Bilgic and Lise Getoor. 2009. Link-based active learning. In Proceedings of the NIPS Workshop on Analyzing Networks and Learning with Graphs.
    [24]
    John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. The Association for Computational Linguistics.
    [25]
    Michael Bloodgood. 2010. Chris Callison-Burch: Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation. ACL, 854–864.
    [26]
    Michael Bloodgood and K. Vijay-Shanker. 2009. Taking into account the differences between actively and passively acquired data: The case of active learning with support vector machines for imbalanced datasets. In Proceedings of the Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics. The Association for Computational Linguistics, 137–140.
    [27]
    Michael Bloodgood and K. Vijay-Shanker. 2009. A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping. CoNLL, 39–47.
    [28]
    Erik Bochinski, Ghassen Bacha, Volker Eiselein, Tim J. W. Walles, Jens C. Nejstgaard, and Thomas Sikora. 2018. Deep active learning for in situ plankton classification. In Proceedings of the Pattern Recognition and Information Forensics - ICPR 2018 International Workshops, CVAUI, IWCF, and MIPPSNAZ. Zhang, D. Suter, Y. Tian, A. Branzan Albu, N. Sidére, H. Jair Escalante (Eds.), Lecture Notes in Computer Science, Vol. 11188. Springer, 5–15.
    [29]
    Klaus Brinker. 2003. Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th International Conference on Machine Learning. AAAI Press, 59–66.
    [30]
    Samuel Budd, Emma C. Robinson, Bernhard Kainz. 2021. A survey on active learning and human-in-the-loop deep learning for medical image analysis. Medical Image Anal. 71 (2021), 102062.
    [31]
    Alex Burka and Katherine J. Kuchenbecker. 2017. How much haptic surface data is enough? In Proceedings of the 2017 AAAI Spring Symposia. AAAI Press.
    [32]
    Sylvain Calinon, Florent Guenter, and Aude Billard. 2007. On learning, representing, and generalizing a task in a humanoid robot. IEEE Transactions on Systems, Man, and Cybernetics Part B 37, 2 (2007), 286–298.
    [33]
    Trevor Campbell and Tamara Broderick. 2019. Automated scalable bayesian inference via hilbert coresets. Journal of Machine Learning Research 20, 15 (2019), 1–38.
    [34]
    Haw-Shiuan Chang, Shankar Vembu, Sunil Mohan, Rheeya Uppaal, and Andrew McCallum. 2020. Using error decay prediction to overcome practical issues of deep active learning for named entity recognition. Machine Learning 109, 9-10 (2020), 1749–1778.
    [35]
    Bor-Chun Chen, Chu-Song Chen, and Winston H. Hsu. 2014. Cross-age reference coding for age-invariant face recognition and retrieval. In Proceedings of the Computer Vision - ECCV 2014-13th European Conference. D. Fleet, T. Pajdla, B. Schiele, T. Tuytelaars (Eds.), Lecture Notes in Computer Science, Vol. 8694. Springer, 768–783.
    [36]
    Xuhui Chen, Jinlong Ji, Tianxi Ji, and Pan Li. 2018. Cost-sensitive deep active learning for epileptic seizure detection. In Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics. ACM, 226–235.
    [37]
    Anfeng Cheng, Chuan Zhou, Hong Yang, Jia Wu, Lei Li, Jianlong Tan, and Li Guo. 2019. Deep active learning for anchor user prediction. In Proceedings of the 28th International Joint Conference on Artificial Intelligence. 2151–2157.
    [38]
    Kashyap Chitta, Jose M. Alvarez, Elmar Haussmann, and Clement Farabet. 2019. Training data distribution search with ensemble active learning. arXiv:1905.12737. Retrieved from https://arxiv.org/abs/1905.12737.
    [39]
    Kashyap Chitta, Jose M. Alvarez, and Adam Lesnikowski. 2018. Large-scale visual active learning with deep probabilistic ensembles. arXiv:1811.03575. Retrieved from https://arxiv.org/abs/1811.03575.
    [40]
    Diane J. Cook and Maureen Schmitter-Edgecombe. 2009. Assessing the quality of activities in a smart environment. Methods of Information in Medicine 48, 5 (2009), 480.
    [41]
    Ido Dagan and Sean P. Engelson. 1995. Committee-based sampling for training probabilistic classifiers. In Proceedings of the 12th International Conference on Machine Learning, 150–157.
    [42]
    Navneet Dalal and Bill Triggs. 2005. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 886–893.
    [43]
    Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics. Association for Computational Linguistics, 76–87.
    [44]
    Sanjoy Dasgupta. 2011. Two faces of active learning. Theoretical Computer Science 412, 19 (2011), 1767–1781.
    [45]
    Sajib Dasgupta and Vincent Ng. 2009. Mine the easy, classify the hard: A semi-supervised approach to automatic sentiment classification. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLPKeh-Yih Su, Jian Su, and Janyce Wiebe (Eds.). The Association for Computer Linguistics, 701–709. Retrieved from https://www.aclweb.org/anthology/P09-1079/.
    [46]
    Diana L. Delibaltov, Utkarsh Gaur, Jennifer Kim, Matthew Kourakis, Erin Newman-Smith, William Smith, Samuel A. Belteton, Daniel Szymanski, and B. S. Manjunath. 2016. CellECT: Cell evolution capturing tool. BMC Bioinformatics 17, 1 (2016), 88.
    [47]
    Diana L. Delibaltov, Pratim Ghosh, Volkan Rodoplu, Michael Veeman, William Smith, and B. S. Manjunath. 2013. A linear program formulation for the segmentation of ciona membrane volumes. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention - MICCAI 2013-16th International Conference, Kensaku Mori, Ichiro Sakuma, Yoshinobu Sato, Christian Barillot, and Nassir Navab (Eds.). Lecture Notes in Computer Science, Vol. 8149. Springer, 444–451.
    [48]
    Cheng Deng, Yumeng Xue, Xianglong Liu, Chao Li, and Dacheng Tao. 2019. Active transfer learning network: A unified deep joint spectral–spatial feature learning model for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing 57, 3 (2019), 1741–1754.
    [49]
    Jia Deng, Alex Berg, Sanjeev Satheesh, Hao Su, Aditya Khosla, and Fei-Fei Li. 2012. Large scale visual recognition challenge. Retrieved August 25, 2021 from www. image-net. org/challenges/LSVRC/2012. 1 (2012).
    [50]
    Li Deng. 2014. A tutorial survey of architectures, algorithms, and applications for deep learning. APSIPA Transactions on Signal and Information Processing 3, 2 (2014), 1–29.
    [51]
    Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. NCBI disease corpus: A resource for disease name recognition and concept normalization. Journal of Biomedical Informatics 47 (2014), 1–10.
    [52]
    Piotr Dollár, Christian Wojek, Bernt Schiele, and Pietro Perona. 2012. Pedestrian detection: An evaluation of the state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence 34, 4 (2012), 743–761.
    [53]
    Jeff Donahue and Karen Simonyan. 2019. Large scale adversarial representation learning. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019. 10541–10551.
    [54]
    David L. Donoho. 2000. High-dimensional data analysis: The curses and blessings of dimensionality. AMS Math Challenges Lecture 1, 2000 (2000), 32.
    [55]
    Baolin Du, Qi Qi, Han Zheng, Yue Huang, and Xinghao Ding. 2018. Breast cancer histopathological image classification via deep active learning and confidence boosting. In Proceedings of the Artificial Neural Networks and Machine Learning - ICANN 2018-27th International Conference on Artificial Neural Networks, Vera Kurkova, Yannis Manolopoulos, Barbara Hammer, Lazaros S. Iliadis, and Ilias Maglogiannis (Eds.). Lecture Notes in Computer Science, Vol. 11140. Springer, 109–116.
    [56]
    Xuefeng Du, Dexing Zhong, and Huikai Shao. 2019. Building an active palmprint recognition system. In Proceedings of the 2019 IEEE International Conference on Image Processing. IEEE, 1685–1689.
    [57]
    Melanie Ducoffe and Frédéric Precioso. 2018. Adversarial active learning for deep networks: A margin based approach. arXiv:1802.09841. Retrieved from https://arxiv.org/abs/1802.09841.
    [58]
    Rana El Kaliouby and Peter Robinson. 2004. Mind reading machines: Automated inference of cognitive mental states from video. In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, Vol. 1. IEEE, 682–688.
    [59]
    Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. 2010. The pascal visual object classes (voc) challenge. International Journal of Computer Vision 88, 2 (2010), 303–338.
    [60]
    Caspar J. Fall, A. Törcsvári, K. Benzineb, and G. Karetka. 2003. Automated categorization in the international patent classification. SIGIR Forum 37, 1 (2003), 10–25.
    [61]
    Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 595–605.
    [62]
    Reza Zanjirani Farahani and Masoud Hekmatfar. 2009. Facility Location: Concepts, Models, Algorithms and Case Studies. Springer.
    [63]
    Di Feng, Xiao Wei, Lars Rosenbaum, Atsuto Maki, and Klaus Dietmayer. 2019. Deep active learning for efficient training of a LiDAR 3D object detector. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium. IEEE, 667–674.
    [64]
    Rosa L. Figueroa, Qing Zeng-Treitler, Long H. Ngo, Sergey Goryachev, and Eduardo P. Wiechmann. 2012. Active learning for clinical text classification: Is it better than random sampling?Journal of the American Medical Informatics Association 19, 5 (2012), 809–816.
    [65]
    Frederic Brenton Fitch. 1944. McCulloch warren s. and pitts walter. a logical calculus of the ideas immanent in nervous activity. bulletin of mathematical biophysics, vol. 5 (1943), pp. 115–133.Journal of Symbolic Logic 9, 2 (1944), 49–50.
    [66]
    Jonathan Folmsbee, Xulei Liu, Margaret Brandwein-Weber, and Scott Doyle. 2018. Active deep learning: Improved training efficiency of convolutional neural networks for tissue classification in oral cavity cancer. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging. IEEE, 770–773.
    [67]
    Tavis Forrester, William J. McShea, R. W. Keys, Robert Costello, Megan Baker, and Arielle Parsons. 2013. eMammal–citizen science camera trapping as a solution for broad-scale, long-term monitoring of wildlife populations. In Proceedings of the 98th ESA Annual Convention 2013 Sustainable Pathways: Learning from the Past and Shaping the Future (2013).
    [68]
    Marguerite Frank, Philip Wolfe1956. An algorithm for quadratic programming. Naval Research Logistics Quarterly 3, 1-2 (1956), 95–110.
    [69]
    Alexander Freytag, Erik Rodner, and Joachim Denzler. 2014. Selecting influential examples: Active learning with expected model output changes. In Proceedings of the Computer Vision - ECCV 2014-13th European Conference.D. Fleet, T. Pajdla, B. Schiele, T. Tuytelaars (Eds.), Lecture Notes in Computer Science, Vol. 8692. Springer, 562–577.
    [70]
    Yarin Gal and Zoubin Ghahramani. 2015. Bayesian convolutional neural networks with bernoulli approximate variational inference. arXiv:1506.02158. Retrieved from https://arxiv.org/abs/1506.02158.
    [71]
    Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning.1050–1059.
    [72]
    Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning. 1183–1192.
    [73]
    Vijay Garla, Caroline Taylor, and Cynthia Brandt. 2013. Semi-supervised clinical text classification with Laplacian SVMs: An application to cancer case management. Journal of Biomedical Informatics 46, 5 (2013), 869–875.
    [74]
    Utkarsh Gaur, Matthew Kourakis, Erin Newman-Smith, William Smith, and B. S. Manjunath. 2016. Membrane segmentation via active learning with deep networks. In Proceedings of the 2016 IEEE International Conference on Image Processing. IEEE, 1943–1947.
    [75]
    Yonatan Geifman and Ran El-Yaniv. 2017. Deep active learning over the long tail. arXiv:1711.00941. Retrieved from https://arxiv.org/abs/1711.00941.
    [76]
    Yonatan Geifman and Ran El-Yaniv. 2019. Deep active learning with a neural architecture search. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019. 5974–5984.
    [77]
    Andreas Geiger, Philip Lenz, and Raquel Urtasun. [2012]. Are we ready for autonomous driving. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. 3354–3361.
    [78]
    Andreas Geiger, Philip Lenz, and Raquel Urtasun. 2012. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 3354–3361.
    [79]
    Daniel Gissin and Shai Shalev-Shwartz. 2019. Discriminative active learning. arXiv:1907.06347. Retrieved from https://arxiv.org/abs/1907.06347.
    [80]
    Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014. 2672–2680.
    [81]
    Gregory Griffin, Alex Holub, and Pietro Perona. 2007. Caltech-256 object category dataset. Retrieved on August 25, 2021 from http://www.vision.caltech.edu/Image_Datasets/Caltech256/.
    [82]
    Denis A. Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, and Sotaro Tsukizawa. 2020. Deep active learning for biased datasets via fisher kernel self-supervision. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 9038–9046.
    [83]
    Gautham Krishna Gudur, Prahalathan Sundaramoorthy, and Venkatesh Umaashankar. 2019. ActiveHARNet: Towards on-device deep bayesian active learning for human activity recognition. arXiv:1906.00108. Retrieved from https://arxiv.org/abs/1906.00108.
    [84]
    Yuhong Guo. 2010. Active instance sampling via matrix partition. In Proceedings of the Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010.802–810.
    [85]
    Kazim Hanbay. 2019. Deep neural network based approach for ECG classification using hybrid differential features and active learning. Iet Signal Processing 13, 2 (2019), 165–175.
    [86]
    Manuel Haußmann, Fred A. Hamprecht, and Melih Kandemir. 2019. Deep active learning with adaptive acquisition. In Proceedings of the 28th International Joint Conference on Artificial Intelligence. 2470–2476.
    [87]
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 770–778.
    [88]
    Tao He, Xiaoming Jin, Guiguang Ding, Lan Yi, and Chenggang Yan. 2019. Towards better uncertainty sampling: Active learning with multiple views for deep convolutional neural network. In Proceedings of the IEEE International Conference on Multimedia and Expo. IEEE, 1360–1365.
    [89]
    Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. 2019. Bag of tricks for image classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE, 558–567.
    [90]
    José Miguel Hernández-Lobato and Ryan P. Adams. 2015. Probabilistic backpropagation for scalable learning of bayesian neural networks. In Proceedings of the 32nd International Conference on Machine Learning. 1861–1869.
    [91]
    Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. 2006. A fast learning algorithm for deep belief nets. Neural Computation 18, 7 (2006), 1527–1554.
    [92]
    Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580. Retrieved from https://arxiv.org/abs/1207.0580.
    [93]
    Martin Hirzer, Csaba Beleznai, Peter M. Roth, and Horst Bischof. 2011. Person re-identification by descriptive and discriminative classification. In Proceedings of the Scandinavian Conference on Image Analysis. Springer, 91–102.
    [94]
    Steven C. H. Hoi, Rong Jin, Jianke Zhu, and Michael R. Lyu. 2006. Batch mode active learning and its application to medical image classification. In Proceedings of the 23rd International Conference on Machine Learning. ACM, 417–424.
    [95]
    H. M. Sajjad Hossain, M. D. Abdullah Al Haiz Khan, and Nirmalya Roy. 2018. DeActive: Scaling activity recognition with active deep learning. In Proceedings of the ACM on Interactive, Mobile, Wearable Ubiquitous Technologies 2, 2 (2018), 66:1–66:23.
    [96]
    H. M. Sajjad Hossain and Nirmalya Roy. 2019. Active deep learning for activity recognition with context aware annotator selection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 1862–1870.
    [97]
    Neil Houlsby, Ferenc Huszar, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian active learning for classification and preference learning. arXiv:1112.5745. Retrieved from https://arxiv.org/abs/1112.5745.
    [98]
    Yue Huang, Zhenwei Liu, Minghui Jiang, Xian Yu, and Xinghao Ding. 2020. Cost-effective vehicle type recognition in surveillance images with deep active learning and web data. IEEE Transactions on Intelligent Transportation Systems 21, 1 (2020), 79–86.
    [99]
    Jonathan H. Huggins, Trevor Campbell, and Tamara Broderick. 2016. Coresets for scalable Bayesian logistic regression. In Proceedings of the 30th International Conference on Neural Information Processing Systems, 4080–4088.
    [100]
    Ahmed Hussein, Mohamed Medhat Gaber, and Eyad Elyan. 2016. Deep active learning for autonomous navigation. In Proceedings of the Engineering Applications of Neural Networks - 17th International Conference Vol. 629. Springer, 3–17.
    [101]
    Rania Ibrahim, Noha A. Yousri, Mohamed A. Ismail, and Nagwa M. El-Makky. 2014. Multi-level gene/MiRNA feature selection using deep belief nets and active learning. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 3957–3960.
    [102]
    Prateek Jain and Ashish Kapoor. 2009. Active learning for large multi-class problems. In Proceedings of the Active learning for large multi-class problems (2009), 762–769.
    [103]
    David Janz, Jos van der Westhuizen, and José Miguel Hernández-Lobato. 2017. Actively learning what makes a discrete sequence valid. arXiv:1708.04465. Retrieved from https://arxiv.org/abs/1708.04465.
    [104]
    Khaled Jedoui, Ranjay Krishna, Michael Bernstein, and Fei-Fei Li. 2019. Deep bayesian active learning for multiple correct outputs. arXiv:1912.01119. Retrieved from https://arxiv.org/abs/1912.01119.
    [105]
    Michael I. Jordan. 1986. Serial order: A parallel distributed processing approach. Advances in Psychology 121 (1986), 471–495.
    [106]
    Ajay Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. 2009. Multi-class active learning for image classification. In Proceedings of the Conference on Computer Vision and Pattern Recognition (2009), 2372–2379.
    [107]
    J. Ajay Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. 2010. Multi-class batch-mode active learning for image classification. In Proceedings of the IEEE International Conference on Robotics and Automation (2010), 1873–1878.
    [108]
    Takeo Kanade, Jeffrey F. Cohn, and Yingli Tian. 2000. Comprehensive database for facial expression analysis. In Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition. IEEE, 46–53.
    [109]
    Jungo Kasai, Kun Qian, Sairam Gurajada, Yunyao Li, and Lucian Popa. 2019. Low-resource deep entity resolution with transfer and active learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (2019), 5851–5861.
    [110]
    Benjamin Kellenberger, Diego Marcos, Sylvain Lobry, and Devis Tuia. 2019. Half a percent of labels is enough: Efficient animal detection in UAV imagery using deep CNNs and active learning. IEEE Transactions on Geoscience and Remote Sensing 57, 12 (2019), 9524–9533.
    [111]
    Alex Kendall, Vijay Badrinarayanan, and Roberto Cipolla. 2017. Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. In Proceedings of the British Machine Vision Conference 2017. BMVA Press.
    [112]
    Kwanyoung Kim, Dongwon Park, Kwang In Kim, and Se Young Chun. 2020. Task-aware variational adversarial active learning. arXiv:2002.04709. Retrieved from https://arxiv.org/abs2002.04709.
    [113]
    Ross D. King, Kenneth E. Whelan, Ffion M. Jones, Philip G. K. Reiser, Christopher H. Bryant, Stephen Muggleton, Douglas B. Kell, and Stephen G. Oliver. 2004. Functional genomic hypothesis generation and experimentation by a robot scientist. Nature 427, 6971 (2004), 247–252.
    [114]
    Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations.
    [115]
    Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. 2019. BatchBALD: Efficient and diverse batch acquisition for deep bayesian active learning. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019. 7024–7035.
    [116]
    Xiangnan Kong, Jiawei Zhang, and Philip S. Yu. 2013. Inferring anchor links across multiple heterogeneous social networks. In Proceedings of the 22nd ACM International Conference on Information and Knowledge Management. ACM, 179–188.
    [117]
    Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123, 1 (2017), 32–73.
    [118]
Vikram Krishnamurthy. 2002. Algorithms for optimal scheduling and management of hidden Markov model sensors. IEEE Transactions on Signal Processing 50, 6 (2002), 1382–1397.
    [119]
Alex Krizhevsky and Geoffrey Hinton. 2009. Learning multiple layers of features from tiny images. Technical Report TR-2009, University of Toronto, Toronto.
    [120]
    Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, 2 (2012), 1097–1105.
    [121]
    M. P. Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-paced learning for latent variable models. In Proceedings of the Advances in Neural Information Processing Systems (2010), 1189–1197.
    [122]
    Jennifer R. Kwapisz, Gary M. Weiss, and Samuel Moore. 2010. Activity recognition using cell phone accelerometers. SIGKDD Explorations 12, 2 (2010), 74–82.
    [123]
Bogdan Kwolek, Michal Koziarski, Andrzej Bukala, Zbigniew Antosz, Boguslaw Olborski, Pawel Wąsowicz, Jakub Swadźba, and Boguslaw Cyganek. 2019. Breast cancer classification on histopathological images affected by data imbalance using active learning and deep convolutional neural network. In Proceedings of the Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions. 299–312.
    [124]
    Sejeong Kwon, Meeyoung Cha, and Kyomin Jung. 2017. Rumor detection over varying time windows. PloS One 12, 1 (2017), e0168344.
    [125]
    Leah S. Larkey. 1999. A patent search and classification system. In Proceedings of the 4th ACM Conference on Digital Libraries. ACM, 179–187.
    [126]
Yann LeCun, Yoshua Bengio, and Geoffrey E. Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436–444.
    [127]
Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural Computation 1, 4 (1989), 541–551.
    [128]
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 11 (1998), 2278–2324.
    [129]
Byungjae Lee and Kyunghyun Paeng. 2018. A robust and effective approach towards accurate metastasis detection and pn-stage classification in breast cancer. In Proceedings of the Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, Alejandro F. Frangi, Julia A. Schnabel, Christos Davatzikos, Carlos Alberola-López, and Gabor Fichtinger (Eds.). Lecture Notes in Computer Science. Springer, 841–850.
    [130]
    Christian Leibig, Vaneeda Allken, Murat Seckin Ayhan, Philipp Berens, and Siegfried Wahl. 2017. Leveraging uncertainty information from deep neural networks for disease detection. Scientific Reports 7, 1 (2017), 17816–17816.
    [131]
David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 3–12.
    [132]
    Ya Li, Keze Wang, Lin Nie, and Qing Wang. 2017. Face recognition via heuristic deep active learning. In Proceedings of the Chinese Conference on Biometric Recognition, 97–107.
    [133]
    Jianzhe Lin, Liang Zhao, Shuying Li, Rabab K. Ward, and Z. Jane Wang. 2018. Active-learning-incorporated deep transfer learning for hyperspectral image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11, 11 (2018), 4048–4062.
    [134]
Xiao Lin and Devi Parikh. 2017. Active learning for visual question answering: An empirical study. arXiv:1711.01732. Retrieved from https://arxiv.org/abs/1711.01732.
    [135]
    Peng Liu, Hui Zhang, and Kie B. Eom. 2017. Active deep learning for classification of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 10, 2 (2017), 712–724.
    [136]
    Zimo Liu, Jingya Wang, Shaogang Gong, Huchuan Lu, and Dacheng Tao. 2019. Deep reinforcement active learning for human-in-the-loop person re-identification. In Proceedings of the International Conference on Computer Vision, 6122–6131.
    [137]
    Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440.
    [138]
    Reza Lotfian and Carlos Busso. 2017. Building naturalistic emotionally balanced speech corpus by retrieving emotional speech from existing podcast recordings. IEEE Transactions on Affective Computing 10, 4 (2017), 471–483.
    [139]
    David G. Lowe. 1999. Object recognition from local scale-invariant features. In Proceedings of the International Conference on Computer Vision. 1150–1157.
    [140]
    Xiaoming Lv, Fajie Duan, Jiajia Jiang, Xiao Fu, and Lin Gan. 2020. Deep active learning for surface defect detection. Sensors 20, 6 (2020), 1650.
    [141]
    Ramon Maldonado and Sanda M. Harabagiu. 2019. Active deep learning for the identification of concepts and relations in electroencephalography reports. Journal of Biomedical Informatics 98, Suppl 2 (2019), 103265.
    [142]
R. G. Mark, P. S. Schluter, G. Moody, P. Devlin, and D. Chernoff. 1982. An annotated ECG database for evaluating arrhythmia detectors. IEEE Transactions on Biomedical Engineering 29 (1982), 600.
    [143]
Giovanna Martínez-Arellano and Svetan M. Ratchev. 2019. Towards an active learning approach to tool condition monitoring with Bayesian deep learning. In Proceedings of the 33rd International ECMS Conference on Modelling and Simulation. 223–229.
    [144]
    Muhammad Mateen, Junhao Wen, Nasrullah, Sun Song, and Zhouping Huang. 2019. Fundus image classification using VGG-19 architecture with PCA and SVD. Symmetry 11, 1 (2019), 1.
    [145]
    Taylor R. Mauldin, Marc E. Canby, Vangelis Metsis, Anne H. H. Ngu, and Coralys Cubero Rivera. 2018. SmartFall: A smartwatch-based fall detection system using deep learning. Sensors 18, 10 (2018), 3363.
    [146]
Ali Mottaghi and Serena Yeung. 2019. Adversarial representation active learning. arXiv:1912.09720. Retrieved from https://arxiv.org/abs/1912.09720.
    [147]
    Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In Proceedings of the 2018 International Conference on Management of Data, SIGMOD Conference 2018. ACM, 19–34.
    [148]
Prateek Munjal, Nasir Hayat, Munawar Hayat, Jamshid Sourati, and Shadab Khan. 2020. Towards robust and reproducible active learning using neural networks. arXiv:2002.09564. Retrieved from https://arxiv.org/abs/2002.09564.
    [149]
    Shikhar Murty, Patrick Verga, Luke Vilnis, Irena Radovanovic, and Andrew McCallum. 2018. Hierarchical losses and new resources for fine-grained entity typing and linking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 97–109.
    [150]
Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning.
    [151]
    Ju Gang Nam, Sunggyun Park, Eui Jin Hwang, Jong Hyuk Lee, Kwang Nam Jin, Kun Young Lim, Thienkai Huy Vu, Jae Ho Sohn, Sangheum Hwang, Jin Mo Goo, et al. 2019. Development and validation of deep learning–based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology 290, 1 (2019), 218–228.
    [152]
    Ali Bou Nassif, Ismail Shahin, Imtinan B. Attili, Mohammad Azzeh, and Khaled Shaalan. 2019. Speech recognition using deep neural networks: A systematic review. IEEE Access 7 (2019), 19143–19165.
    [153]
Hieu T. Nguyen and Arnold Smeulders. 2004. Active learning using pre-clustering. In Proceedings of the 21st International Conference on Machine Learning, 79.
    [154]
Mohammad Sadegh Norouzzadeh, Dan Morris, Sara Beery, Neel Joshi, Nebojsa Jojic, and Jeff Clune. 2019. A deep active learning system for species identification and counting in camera trap images. arXiv:1910.09716. Retrieved from https://arxiv.org/abs/1910.09716.
    [155]
    Augustus Odena, Christopher Olah, and Jonathon Shlens. 2017. Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70, 2642–2651.
    [156]
    Natalia Ostapuk, Jie Yang, and Philippe Cudre-Mauroux. 2019. ActiveLink: Deep active learning for link prediction in knowledge graphs. In Proceedings of the Web Conference on The World Wide Web Conference, 1398–1408.
    [157]
    Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. 79–86.
    [158]
    Mercedes Eugenia Paoletti, Juan Mario Haut, Rubén Fernández-Beltran, Javier Plaza, Antonio J. Plaza, Jun Yu Li, and Filiberto Pla. 2019. Capsule networks for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing 57, 4 (2019), 2145–2160.
    [159]
    Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le. 2019. SpecAugment: A simple data augmentation method for automatic speech recognition. In Proceedings of the 20th Annual Conference of the International Speech Communication Association. 2613–2617.
    [160]
    John P. Pestian, Chris Brew, Pawel Matykiewicz, D. J. Hovermale, Neil Johnson, K. Bretonnel Cohen, and Wlodzislaw Duch. 2007. A shared task involving multi-label classification of clinical free text. In Proceedings of the Workshop on BioNLP 2007: Biological, translational, and clinical language processing. Association for Computational Linguistics, 97–104.
    [161]
Jeff M. Phillips. 2016. Coresets and sketches. arXiv:1601.00617. Retrieved from https://arxiv.org/abs/1601.00617.
    [162]
Robert Pinsler, Jonathan Gordon, Eric T. Nalisnick, and José Miguel Hernández-Lobato. 2019. Bayesian batch active learning as sparse subset approximation. In Proceedings of the Advances in Neural Information Processing Systems 32. 6356–6367.
    [163]
    Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom M. Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 425–435.
    [164]
Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabás Póczos, and Tom M. Mitchell. 2019. Competence-based curriculum learning for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, 1162–1172.
    [165]
Remus Pop and Patric Fulop. 2018. Deep ensemble Bayesian active learning: Addressing the mode collapse issue in Monte Carlo dropout via ensembles. arXiv:1811.03897. Retrieved from https://arxiv.org/abs/1811.03897.
    [166]
    Ameya Prabhu, Charles Dognin, and Maneesh Singh. 2019. Sampling bias in deep active classification: An empirical study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, 4056–4066.
    [167]
    Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In Proceedings of the 17th Conference on Computational Natural Language Learning. ACL, 143–152.
    [168]
    Shalini Priya, Saharsh Singh, Sourav Kumar Dandapat, Kripabandhu Ghosh, and Joydeep Chandra. 2019. Identifying infrastructure damage during earthquake using deep active learning. In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 551–552.
    [169]
Yao Qin, Nicholas Carlini, Garrison W. Cottrell, Ian J. Goodfellow, and Colin Raffel. 2019. Imperceptible, robust, and targeted adversarial examples for automatic speech recognition. In Proceedings of the 36th International Conference on Machine Learning. PMLR, 5231–5240.
    [170]
Zhenshen Qu, Jingda Du, Yong Cao, Qiuyu Guan, and Pengbo Zhao. 2020. Deep active learning for remote sensing object detection. arXiv:2003.08793. Retrieved from https://arxiv.org/abs/2003.08793.
    [171]
M. M. Al Rahhal, Yakoub Bazi, Haikel Alhichri, Naif Alajlan, Farid Melgani, and Ronald R. Yager. 2016. Deep learning approach for active classification of electrocardiogram signals. Information Sciences 345 (2016), 340–354.
    [172]
    Hiranmayi Ranganathan, Shayok Chakraborty, and Sethuraman Panchanathan. 2016. Multimodal emotion recognition using deep learning architectures. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision. IEEE, 1–9.
    [173]
    Hiranmayi Ranganathan, Hemanth Venkateswara, Shayok Chakraborty, and Sethuraman Panchanathan. 2017. Deep active learning for image classification. In Proceedings of the IEEE International Conference on Image Processing, 3934–3938.
    [174]
    Pengzhen Ren, Yun Xiao, Xiaojun Chang, Poyao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang. 2021. A comprehensive survey of neural architecture search: Challenges and solutions. ACM Comput. Surv. 54, 4 (2021), 76:1–76:34.
    [175]
    Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241.
    [176]
Matthias Rottmann, Karsten Kahl, and Hanno Gottschalk. 2018. Deep Bayesian active semi-supervised learning. In Proceedings of the 17th IEEE International Conference on Machine Learning and Applications. IEEE, 158–164.
    [177]
Nicholas Roy and Andrew McCallum. 2001. Toward optimal active learning through Monte Carlo estimation of error reduction. In Proceedings of the International Conference on Machine Learning, 441–448.
    [178]
    Soumya Roy, Asim Unmesh, and Vinay P. Namboodiri. 2018. Deep active learning for object detection. In Proceedings of the British Machine Vision Conference 2018. BMVA Press, 91.
    [179]
    David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning internal representations by error propagation. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1. MIT Press. 318–362.
    [180]
    David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back-propagating errors. Nature 323 (1986), 533–536.
    [181]
    Ario Sadafi, Niklas Koehler, Asya Makhro, Anna Bogdanova, Nassir Navab, Carsten Marr, and Tingying Peng. 2019. Multiclass deep active learning for detecting red blood cell subtypes in brightfield microscopy. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, 685–693.
    [182]
Mathew Salvaris, Danielle Dean, and Wee Hyong Tok. 2018. Generative adversarial networks. arXiv:1406.2661. Retrieved from https://arxiv.org/abs/1406.2661.
    [183]
    Conrad Sanderson. 2008. Biometric Person Recognition: Face, Speech and Fusion. Vol. 4. VDM Publishing.
    [184]
    Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the 7th Conference on Natural Language Learning. ACL, 142–147.
    [185]
    Yassir Saquil, Kwang In Kim, and Peter Hall. 2018. Ranking CGANs: Subjective control over semantic image attributes. In Proceedings of the British Machine Vision Conference 2018. BMVA Press, 131.
    [186]
    G. Sayantan, P. T. Kien, and K. V. Kadambari. 2018. Classification of ECG beats using deep belief network and active learning. Medical & Biological Engineering & Computing 56, 10 (2018), 1887–1898.
    [187]
Melanie Lubrano Di Scandalea, Christian S. Perone, Mathieu Boudreau, and Julien Cohenadad. 2019. Deep active learning for axon-myelin segmentation on histology data. arXiv:1907.05143. Retrieved from https://arxiv.org/abs/1907.05143.
    [188]
    Tobias Scheffer, Christian Decomain, and Stefan Wrobel. 2001. Active hidden markov models for information extraction. In Proceedings of the 4th International Conference on Advances in Intelligent Data Analysis. Frank Hoffmann, David J. Hand, Niall M. Adams, Douglas H. Fisher, Gabriela Guimarães (Eds.), Lecture Notes in Computer Science, Vol. 2189. Springer, 309–318.
    [189]
Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. In Proceedings of the 20th Annual Conference of the International Speech Communication Association. 3465–3469.
    [190]
Christopher Schröder and Andreas Niekler. 2020. A survey of active learning for text classification using deep neural networks. arXiv:2008.07267. Retrieved from https://arxiv.org/abs/2008.07267.
    [191]
Ozan Sener and Silvio Savarese. 2017. A geometric approach to active learning for convolutional neural networks. arXiv:1708.00489. Retrieved from https://arxiv.org/abs/1708.00489.
    [192]
    Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In Proceedings of the International Conference on Learning Representations.
    [193]
    Burr Settles. 2009. Active Learning Literature Survey. Technical Report TR-1648. University of Wisconsin-Madison Department of Computer Sciences.
    [194]
Burr Settles. 2012. Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, Vol. 6. Morgan & Claypool.
    [195]
    Burr Settles, Mark Craven, and Soumya Ray. 2007. Multiple-instance active learning. In Proceedings of the 20th International Conference on Neural Information Processing Systems, 1289–1296.
    [196]
H. S. Seung, M. Opper, and H. Sompolinsky. 1992. Query by committee. In Proceedings of the 5th Annual Workshop on Computational Learning Theory. 287–294.
    [197]
    Matthew Shardlow, Meizhi Ju, Maolin Li, Christian O’Reilly, Elisabetta Iavarone, John McNaught, and Sophia Ananiadou. 2019. A text mining pipeline using active and deep learning aimed at curating information in computational neuroscience. Neuroinformatics 17, 3 (2019), 391–406.
    [198]
    Artem Shelmanov, Vadim Liventsev, Danil Kireev, Nikita Khromov, Alexander Panchenko, Irina Fedulova, and Dmitry V. Dylov. 2019. Active learning with deep pre-trained models for sequence tagging of clinical and biomedical texts. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine. IEEE, 482–489.
    [199]
Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, and Animashree Anandkumar. 2018. Deep active learning for named entity recognition. In Proceedings of the 6th International Conference on Learning Representations.
    [200]
    Changjian Shui, Fan Zhou, Christian Gagné, and Boyu Wang. 2020. Deep active learning: Unified and principled method for query and training. In Proceedings of the International Conference on Artificial Intelligence and Statistics. 1308–1318.
    [201]
Aditya Siddhant and Zachary C. Lipton. 2018. Deep Bayesian active learning for natural language processing: Results of a large-scale empirical study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2904–2909.
    [202]
Oriane Siméoni, Mateusz Budnik, Yannis Avrithis, and Guillaume Gravier. 2020. Rethinking deep active learning: Using unlabeled data at model training. In Proceedings of the 25th International Conference on Pattern Recognition. 1220–1227.
    [203]
    Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations.
    [204]
    Samarth Sinha, Sayna Ebrahimi, and Trevor Darrell. 2019. Variational adversarial active learning. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. IEEE, 5971–5980.
    [205]
    Asim Smailagic, Pedro Costa, Alex Gaudio, Kartik Khandelwal, Mostafa Mirshekari, Jonathon Fagert, Devesh Walawalkar, Susu Xu, Adrian Galdran, Pei Zhang, Aurélio Campilho, and Hae Young Noh. 2020. O-MedAL: Online active deep learning for medical image analysis. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 10, 4 (2020), e1353.
    [206]
Asim Smailagic, Pedro Costa, Hae Young Noh, Devesh Walawalkar, Kartik Khandelwal, Adrian Galdran, Mostafa Mirshekari, Jonathon Fagert, Susu Xu, Pei Zhang, and Aurélio Campilho. 2018. MedAL: Accurate and robust deep active learning for medical image analysis. In Proceedings of the 17th IEEE International Conference on Machine Learning and Applications. 481–488.
    [207]
    Justin S. Smith, Benjamin Nebgen, Nicholas Lubbers, Olexandr Isayev, and Adrian E. Roitberg. 2018. Less is more: Sampling chemical space with active learning. Journal of Chemical Physics 148, 24 (2018), 241733.
    [208]
Kihyuk Sohn, Xinchen Yan, and Honglak Lee. 2015. Learning structured output representation using deep conditional generative models. In Proceedings of the 28th International Conference on Neural Information Processing Systems. 3483–3491.
    [209]
    Kechen Song and Yunhui Yan. 2013. A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects. Applied Surface Science 285, Part B (2013), 858–864.
    [210]
    Fabio A. Spanhol, Luiz S. Oliveira, Caroline Petitjean, and Laurent Heutte. 2016. A dataset for breast cancer histopathological image classification. IEEE Transactions on Biomedical Engineering 63, 7 (2016), 1455–1462.
    [211]
Akash Srivastava, Lazar Valkov, Chris Russell, Michael U. Gutmann, and Charles Sutton. 2017. VEEGAN: Reducing mode collapse in GANs using implicit variational learning. In Proceedings of the Advances in Neural Information Processing Systems. 3310–3320.
    [212]
    Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, 1 (2014), 1929–1958.
    [213]
    Fabian Stark, Caner Hazırbas, Rudolph Triebel, and Daniel Cremers. 2015. Captcha recognition with active deep learning. In Proceedings of the Workshop New Challenges in Neural Computation, Vol. 2015. Citeseer, 94.
    [214]
Allan Stisen, Henrik Blunck, Sourav Bhattacharya, Thor Siiger Prentow, Mikkel Baun Kjærgaard, Anind Dey, Tobias Sonne, and Mads Møller Jensen. 2015. Smart devices are different: Assessing and mitigating mobile sensing heterogeneities for activity recognition. In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems. 127–140.
    [215]
Alexandra Swanson, Margaret Kosmala, Chris Lintott, Robert Simpson, Arfon Smith, and Craig Packer. 2015. Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna. Scientific Data 2, 1 (2015), 1–14.
    [216]
    Kuniyuki Takahashi, Tetsuya Ogata, Jun Nakanishi, Gordon Cheng, and Shigeki Sugano. 2017. Dynamic motion learning for multi-DOF flexible-joint robots using active–passive motor babbling through deep learning. Advanced Robotics 31, 18 (2017), 1002–1015.
    [217]
Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2019. Multilingual neural machine translation with knowledge distillation. In Proceedings of the 7th International Conference on Learning Representations.
    [218]
    Yao Tan, Liu Yang, Qinghua Hu, and Zhibin Du. 2019. Batch mode active learning for semantic segmentation based on multi-clue sample selection. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. ACM, 831–840.
    [219]
    Joseph Taylor, H. M. Sajjad Hossain, Mohammad Arif Ul Alam, Md Abdullah Al Hafiz Khan, Nirmalya Roy, Elizabeth Galik, and Aryya Gangopadhyay. 2017. SenseBox: A low-cost smart home system. In Proceedings of the 2017 IEEE International Conference on Pervasive Computing and Communications Workshops. IEEE, 60–62.
    [220]
    Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation. European Language Resources Association, 2214–2218.
    [221]
    Simon Tong. 2001. Active Learning: Theory and Applications. Vol. 1. Stanford University, USA.
    [222]
    Simon Tong and Daphne Koller. 2002. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research 2, 1 (2002), 45–66.
    [223]
    Toan Tran, Thanh-Toan Do, Ian D. Reid, and Gustavo Carneiro. 2019. Bayesian generative active deep learning. In Proceedings of the 36th International Conference on Machine Learning. PMLR, 6295–6304.
    [224]
Toan Tran, Trung Pham, Gustavo Carneiro, Lyle Palmer, and Ian Reid. 2017. A Bayesian data augmentation approach for learning deep models. In Proceedings of the Advances in Neural Information Processing Systems. 2797–2806.
    [225]
Kailas Vodrahalli, Ke Li, and Jitendra Malik. 2018. Are all training examples created equal? An empirical study. arXiv:1811.12569. Retrieved from https://arxiv.org/abs/1811.12569.
    [226]
    Byron C. Wallace, Michael J. Paul, Urmimala Sarkar, Thomas A. Trikalinos, and Mark Dredze. 2014. A large-scale quantitative analysis of latent factors and sentiment in online doctor reviews. Journal of the American Medical Informatics Association 21, 6 (2014), 1098–1103.
    [227]
    Dan Wang and Yi Shang. 2014. A new active labeling method for deep learning. In Proceedings of the 2014 International Joint Conference on Neural Networks, 112–119.
    [228]
Jiannan Wang, Guoliang Li, Jeffrey Xu Yu, and Jianhua Feng. 2011. Entity matching: How similar is similar. Proceedings of the VLDB Endowment 4, 10 (2011), 622–633.
    [229]
    K. Wang, D. Zhang, Y. Li, R. Zhang, and L. Lin. 2017. Cost-effective active learning for deep image classification. IEEE Transactions on Circuits and Systems for Video Technology 27, 12 (2017), 2591–2600.
    [230]
Menglin Wang, Baisheng Lai, Zhongming Jin, Xiaojin Gong, Jianqiang Huang, and Xiansheng Hua. 2018. Deep active learning for video-based person re-identification. arXiv:1812.05785. Retrieved from https://arxiv.org/abs/1812.05785.
    [231]
    Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics. Association for Computational Linguistics, 1810–1822.
    [232]
    Wenzhe Wang, Ruiwei Feng, Jintai Chen, Yifei Lu, Tingting Chen, Hongyun Yu, Danny Z. Chen, and Jian Wu. 2019. Nodule-Plus R-CNN and deep self-paced active learning for 3D instance segmentation of pulmonary nodules. IEEE Access 7 (2019), 128796–128805.
    [233]
    Wenzhe Wang, Yifei Lu, Bian Wu, Tingting Chen, Danny Z. Chen, and Jian Wu. 2018. Deep active self-paced learning for accurate pulmonary nodule segmentation. In Proceedings of the 2018 21st International Conference on Medical Image Computing and Computer Assisted Intervention. Alejandro F. Frangi, Julia A. Schnabel, Christos Davatzikos, Carlos Alberola-López, Gabor Fichtinger (Eds.), Lecture Notes in Computer Science, Vol. 11071, 723–731.
    [234]
    William Yang Wang. 2017. “Liar, liar pants on fire”: A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Regina Barzilay and Min-Yen Kan (Eds.). Association for Computational Linguistics, 422–426.
    [235]
    Zengmao Wang, Bo Du, Lefei Zhang, and Liangpei Zhang. 2016. A batch-mode active learning framework by querying discriminative and representative samples for hyperspectral image classification. Neurocomputing 179 (2016), 88–100.
    [236]
Max Welling and Yee Whye Teh. 2011. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on International Conference on Machine Learning. 681–688.
    [237]
    Yu Wu, Yutian Lin, Xuanyi Dong, Yan Yan, Wanli Ouyang, and Yi Yang. 2018. Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 5177–5186.
    [238]
Xide Xia, Pavlos Protopapas, and Finale Doshi-Velez. 2016. Cost-sensitive batch mode active learning: Designing astronomical observation by optimizing telescope time and telescope choice. In Proceedings of the 2016 SIAM International Conference on Data Mining. 477–485.
    [239]
Ismet Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Kumar Mahajan. 2019. Billion-scale semi-supervised learning for image classification. arXiv:1905.00546. Retrieved from https://arxiv.org/abs/1905.00546.
    [240]
    Yilin Yan, Min Chen, Saad Sadiq, and Mei-Ling Shyu. 2017. Efficient imbalanced multimedia concept retrieval by deep learning on spark clusters. International Journal of Multimedia Data Engineering and Management 8, 1 (2017), 1–20.
    [241]
Yilin Yan, Min Chen, Mei-Ling Shyu, and Shu-Ching Chen. 2015. Deep learning for imbalanced multimedia data classification. In Proceedings of the 2015 IEEE International Symposium on Multimedia. IEEE, 483–488.
    [242]
    Jie Yang, Thomas Drake, Andreas Damianou, and Yoelle Maarek. 2018. Leveraging crowdsourcing data for deep active learning–an application: Learning intents in alexa. In Proceedings of the 2018 World Wide Web Conference, 23–32.
    [243]
    Lin Yang, Yizhe Zhang, Jianxu Chen, Siyuan Zhang, and Danny Z. Chen. 2017. Suggestive annotation: A deep active learning framework for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention. 399–407.
    [244]
    C. Yin, B. Qian, S. Cao, X. Li, J. Wei, Q. Zheng, and I. Davidson. 2017. Deep similarity-based batch mode active learning with exploration-exploitation. In Proceedings of the 2017 IEEE International Conference on Data Mining. 575–584.
    [245]
    Donggeun Yoo and In So Kweon. 2019. Learning loss for active learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation/IEEE, 93–102.
    [246]
Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. 2020. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2633–2642.
    [247]
Jiaming Zeng, Adam Lesnikowski, and Jose M. Alvarez. 2018. The relevance of Bayesian layer positioning to model uncertainty in deep Bayesian active learning. arXiv:1811.12535. Retrieved from https://arxiv.org/abs/1811.12535.
    [248]
    Pei Zhang, Xueying Xu, and Deyi Xiong. 2018. Active learning for neural machine translation. In Proceedings of the 2018 International Conference on Asian Language Processing. IEEE, 153–158.
    [249]
    Shanshan Zhang, Rodrigo Benenson, and Bernt Schiele. 2017. CityPersons: A diverse dataset for pedestrian detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 4457–4465.
    [250]
    Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems. 649–657.
    [251]
    Ye Zhang, Matthew Lease, and Byron C. Wallace. 2017. Active discriminative text representation learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, 3386–3392.
    [252]
    Yizhe Zhang, Michael T. C. Ying, Lin Yang, Anil T. Ahuja, and Danny Z. Chen. 2016. Coarse-to-fine stacked fully convolutional nets for lymph node segmentation in ultrasound images. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine. IEEE Computer Society, 443–448.
    [253]
    Wencang Zhao, Yu Kong, Zhengming Ding, and Yun Fu. 2017. Deep active learning through cognitive information parcels. In Proceedings of the 25th ACM International Conference on Multimedia. 952–960.
    [254]
    Ziyuan Zhao, Xiaoyan Yang, Bharadwaj Veeravalli, and Zeng Zeng. 2020. Deeply supervised active learning for finger bones segmentation. In Proceedings of the IEEE Engineering in Medicine and Biology Society Conference (EMBC). 1620–1623.
    [255]
    Fedor Zhdanov. 2019. Diverse mini-batch active learning. arXiv:1901.05954. Retrieved from https://arxiv.org/abs/1901.05954.
    [256]
    Liang Zheng, Zhi Bie, Yifan Sun, Jingdong Wang, Chi Su, Shengjin Wang, and Qi Tian. 2016. MARS: A video benchmark for large-scale person re-identification. In Proceedings of the 14th European Conference on Computer Vision, Lecture Notes in Computer Science, Vol. 9910, Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (Eds.). Springer, 868–884.
    [257]
    Shusen Zhou, Qingcai Chen, and Xiaolong Wang. 2010. Active deep networks for semi-supervised sentiment classification. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters. 1515–1523.
    [258]
    Siqi Zhou and Angela P. Schoellig. 2019. Active training trajectory generation for inverse dynamics model learning with deep neural networks. In Proceedings of the 2019 IEEE 58th Conference on Decision and Control. IEEE, 1784–1790.
    [259]
    Jia-Jie Zhu and José Bento. 2017. Generative adversarial active learning. arXiv:1702.07956. Retrieved from https://arxiv.org/abs/1702.07956.
    [260]
    Xiaojin Zhu, John Lafferty, and Ronald Rosenfeld. 2005. Semi-supervised Learning with Graphs. Ph.D. Dissertation. Language Technologies Institute, School of Computer Science, Carnegie Mellon University. UMI Order Number: AAI 3179046.
    [261]
    Michal Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations Parallel Corpus v1.0. In Proceedings of the 10th International Conference on Language Resources and Evaluation. European Language Resources Association.
    [262]
    Barret Zoph and Quoc V. Le. 2017. Neural architecture search with reinforcement learning. In Proceedings of the 5th International Conference on Learning Representations.


      Published In

      ACM Computing Surveys, Volume 54, Issue 9
      December 2022
      800 pages
      ISSN: 0360-0300
      EISSN: 1557-7341
      DOI: 10.1145/3485140
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 08 October 2021
      Accepted: 01 June 2021
      Revised: 01 March 2021
      Received: 01 August 2020
      Published in CSUR Volume 54, Issue 9


      Author Tags

      1. Deep learning
      2. active learning
      3. deep active learning

      Qualifiers

      • Survey
      • Refereed

      Funding Sources

      • NSFC
      • Shaanxi Science and Technology Innovation Team Support
      • Australian Research Council Discovery Early Career Researcher Award
