A Comprehensive Survey of Few-shot Learning: Evolution, Applications, Challenges, and Opportunities

Published: 13 July 2023

Abstract

Few-shot learning (FSL) has emerged as an effective learning method with great potential. Despite recent creative work on FSL tasks, learning valid information rapidly from just a few or even zero samples remains a serious challenge. In this context, we extensively investigated more than 200 FSL papers published in top journals and conferences over the past three years, aiming to present a timely and comprehensive overview of the most recent advances in FSL from a fresh perspective and to provide an impartial comparison of the strengths and weaknesses of existing work. To avoid conceptual confusion, we first elaborate and contrast a set of related concepts, including few-shot learning, transfer learning, and meta-learning. We then organize the prior knowledge relevant to few-shot learning into a pyramid, which summarizes and classifies previous work in detail from the perspective of its challenges. To further enrich the survey, we provide in-depth analyses and insightful discussions of the recent advances in each subsection. Moreover, taking computer vision as an example, we highlight important applications of FSL across a range of research hotspots. Finally, we conclude with unique insights into technology trends and potential future research opportunities to guide follow-up FSL research.
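To make the few-shot setting concrete, the sketch below (an illustrative assumption, not a method from the survey; all function names are hypothetical) samples an N-way K-shot "episode" from a labeled feature pool and classifies the query examples by their distance to class prototypes, in the spirit of metric-based FSL approaches such as Prototypical Networks:

```python
import numpy as np

def sample_episode(features, labels, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample an N-way K-shot episode: a support set with K labeled
    examples per class plus a held-out query set to be classified."""
    if rng is None:
        rng = np.random.default_rng(0)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))
        support_x.append(features[idx[:k_shot]])
        support_y += [new_label] * k_shot
        q_idx = idx[k_shot:k_shot + n_query]
        query_x.append(features[q_idx])
        query_y += [new_label] * len(q_idx)
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))

def prototype_predict(support_x, support_y, query_x):
    """Assign each query to the nearest class prototype, i.e. the
    mean of that class's support embeddings."""
    prototypes = np.stack([support_x[support_y == c].mean(axis=0)
                           for c in np.unique(support_y)])
    dists = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```

In a real FSL pipeline the raw inputs would first pass through a learned embedding network; here the features are taken as given so the episode structure itself stays in focus.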

References

[1]
Wenbo Zheng, Lan Yan, Chao Gou, and Fei-Yue Wang. 2021. Federated meta-learning for fraudulent credit card detection. In Proceedings of the 29th International Conference on International Joint Conference on Artificial Intelligence. 4654–4660.
[2]
Adrian El Baz, Ihsan Ullah, et al. 2022. Lessons learned from the NeurIPS 2021 MetaDL challenge: Backbone fine-tuning without episodic meta-learning dominates for few-shot learning image classification. In NeurIPS 2021 Competitions and Demonstrations Track. PMLR, 80–96.
[3]
Yutai Hou, Xinghao Wang, Cheng Chen, Bohan Li, Wanxiang Che, and Zhigang Chen. 2022. FewJoint: Few-shot learning for joint dialogue understanding. Int. J. Mach. Learn. Cyber. (2022), 1–15.
[4]
Huimin Sun, Jiajie Xu, Kai Zheng, Pengpeng Zhao, Pingfu Chao, and Xiaofang Zhou. 2021. MFNP: A meta-optimized model for few-shot next POI recommendation. In Proceedings of the International Joint Conference on Artificial Intelligence. 3017–3023.
[5]
Elahe Rahimian, Soheil Zabihi, Amir Asif, S. Farokh Atashzar, and Arash Mohammadi. 2021. Few-shot learning for decoding surface electromyography for hand gesture recognition. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 1300–1304.
[6]
Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni. 2020. Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv. 53, 3 (2020), 1–34.
[7]
Jun Shu, Zongben Xu, and Deyu Meng. 2018. Small sample learning in big data era. arXiv preprint arXiv:1808.04572 (2018).
[8]
Jiang Lu, Pinghua Gong, Jieping Ye, and Changshui Zhang. 2020. Learning from very few samples: A survey. arXiv preprint arXiv:2009.02653 (2020).
[9]
Endel Tulving. 1985. How many memory systems are there? Amer. Psychol. 40, 4 (1985), 385.
[10]
Biqi Wang, Yang Xu, Zebin Wu, Tianming Zhan, and Zhihui Wei. 2022. Spatial-spectral local domain adaption for cross domain few shot hyperspectral images classification. IEEE Trans. Geosci. Remote Sens. 60 (2022), 1–15. DOI:
[11]
Luiz F. O. Chamon and Alejandro Ribeiro. 2020. Probably approximately correct constrained learning. In Proceedings of the Annual Conference on Neural Information Processing Systems. Retrieved from https://proceedings.neurips.cc/paper/2020/hash/c291b01517f3e6797c774c306591cc32-Abstract.html.
[12]
Shuo Yang, Lu Liu, and Min Xu. 2021. Free lunch for few-shot learning: Distribution calibration. In Proceedings of the 9th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=JWOiYxMG92s.
[13]
Atsushi Suzuki, Atsushi Nitanda, Jing Wang, Linchuan Xu, Kenji Yamanishi, and Marc Cavazza. 2021. Generalization error bound for hyperbolic ordinal embedding. In Proceedings of the 38th International Conference on Machine Learning. PMLR, 10011–10021. Retrieved from http://proceedings.mlr.press/v139/suzuki21a.html.
[14]
Tianshi Cao, Marc T. Law, and Sanja Fidler. 2020. A theoretical analysis of the number of shots in few-shot learning. In Proceedings of the 8th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=HkgB2TNYPS.
[15]
Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. 2018. Feature generating networks for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5542–5551.
[16]
Muhammad Ferjad Naeem, Yongqin Xian, Federico Tombari, and Zeynep Akata. 2021. Learning graph embeddings for compositional zero-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 953–962.
[17]
Joaquin Vanschoren. 2018. Meta-learning: A survey. arXiv e-prints (2018), arXiv–1810.
[18]
Kevin Li, Abhishek Gupta, Ashwin Reddy, Vitchyr H. Pong, Aurick Zhou, Justin Yu, and Sergey Levine. 2021. MURAL: Meta-learning uncertainty-aware rewards for outcome-driven reinforcement learning. In Proceedings of the International Conference on Machine Learning. PMLR, 6346–6356.
[19]
Gongjie Zhang, Zhipeng Luo, Kaiwen Cui, Shijian Lu, and Eric P. Xing. 2022. Meta-DETR: Image-level few-shot detection with inter-class correlation exploitation. IEEE Trans. Pattern Anal. Mach. Intell. (2022). DOI:
[20]
Meng Cheng, Hanli Wang, and Yu Long. 2021. Meta-learning based incremental few-shot object detection. IEEE Trans. Circ. Syst. Vid. Technol. (2021).
[21]
Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle. 2020. Meta-Dataset: A dataset of datasets for learning to learn from few examples. In Proceedings of the 8th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=rkgAGAVKPr.
[22]
Yunhui Guo, Noel C. Codella, Leonid Karlinsky, James V. Codella, John R. Smith, Kate Saenko, Tajana Rosing, and Rogerio Feris. 2020. A broader study of cross-domain few-shot learning. In Proceedings of the European Conference on Computer Vision. Springer, 124–141.
[23]
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. Adv. Neural Inf. Process. Syst. 29 (2016), 3630–3638.
[24]
Technical Report.
[25]
Ehtesham Iqbal, Sirojbek Safarov, and Seongdeok Bang. 2022. MSANet: Multi-similarity and attention guidance for boosting few-shot segmentation. arXiv preprint arXiv:2206.09667 (2022).
[26]
Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Chonghua Liao, Li Jian, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, and Zhilin Yang. 2022. FewNLU: Benchmarking state-of-the-art methods for few-shot natural language understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 501–516. DOI:
[27]
Liang Xu, Xiaojing Lu, Chenyang Yuan, Xuanwei Zhang, Huilin Xu, Hu Yuan, Guoao Wei, Xiang Pan, Xin Tian, Libo Qin, et al. 2021. FewCLUE: A Chinese few-shot learning evaluation benchmark. arXiv preprint arXiv:2107.07498 (2021).
[28]
Patrizio Bellan, Han van der Aa, Mauro Dragoni, Chiara Ghidini, and Simone Paolo Ponzetto. 2022. PET: A new dataset for process extraction from natural language text. arXiv e-prints (2022), arXiv–2203.
[29]
Alex Krizhevsky, Geoffrey Hinton, et al. 2009. Learning multiple layers of features from tiny images. (2009).
[30]
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 248–255.
[31]
Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, and Richard S. Zemel. [n. d.]. Meta-learning for semi-supervised few-shot classification. Training 1, 2 ([n. d.]), 3.
[32]
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 3 (2015), 211–252.
[33]
Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. 2019. The Omniglot challenge: A 3-year progress report. Curr. Opin. Behav. Sci.ences 29 (2019), 97–104.
[34]
Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. 2013. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151 (2013).
[35]
Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. 2014. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3606–3613.
[36]
Lukáš Picek, Milan Šulc, Jiří Matas, et al. 2022. Danish Fungi 2020—Not just another image recognition dataset. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 1525–1535.
[37]
D. Temel, M. Chen, and G. AlRegib. 2019. Traffic sign detection under challenging conditions: A deeper look into performance variations and spectral characteristics. IEEE Trans. Intell. Transport. Syst. (2019), 1–11. DOI:
[38]
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision. Springer, 740–755.
[39]
Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. 2019. EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE J. Select. Topics Appl. Earth Observ. and Remote Sensing (2019).
[40]
Noel Codella, Veronica Rotemberg, Philipp Tschandl, et al. 2019. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC). arXiv preprint arXiv:1902.03368 (2019).
[41]
Xiaosong Wang, Yifan Peng, Le Lu, et al. 2017. ChestX-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Computer Vision and Pattern Recognition Conference. 2097–2106.
[42]
T. Schick and H. Schütze. 2020. It’s not just size that matters: Small language models are also few-shot learners. arXiv preprint arXiv:2009.07118 (2020).
[43]
Jingyi Xu and Hieu Le. 2022. Generating representative samples for few-shot classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9003–9013.
[44]
Jingyi Xu, Hieu Le, Mingzhen Huang, ShahRukh Athar, and Dimitris Samaras. 2021. Variational feature disentangling for fine-grained few-shot classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 8812–8821.
[45]
Jiawei Yang, Hanbo Chen, Jiangpeng Yan, Xiaoyu Chen, and Jianhua Yao. 2022. Towards better understanding and better generalization of few-shot classification in histology images with contrastive learning. arXiv preprint arXiv:2202.09059 (2022).
[46]
Cheng Perng Phoo and Bharath Hariharan. 2021. Self-training for few-shot transfer across extreme task differences. In Proceedings of the 9th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=O3Y56aqpChA.
[47]
Michalis Lazarou, Tania Stathaki, and Yannis Avrithis. 2021. Iterative label cleaning for transductive and semi-supervised few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 8751–8760.
[48]
Jie Ling, Lei Liao, Meng Yang, and Jia Shuai. 2022. Semi-supervised few-shot learning via multi-factor clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 14564–14573.
[49]
Yufei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, and Daxin Jiang. 2022. PromDA: Prompt-based data augmentation for low-resource NLU tasks. arXiv preprint arXiv:2202.12499 (2022).
[50]
Ibrahim Aksu, Zhengyuan Liu, Min-Yen Kan, and Nancy Chen. 2022. N-shot learning for augmenting task-oriented dialogue state tracking. In Findings of the Association for Computational Linguistics. 1659–1671.
[51]
Hang Qi, Matthew Brown, and David G. Lowe. 2018. Low-shot learning with imprinted weights. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5822–5830.
[52]
Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. 2018. AutoAugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501 (2018).
[53]
Jason W. Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, 6381–6387. DOI:
[54]
Zitian Chen, Yanwei Fu, Yu-Xiong Wang, et al. 2019. Image deformation meta-networks for one-shot learning. In Proceedings of the IEEE/CVF Computer Vision and Pattern Recognition Conference. 8680–8689.
[55]
Junjie Li, Zilei Wang, and Xiaoming Hu. 2021. Learning intact features by erasing-inpainting for few-shot classification. In Proceedings of the AAAI Conference on Artificial Intelligence. 8401–8409.
[56]
Yuhang Huang, Fangle Chang, Yu Tao, Yangfan Zhao, Longhua Ma, and Hongye Su. 2022. Few-shot learning based on Attn-CutMix and task-adaptive transformer for the recognition of cotton growth state. Comput. Electron. Agric. 202 (2022), 107406. DOI:
[57]
Erik G. Miller, Nicholas E. Matsakis, and Paul A. Viola. 2000. Learning from one example through shared densities on transforms. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 464–471.
[58]
Bharath Hariharan and Ross Girshick. 2017. Low-shot visual recognition by shrinking and hallucinating features. In Proceedings of the IEEE International Conference on Computer Vision. 3018–3027.
[59]
Amy Zhao, Guha Balakrishnan, Fredo Durand, John V. Guttag, and Adrian V. Dalca. 2019. Data augmentation using learned transformations for one-shot medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8543–8553.
[60]
Roland Kwitt, Sebastian Hegenbart, and Marc Niethammer. 2016. One-shot learning of scene locations via feature trajectory transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 78–86.
[61]
Jing Zhou, Yanan Zheng, Jie Tang, Li Jian, and Zhilin Yang. 2022. FlipDA: Effective and robust data augmentation for few-shot learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 8646–8665. DOI:
[62]
Tomas Pfister, James Charles, and Andrew Zisserman. 2014. Domain-adaptive discriminative one-shot learning of gestures. In Proceedings of the European Conference on Computer Vision. Springer, 814–829.
[63]
Yu Wu, Yutian Lin, Xuanyi Dong, Yan Yan, Wanli Ouyang, and Yi Yang. 2018. Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5177–5186.
[64]
Yikai Wang, Chengming Xu, Chen Liu, Li Zhang, and Yanwei Fu. 2020. Instance credibility inference for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12836–12845.
[65]
Xinzhe Li, Qianru Sun, Yaoyao Liu, Qin Zhou, Shibao Zheng, Tat-Seng Chua, and Bernt Schiele. 2019. Learning to self-train for semi-supervised few-shot classification. Adv. Neural Inf. Process. Syst. 32 (2019), 10276–10286.
[66]
Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, and Shih-Fu Chang. 2018. Low-shot learning via covariance-preserving adversarial augmentation networks. In Proceedings of the Annual Conference on Neural Information Processing Systems. 983–993. Retrieved from https://proceedings.neurips.cc/paper/2018/hash/81448138f5f163ccdba4acc69819f280-Abstract.html.
[67]
Jianhong Zhang, Manli Zhang, Zhiwu Lu, and Tao Xiang. 2021. AdarGCN: Adaptive aggregation GCN for few-shot learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 3482–3491.
[68]
Haofeng Zhang, Li Liu, Yang Long, Zheng Zhang, and Ling Shao. 2020. Deep transductive network for generalized zero shot learning. Pattern Recog. 105 (2020), 107370. DOI:
[69]
Ashraful Islam, Chun-Fu Richard Chen, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, and Richard J. Radke. 2021. Dynamic distillation network for cross-domain few-shot recognition with unlabeled data. Adv. Neural Inf. Process. Syst. 34 (2021), 3584–3595.
[70]
Amit Alfassy, Leonid Karlinsky, Amit Aides, Joseph Shtok, Sivan Harary, Rogerio Feris, Raja Giryes, and Alex M. Bronstein. 2019. LaSO: Label-set operations networks for multi-label few-shot learning. In Proceedings of the IEEE/CVF Computer Vision and Pattern Recognition Conference. 6548–6557.
[71]
Zhiyong Dai, Jianjun Yi, Lei Yan, Qingwen Xu, Liang Hu, Qi Zhang, Jiahui Li, and Guoqiang Wang. 2023. PFEMed: Few-shot medical image classification using prior guided feature enhancement. Pattern Recog. 134 (2023), 109108. DOI:
[72]
Mengting Chen, Yuxin Fang, Xinggang Wang, Heng Luo, Yifeng Geng, Xinyu Zhang, Chang Huang, Wenyu Liu, and Bo Wang. 2020. Diversity transfer network for few-shot learning. In Proceedings of the AAAI Conference on Artificial Intelligence. 10559–10566.
[73]
Kai Li, Yulun Zhang, Kunpeng Li, and Yun Fu. 2020. Adversarial feature hallucination networks for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 13470–13479.
[74]
Yao-Hung Hubert Tsai and Ruslan Salakhutdinov. 2017. Improving one-shot learning through fusing side information. CoRR abs/1710.08347 (2017).
[75]
Matthijs Douze, Arthur Szlam, Bharath Hariharan, and Hervé Jégou. 2018. Low-shot learning with large-scale diffusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3349–3358.
[76]
Eli Schwartz, Leonid Karlinsky, Joseph Shtok, Sivan Harary, Mattias Marder, Abhishek Kumar, Rogério Schmidt Feris, Raja Giryes, and Alexander M. Bronstein. 2018. Delta-encoder: An effective sample synthesis method for few-shot object recognition. In Proceedings of the Annual Conference on Neural Information Processing Systems. 2850–2860. Retrieved from https://proceedings.neurips.cc/paper/2018/hash/1714726c817af50457d810aae9d27a2e-Abstract.html.
[77]
Aoxue Li, Tiange Luo, Tao Xiang, Weiran Huang, and Liwei Wang. 2019. Few-shot learning with global class representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 9715–9724.
[78]
Tianyu Gao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, and Maosong Sun. 2020. Neural snowball for few-shot relation learning. In Proceedings of the AAAI Conference on Artificial Intelligence. 7772–7779.
[79]
Wentao Cui and Yuhong Guo. 2021. Parameterless transductive feature re-representation for few-shot learning. In Proceedings of the International Conference on Machine Learning. PMLR, 2212–2221.
[80]
Cheng Perng Phoo and Bharath Hariharan. 2021. Coarsely-labeled data for better few-shot transfer. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 9052–9061.
[81]
Kai Huang, Jie Geng, Wen Jiang, Xinyang Deng, and Zhe Xu. 2021. Pseudo-loss confidence metric for semi-supervised few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 8671–8680.
[82]
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
[83]
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the 9th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=YicbFdNTTy.
[84]
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365 (2018).
[85]
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, 4171–4186. DOI:
[86]
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. (2018).
[87]
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, 2463–2473. DOI:
[88]
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33 (2020), 1877–1901.
[89]
Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. 2020. Generative pretraining from pixels. In Proceedings of the International Conference on Machine Learning. PMLR, 1691–1703.
[90]
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 16000–16009.
[91]
Nanxuan Zhao, Zhirong Wu, Rynson W. H. Lau, and Stephen Lin. 2020. What makes instance discrimination good for transfer learning? arXiv preprint arXiv:2006.06606 (2020).
[92]
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. 2019. A closer look at few-shot classification. In Proceedings of the 7th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=HkxLXnAcFQ.
[93]
Zhiqiang Shen, Zechun Liu, Jie Qin, Marios Savvides, and Kwang-Ting Cheng. 2021. Partial is better than all: Revisiting fine-tuning strategy for few-shot learning. In Proceedings of the AAAI Conference on Artificial Intelligence. 9594–9602.
[94]
Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, and Se-Young Yun. 2022. ReFine: Re-randomization before fine-tuning for cross-domain few-shot learning. arXiv preprint arXiv:2205.05282 (2022).
[95]
Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, and Phillip Isola. 2020. Rethinking few-shot image classification: A good embedding is all you need? In Proceedings of the 16th European Conference on Computer Vision. Springer, 266–282.
[96]
Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Mubarak Shah. 2021. Self-supervised knowledge distillation for few-shot learning. In Proceedings of the 32nd British Machine Vision Conference. BMVA Press, 179. Retrieved from https://www.bmvc2021-virtualconference.com/assets/papers/0820.pdf.
[97]
Carl Doersch, Ankush Gupta, and Andrew Zisserman. 2020. CrossTransformers: Spatially-aware few-shot transfer. In Proceedings of the Annual Conference on Neural Information Processing Systems. Retrieved from https://proceedings.neurips.cc/paper/2020/hash/fa28c6cdf8dd6f41a657c3d7caa5c709-Abstract.html.
[98]
Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. 2019. Cross attention network for few-shot classification. Adv. Neural Inf. Process. Syst. 32 (2019).
[99]
Xilang Huang and Seon Han Choi. 2023. SAPENet: Self-attention based prototype enhancement network for few-shot learning. Pattern Recog. 135 (2023), 109170. DOI:
[100]
Shell Xu Hu, Da Li, Jan Stühmer, et al. 2022. Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9068–9077.
[101]
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100, 000+ questions for machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. ACL, 2383–2392. DOI:
[102]
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Adv. Neural Inf. Process. Syst. 30 (2017).
[103]
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 7th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=rJ4km2R5t7.
[104]
Jieh-Sheng Lee and Jieh Hsiang. 2020. Patent claim generation by fine-tuning OpenAI GPT-2. World Patent Inf. 62 (2020), 101983.
[105]
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, and Yoav Artzi. 2021. Revisiting few-sample BERT fine-tuning. In Proceedings of the 9th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=cO1IH43yUF.
[106]
Xu Luo, Longhui Wei, Liangjian Wen, Jinrong Yang, Lingxi Xie, Zenglin Xu, and Qi Tian. 2021. Rectifying the shortcut learning of background for few-shot learning. Adv. Neural Inf. Process. Syst. 34 (2021), 13073–13085.
[107]
Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using BART. In Findings of the Association for Computational Linguistics: ACL/IJCNLP (Findings of ACL). Association for Computational Linguistics, 1835–1845. DOI:
[108]
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing. ACL, 4582–4597. DOI:
[109]
Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Haitao Zheng, and Maosong Sun. 2022. OpenPrompt: An open-source framework for prompt-learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 105–113. DOI:
[110]
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 3045–3059. DOI:
[111]
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066 (2019).
[112]
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics. ACL, 255–269. DOI:
[113]
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Trans. Assoc. Computat. Ling. 8 (2020), 423–438.
[114]
T. Shin, Y. Razeghi, R. L. Logan IV, E. Wallace, and S. Singh. 2020. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. ACL, 4222–4235. DOI:
[115]
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing. 3816–3830. DOI:
[116]
Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, and Jun Huang. 2022. Making pre-trained language models end-to-end few-shot learners with contrastive prompt tuning. arXiv preprint arXiv:2204.00166 (2022).
[117]
Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [MASK]: Learning vs. learning to recall. arXiv preprint arXiv:2104.05240 (2021).
[118]
Xiao Liu, Yanan Zheng, Zhengxiao Du, et al. 2021. GPT understands, too. arXiv preprint arXiv:2103.10385 (2021).
[119]
Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang. 2022. PPT: Pre-trained prompt tuning for few-shot learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 8410–8423. DOI:
[120]
Wang Yan, Jordan Yap, and Greg Mori. 2015. Multi-task transfer methods to improve one-shot learning for multimedia event detection. In Proceedings of the British Machine Vision Conference. 37–1.
[121]
Saeid Motiian, Quinn Jones, Seyed Iranmanesh, and Gianfranco Doretto. 2017. Few-shot adversarial domain adaptation. Adv. Neural Inf. Process. Syst. 30 (2017).
[122]
Sagie Benaim and Lior Wolf. 2018. One-shot unsupervised cross domain translation. Adv. Neural Inf. Process. Syst. 31 (2018).
[123]
Samaneh Azadi, Matthew Fisher, Vladimir G. Kim, Zhaowen Wang, Eli Shechtman, and Trevor Darrell. 2018. Multi-content GAN for few-shot font style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7564–7573.
[124]
Bo Liu, Xudong Wang, Mandar Dixit, Roland Kwitt, and Nuno Vasconcelos. 2018. Feature space transfer for data augmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 9090–9098.
[125]
Zikun Hu, Xiang Li, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Few-shot charge prediction with discriminative legal attributes. In Proceedings of the 27th International Conference on Computational Linguistics. 487–498.
[126]
Yabin Zhang, Hui Tang, and Kui Jia. 2018. Fine-grained visual categorization using meta-learning optimization with sample selection of auxiliary data. In Proceedings of the European Conference on Computer Vision (ECCV). 233–248.
[127]
Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, and Matthieu Cord. 2019. Boosting few-shot visual learning with self-supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 8059–8068.
[128]
R. Zhang, P. Isola, and A. A. Efros. 2016. Colorful image colorization. In Proceedings of the European Conference on Computer Vision. Springer, 649–666.
[129]
Jong-Chyi Su, Subhransu Maji, and Bharath Hariharan. 2020. When does self-supervision improve few-shot learning? In Proceedings of the European Conference on Computer Vision. Springer, 645–666.
[130]
Zijun Li, Zhengping Hu, Weiwei Luo, and Xiao Hu. 2023. SaberNet: Self-attention based effective relation network for few-shot learning. Pattern Recog. 133 (2023), 109024. DOI:
[131]
Rui Feng, Hongbing Ji, Zhigang Zhu, and Lei Wang. 2022. SelfNet: A semi-supervised local Fisher discriminant network for few-shot learning. Neurocomputing 512 (2022), 352–362. DOI:
[132]
Zhengyu Chen, Jixie Ge, Heshen Zhan, Siteng Huang, and Donglin Wang. 2021. Pareto self-supervised training for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 13663–13672.
[133]
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning. PMLR, 1126–1135.
[134]
Aravind Rajeswaran, Chelsea Finn, Sham M. Kakade, and Sergey Levine. 2019. Meta-learning with implicit gradients. Adv. Neural Inf. Process. Syst. 32 (2019).
[135]
Xiaomeng Zhu and Shuxiao Li. 2022. MGML: Momentum group meta-learning for few-shot image classification. Neurocomputing 514 (2022), 351–361.
[136]
Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999 (2018).
[137]
Alex Nichol and John Schulman. 2018. Reptile: A scalable metalearning algorithm. arXiv preprint arXiv:1803.02999 (2018).
[138]
Ondrej Bohdal, Yongxin Yang, and Timothy Hospedales. 2021. EvoGrad: Efficient gradient-based meta-learning and hyperparameter optimization. Adv. Neural Inf. Process. Syst. 34 (2021), 22234–22246.
[139]
Xingyou Song, Wenbo Gao, Yuxiang Yang, Krzysztof Choromanski, Aldo Pacchiano, and Yunhao Tang. 2019. ES-MAML: Simple Hessian-free meta learning. arXiv preprint arXiv:1910.01215 (2019).
[140]
Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. 2017. Meta-SGD: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835 (2017).
[141]
Yiluan Guo and Ngai-Man Cheung. 2020. Attentive weights generation for few shot learning via information maximization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 13499–13508.
[142]
Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. 2019. Meta-learning with latent embedding optimization. In Proceedings of the International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=BJgklhAcK7.
[143]
Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas L. Griffiths. 2018. Recasting gradient-based meta-learning as hierarchical Bayes. In Proceedings of the International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=BJ_UL-k0b.
[144]
Chelsea Finn, Kelvin Xu, and Sergey Levine. 2018. Probabilistic model-agnostic meta-learning. Adv. Neural Inf. Process. Syst. 31 (2018).
[145]
Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. 2018. Bayesian model-agnostic meta-learning. Adv. Neural Inf. Process. Syst. 31 (2018).
[146]
Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O’Boyle, and Amos Storkey. 2020. Bayesian meta-learning for the few-shot setting via deep kernels. Adv. Neural Inf. Process. Syst. 33 (2020).
[147]
Sachin Ravi and Alex Beatson. 2018. Amortized Bayesian meta-learning. In Proceedings of the International Conference on Learning Representations.
[148]
Yoonho Lee and Seungjin Choi. 2018. Gradient-based meta-learning with learned layerwise metric and subspace. In Proceedings of the International Conference on Machine Learning. PMLR, 2927–2936.
[149]
Muhammad Abdullah Jamal and Guo-Jun Qi. 2019. Task agnostic meta-learning for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 11719–11727.
[150]
Taewon Jeong and Heeyoung Kim. 2020. OOD-MAML: Meta-learning for few-shot out-of-distribution detection and classification. Adv. Neural Inf. Process. Syst. 33 (2020), 3907–3916.
[151]
Huaxiu Yao, Xian Wu, Zhiqiang Tao, Yaliang Li, Bolin Ding, Ruirui Li, and Zhenhui Li. 2020. Automated relational meta-learning. In Proceedings of the International Conference on Learning Representations.
[152]
Sebastian Flennerhag, Andrei A. Rusu, Razvan Pascanu, Francesco Visin, Hujun Yin, and Raia Hadsell. 2019. Meta-learning with warped gradient descent. arXiv preprint arXiv:1909.00025 (2019).
[153]
Muhammad Abdullah Jamal, Liqiang Wang, and Boqing Gong. 2021. A lazy approach to long-horizon gradient-based meta-learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 6577–6586.
[154]
Han-Jia Ye and Wei-Lun Chao. 2022. How to train your MAML to excel in few-shot classification. In Proceedings of the 10th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=49h_IkpJtaE.
[155]
Arnav Chavan, Rishabh Tiwari, Udbhav Bamba, and Deepak K. Gupta. 2022. Dynamic kernel selection for improved generalization and memory efficiency in meta-learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9851–9860.
[156]
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. 2016. Meta-learning with memory-augmented neural networks. In Proceedings of the International Conference on Machine Learning. PMLR, 1842–1850.
[157]
Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. 2017. Learning to remember rare events. In Proceedings of the 5th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=SJTQLdqlg.
[158]
Tiago Ramalho and Marta Garnelo. 2019. Adaptive posterior learning: Few-shot learning with a surprise-based memory module. In Proceedings of the 7th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=ByeSdsC9Km.
[159]
Ruiying Geng, Binhua Li, Yongbin Li, Jian Sun, and Xiaodan Zhu. 2020. Dynamic memory induction networks for few-shot text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. ACL, 1087–1094.
[160]
Xuncheng Liu, Xudong Tian, Shaohui Lin, Yanyun Qu, Lizhuang Ma, Wang Yuan, Zhizhong Zhang, and Yuan Xie. 2021. Learn from concepts: Towards the purified memory for few-shot learning. In Proceedings of the International Joint Conference on Artificial Intelligence. 888–894.
[161]
Tianqin Li, Zijie Li, Andrew Luo, Harold Rockwell, Amir Barati Farimani, and Tai Sing Lee. 2021. Prototype memory and attention mechanisms for few shot image generation. In Proceedings of the International Conference on Learning Representations.
[162]
Ying-Jun Du, Xiantong Zhen, Ling Shao, and Cees G. M. Snoek. 2022. Hierarchical variational memory for few-shot learning across domains. In Proceedings of the 10th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=i3RI65sR7N.
[163]
Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. 2018. Efficient neural architecture search via parameters sharing. In Proceedings of the International Conference on Machine Learning. PMLR, 4095–4104.
[164]
Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. 2018. Understanding and simplifying one-shot architecture search. In Proceedings of the International Conference on Machine Learning. PMLR, 550–559.
[165]
Yiyang Zhao, Linnan Wang, Yuandong Tian, Rodrigo Fonseca, and Tian Guo. 2021. Few-shot neural architecture search. In Proceedings of the International Conference on Machine Learning. PMLR, 12707–12718.
[166]
Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, and Frank Hutter. 2020. Meta-learning of neural architectures for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12365–12375.
[167]
Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2018. DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018).
[168]
Jorge Gonzalez-Zapata, Ivan Reyes-Amezcua, Daniel Flores-Araiza, Mauricio Mendez-Ruiz, Gilberto Ochoa-Ruiz, and Andres Mendez-Vazquez. 2022. Guided deep metric learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 1481–1489.
[169]
You Zhou, Changlin Chen, and Shukun Ma. 2021. Few-shot ship classification based on metric learning. Multim. Syst. (2021), 1–10.
[170]
Gregory Koch, Richard Zemel, Ruslan Salakhutdinov, et al. 2015. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, Vol. 2. Lille.
[171]
Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In Proceedings of the International Workshop on Similarity-based Pattern Recognition. Springer, 84–92.
[172]
Xiaomeng Li, Lequan Yu, Chi-Wing Fu, Meng Fang, and Pheng-Ann Heng. 2020. Revisiting metric learning for few-shot image classification. Neurocomputing 406 (2020), 49–58.
[173]
Weihua Chen, Xiaotang Chen, Jianguo Zhang, and Kaiqi Huang. 2017. Beyond triplet loss: A deep quadruplet network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 403–412.
[174]
Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. In Proceedings of the International Conference on Neural Information Processing Systems. 4077–4087. Retrieved from https://proceedings.neurips.cc/paper/2017/hash/cb8da6767461f2812ae4290eac7cbc42-Abstract.html.
[175]
Aoxue Li, Weiran Huang, Xu Lan, Jiashi Feng, Zhenguo Li, and Liwei Wang. 2020. Boosting few-shot learning with adaptive margin loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12576–12584.
[176]
Bin Liu, Yue Cao, Yutong Lin, Qi Li, Zheng Zhang, Mingsheng Long, and Han Hu. 2020. Negative margin matters: Understanding margin in few-shot classification. In Proceedings of the European Conference on Computer Vision. Springer, 438–455.
[177]
Xiaoxu Li, Yalan Li, Yixiao Zheng, Rui Zhu, Zhanyu Ma, Jing-Hao Xue, and Jie Cao. 2023. ReNAP: Relation network with adaptive prototypical learning for few-shot classification. Neurocomputing 520 (2023), 356–364.
[178]
Kaidi Cao, Maria Brbic, and Jure Leskovec. 2021. Concept learners for few-shot learning. In Proceedings of the 9th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=eJIJF3-LoZO.
[179]
Pau Rodríguez, Issam Laradji, Alexandre Drouin, and Alexandre Lacoste. 2020. Embedding propagation: Smoother manifold for few-shot classification. In Proceedings of the European Conference on Computer Vision. Springer, 121–138.
[180]
Zhong Ji, Xiyao Liu, Yanwei Pang, and Xuelong Li. 2020. SGAP-Net: Semantic-guided attentive prototypes network for few-shot human-object interaction recognition. In Proceedings of the AAAI Conference on Artificial Intelligence. 11085–11092.
[181]
Fangyu Wu, Jeremy S. Smith, Wenjin Lu, Chaoyi Pang, and Bailing Zhang. 2020. Attentive prototype few-shot learning with capsule network-based embedding. In Proceedings of the European Conference on Computer Vision. Springer, 237–253.
[182]
Van N. Nguyen, Sigurd Løkse, Kristoffer Wickstrøm, Michael Kampffmeyer, Davide Roverso, and Robert Jenssen. 2020. SEN: A novel feature normalization dissimilarity measure for prototypical few-shot learning networks. In Proceedings of the European Conference on Computer Vision. Springer, 118–134.
[183]
Wanqi Xue and Wei Wang. 2020. One-shot image classification by learning to restore prototypes. In Proceedings of the AAAI Conference on Artificial Intelligence. 6558–6565.
[184]
Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun. 2019. Hybrid attention-based prototypical networks for noisy few-shot relation classification. In Proceedings of the AAAI Conference on Artificial Intelligence. 6407–6414.
[185]
Philip Chikontwe, Soopil Kim, and Sang Hyun Park. 2022. CAD: Co-adapting discriminative features for improved few-shot classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 14554–14563.
[186]
Chi Zhang, Yujun Cai, Guosheng Lin, and Chunhua Shen. 2020. DeepEMD: Few-shot image classification with differentiable earth mover’s distance and structured classifiers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12203–12213.
[187]
Yidan Nie, Chunjiang Bian, and Ligang Li. 2022. Adap-EMD: Adaptive EMD for aircraft fine-grained classification in remote sensing. IEEE Geosci. Remote Sens. Lett. 19 (2022), 1–5.
[188]
Jiangtao Xie, Fei Long, Jiaming Lv, Qilong Wang, and Peihua Li. 2022. Joint distribution matters: Deep Brownian distance covariance for few-shot classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 7972–7981.
[189]
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1199–1208.
[190]
Xiaoxu Li, Jijie Wu, Zhuo Sun, Zhanyu Ma, Jie Cao, and Jing-Hao Xue. 2020. BSNet: Bi-similarity network for few-shot fine-grained image classification. IEEE Trans. Image Process. 30 (2020), 1318–1331.
[191]
Fusheng Hao, Fengxiang He, Jun Cheng, Lei Wang, Jianzhong Cao, and Dacheng Tao. 2019. Collect and select: Semantic alignment metric learning for few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 8460–8469.
[192]
Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. 2020. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 32, 1 (2020), 4–24.
[193]
Victor Garcia Satorras and Joan Bruna Estrach. 2018. Few-shot learning with graph neural networks. In Proceedings of the 6th International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=BJj6qGbRW.
[194]
Jongmin Kim, Taesup Kim, Sungwoong Kim, and Chang D. Yoo. 2019. Edge-labeling graph neural network for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 11–20.
[195]
Avishek Joey Bose, Ankit Jain, Piero Molino, and William L. Hamilton. 2019. Meta-graph: Few shot link prediction via meta learning. arXiv preprint arXiv:1912.09867 (2019).
[196]
Yuqing Ma, Shihao Bai, Shan An, Wei Liu, Aishan Liu, Xiantong Zhen, and Xianglong Liu. 2020. Transductive relation-propagation network for few-shot learning. In Proceedings of the International Joint Conference on Artificial Intelligence. 804–810.
[197]
Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. 2019. Hypergraph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence. 3558–3565.
[198]
Ling Yang, Liangliang Li, Zilun Zhang, Xinyu Zhou, Erjin Zhou, and Yu Liu. 2020. DPGN: Distribution propagation graph network for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 13390–13399.
[199]
Huikai Shao, Dexing Zhong, Xuefeng Du, Shaoyi Du, and Raymond N. J. Veldhuis. 2021. Few-shot learning for palmprint recognition via meta-Siamese network. IEEE Trans. Instrum. Measur. 70 (2021), 1–12.
[200]
Thomas Müller, Guillermo Pérez-Torró, and Marc Franco-Salvador. 2022. Few-shot learning with siamese networks and label tuning. arXiv preprint arXiv:2203.14655 (2022).
[201]
Congying Xia, Caiming Xiong, and Philip Yu. 2021. Pseudo siamese network for few-shot intent generation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2005–2009.
[202]
Kaixin Wang, Jun Hao Liew, Yingtian Zou, Daquan Zhou, and Jiashi Feng. 2019. PANet: Few-shot image semantic segmentation with prototype alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 9197–9206.
[203]
Baoquan Zhang, Xutao Li, Shanshan Feng, Yunming Ye, and Rui Ye. 2022. MetaNODE: Prototype optimization as a neural ODE for few-shot learning. In Proceedings of the AAAI Conference on Artificial Intelligence. 9014–9021.
[204]
Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, and Xiaolong Wang. 2021. Meta-baseline: Exploring simple meta-learning for few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision. IEEE, 9042–9051.
[205]
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning. PMLR, 8748–8763.
[206]
Edgar Schonfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, and Zeynep Akata. 2019. Generalized zero-and few-shot learning via aligned variational autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8247–8255.
[207]
Shuo Wang, Jun Yue, Jianzhuang Liu, Qi Tian, and Meng Wang. 2020. Large-scale few-shot learning via multi-modal knowledge discovery. In Proceedings of the European Conference on Computer Vision. Springer, 718–734.
[208]
Aoxue Li, Tiange Luo, Zhiwu Lu, Tao Xiang, and Liwei Wang. 2019. Large-scale few-shot learning: Knowledge transfer with class hierarchy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 7212–7220.
[209]
Eli Schwartz, Leonid Karlinsky, Rogerio Feris, Raja Giryes, and Alex Bronstein. 2022. Baby steps towards few-shot learning with multiple semantics. Pattern Recog. Lett. 160 (2022), 142–147.
[210]
Zhimao Peng, Zechao Li, Junge Zhang, Yan Li, Guo-Jun Qi, and Jinhui Tang. 2019. Few-shot image recognition with knowledge transfer. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 441–449.
[211]
Chen Xing, Negar Rostamzadeh, Boris Oreshkin, and Pedro O. O. Pinheiro. 2019. Adaptive cross-modal few-shot learning. Adv. Neural Inf. Process. Syst. 32 (2019), 4847–4857.
[212]
Frederik Pahde, Mihai Puscas, Tassilo Klein, and Moin Nabi. 2021. Multimodal prototypical networks for few-shot learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2644–2653.
[213]
Mathieu Pagé Fortin and Brahim Chaib-draa. 2021. Towards contextual learning in few-shot object classification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 3279–3288.
[214]
Michael Sollami and Aashish Jain. 2021. Multimodal conditionality for natural language generation. arXiv preprint arXiv:2109.01229 (2021).
[215]
Mengmeng Wang, Jiazheng Xing, and Yong Liu. 2021. ActionCLIP: A new paradigm for video action recognition. arXiv preprint arXiv:2109.08472 (2021).
[216]
Dongxu Li, Junnan Li, Hongdong Li, Juan Carlos Niebles, and Steven C. H. Hoi. 2022. Align and prompt: Video-and-language pre-training with entity prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4953–4963.
[217]
Yuan Yao, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2021. CPT: Colorful prompt tuning for pre-trained vision-language models. arXiv preprint arXiv:2109.11797 (2021).
[218]
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022. Learning to prompt for vision-language models. Int. J. Comput. Vis. (2022), 1–12.
[219]
Maria Tsimpoukelli, Jacob L. Menick, Serkan Cabi, S. M. Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal few-shot learning with frozen language models. Adv. Neural Inf. Process. Syst. 34 (2021), 200–212.
[220]
Daniel Shalam and Simon Korman. 2022. The self-optimal-transport feature transform. arXiv e-prints (2022).
[221]
Dong Hoon Lee and Sae-Young Chung. 2021. Unsupervised embedding adaptation via early-stage feature reconstruction for few-shot classification. arXiv preprint arXiv:2106.11486 (2021).
[222]
Haoxiang Wang, Han Zhao, and Bo Li. 2021. Bridging multi-task learning and meta-learning: Towards efficient training and effective adaptation. In Proceedings of the 38th International Conference on Machine Learning. PMLR, 10991–11002. Retrieved from http://proceedings.mlr.press/v139/wang21ad.html.
[223]
Mohamed Afham, Salman Khan, Muhammad Haris Khan, Muzammal Naseer, and Fahad Shahbaz Khan. 2021. Rich semantics improve few-shot learning. In Proceedings of the 32nd British Machine Vision Conference. BMVA Press, 152. Retrieved from https://www.bmvc2021-virtualconference.com/assets/papers/0444.pdf.
[224]
Mamshad Nayeem Rizve, Salman Khan, Fahad Shahbaz Khan, and Mubarak Shah. 2021. Exploring complementary strengths of invariant and equivariant representations for few-shot learning. In Proceedings of the IEEE/CVF Computer Vision and Pattern Recognition Conference. 10836–10846.
[225]
Reza Esfandiarpoor, Amy Pu, Mohsen Hajabdollahi, and Stephen H. Bach. 2020. Extended few-shot learning: Exploiting existing resources for novel tasks. arXiv preprint arXiv:2012.07176 (2020).
[226]
Haoxing Chen, Huaxiong Li, Yaohui Li, and Chunlin Chen. 2022. Multi-scale adaptive task attention network for few-shot learning. In Proceedings of the 26th International Conference on Pattern Recognition. IEEE, 4765–4771.
[227]
Da Chen, Yuefeng Chen, Yuhong Li, Feng Mao, Yuan He, and Hui Xue. 2021. Self-supervised learning for few-shot image classification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1745–1749.
[228]
Xian Zhong, Cheng Gu, Wenxin Huang, Lin Li, Shuqin Chen, and Chia-Wen Lin. 2021. Complementing representation deficiency in few-shot image classification: A meta-learning approach. In Proceedings of the 25th International Conference on Pattern Recognition (ICPR). IEEE, 2677–2684.
[229]
Bowen Wang, Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, and Hajime Nagahara. 2020. Match them up: Visually explainable few-shot image classification. arXiv preprint arXiv:2011.12527 (2020).
[230]
Lyes Khacef, Vincent Gripon, and Benoît Miramond. 2020. GPU-based self-organizing maps for post-labeled few-shot unsupervised learning. In Proceedings of the International Conference on Neural Information Processing. Springer, 404–416.
[231]
Zhiyu Xue, Lixin Duan, Wen Li, Lin Chen, and Jiebo Luo. 2020. Region comparison network for interpretable few-shot image classification. arXiv preprint arXiv:2009.03558 (2020).
[232]
Imtiaz Ziko, Jose Dolz, Eric Granger, and Ismail Ben Ayed. 2020. Laplacian regularized few-shot learning. In Proceedings of the International Conference on Machine Learning. PMLR, 11660–11670.
[233]
Peyman Bateni, Jarred Barber, Jan-Willem van de Meent, and Frank Wood. 2022. Enhancing few-shot image classification with unlabelled examples. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. IEEE, 1597–1606.
[234]
Yuqing Hu, Vincent Gripon, and Stéphane Pateux. 2021. Leveraging the feature distribution in transfer-based few-shot learning. In Artificial Neural Networks and Machine Learning – 30th International Conference on Artificial Neural Networks, Vol. 12892. Springer, 487–499.
[235]
Shell Xu Hu, Pablo Garcia Moreno, Yang Xiao, Xi Shen, Guillaume Obozinski, Neil D. Lawrence, and Andreas C. Damianou. 2020. Empirical Bayes transductive meta-learning with synthetic gradients. In Proceedings of the International Conference on Learning Representations. OpenReview.net. Retrieved from https://openreview.net/forum?id=Hkg-xgrYvH.
[236]
Cuong Nguyen, Thanh-Toan Do, and Gustavo Carneiro. 2020. PAC-Bayesian meta-learning with implicit prior and posterior. arXiv preprint arXiv:2003.02455 (2020).
[237]
Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam Kosiorek, and Yee Whye Teh. 2020. MetaFun: Meta-learning with iterative functional updates. In Proceedings of the International Conference on Machine Learning. PMLR, 10617–10627.
[238]
Jialin Liu, Fei Chao, and Chih-Min Lin. 2020. Task augmentation by rotating for meta-learning. arXiv preprint arXiv:2003.00804 (2020).
[239]
Puneet Mangla, Nupur Kumari, Abhishek Sinha, et al. 2020. Charting the right manifold: Manifold mixup for few-shot learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2218–2227.
[240]
Sung Whan Yoon, Jun Seo, and Jaekyun Moon. 2019. TapNet: Neural network augmented with task-adaptive projection for few-shot learning. In Proceedings of the International Conference on Machine Learning. PMLR, 7115–7123.
[241]
Liang Song, Jinlu Liu, and Yongqiang Qin. 2019. Generalized adaptation for few-shot learning. arXiv preprint arXiv:1911.10807 (2019).
[242]
Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, and Fei Sha. 2020. Few-shot learning via embedding adaptation with set-to-set functions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8808–8817.
[243]
Wenbin Li, Lei Wang, Jinglin Xu, Jing Huo, Yang Gao, and Jiebo Luo. 2019. Revisiting local descriptor based image-to-class measure for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 7260–7268.
[244]
Eunbyung Park and Junier B. Oliva. 2019. Meta-curvature. In Proceedings of the Annual Conference on Neural Information Processing Systems. 3309–3319. Retrieved from https://proceedings.neurips.cc/paper/2019/hash/57c0531e13f40b91b3b0f1a30b529a1d-Abstract.html.
[245]
Ross Girshick. 2015. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision. 1440–1448.
[246]
Xingkui Zhu, Shuchang Lyu, Xu Wang, and Qi Zhao. 2021. TPH-YOLOv5: Improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 2778–2788.
[247]
Xiaosong Zhang, Feng Liu, Zhiliang Peng, Zonghao Guo, Fang Wan, Xiangyang Ji, and Qixiang Ye. 2022. Integral migrating pre-trained transformer encoder-decoders for visual object detection. arXiv preprint arXiv:2205.09613 (2022).
[248]
Bo Sun, Banghuai Li, Shengcai Cai, Ye Yuan, and Chi Zhang. 2021. FSCE: Few-shot object detection via contrastive proposal encoding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 7352–7362.
[249]
Chenchen Zhu, Fangyi Chen, Uzair Ahmed, Zhiqiang Shen, and Marios Savvides. 2021. Semantic relation reasoning for shot-stable few-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8782–8791.
[250]
Yang Xiao and Renaud Marlet. 2020. Few-shot object detection and viewpoint estimation for objects in the wild. In Proceedings of the European Conference on Computer Vision. Springer, 192–210.
[251]
Jiaxi Wu, Songtao Liu, Di Huang, and Yunhong Wang. 2020. Multi-scale positive sample refinement for few-shot object detection. In Proceedings of the European Conference on Computer Vision. Springer, 456–472.
[252]
Xin Wang, Thomas E. Huang, Joseph Gonzalez, Trevor Darrell, and Fisher Yu. 2020. Frustratingly simple few-shot object detection. In Proceedings of the 37th International Conference on Machine Learning. PMLR, 9919–9928. Retrieved from http://proceedings.mlr.press/v119/wang20j.html.
[253]
Xiaopeng Yan, Ziliang Chen, Anni Xu, Xiaoxi Wang, Xiaodan Liang, and Liang Lin. 2019. Meta R-CNN: Towards general solver for instance-level low-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 9577–9586.
[254]
Yu-Xiong Wang, Deva Ramanan, and Martial Hebert. 2019. Meta-learning to detect rare objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 9925–9934.
[255]
Amirreza Shaban, Shray Bansal, Zhen Liu, Irfan Essa, and Byron Boots. 2017. One-shot learning for semantic segmentation. In Proceedings of the British Machine Vision Conference. BMVA Press. Retrieved from https://www.dropbox.com/s/1odhw88t465klsz/0797.pdf?dl=1.
[256]
Zhihe Lu, Sen He, Xiatian Zhu, Li Zhang, Yi-Zhe Song, and Tao Xiang. 2021. Simpler is better: Few-shot semantic segmentation with classifier weight transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 8741–8750.
[257]
Gengwei Zhang, Guoliang Kang, Yi Yang, and Yunchao Wei. 2021. Few-shot segmentation via cycle-consistent transformer. In Proceedings of the Annual Conference on Neural Information Processing Systems. 21984–21996. Retrieved from https://proceedings.neurips.cc/paper/2021/hash/b8b12f949378552c21f28deff8ba8eb6-Abstract.html.
[258]
Juhong Min, Dahyun Kang, and Minsu Cho. 2021. Hypercorrelation squeeze for few-shot segmentation. arXiv preprint arXiv:2104.01538 (2021).
[259]
Boyu Yang, Chang Liu, Bohao Li, Jianbin Jiao, and Qixiang Ye. 2020. Prototype mixture models for few-shot semantic segmentation. In Proceedings of the European Conference on Computer Vision. Springer, 763–778.
[260]
Zhuotao Tian, Hengshuang Zhao, Michelle Shu, Zhicheng Yang, Ruiyu Li, and Jiaya Jia. 2020. Prior guided feature enrichment network for few-shot segmentation. IEEE Trans. Pattern Anal. Mach. Intell. (2020), 1–1.
[261]
Yongfei Liu, Xiangyi Zhang, Songyang Zhang, and Xuming He. 2020. Part-aware prototype network for few-shot semantic segmentation. In Proceedings of the European Conference on Computer Vision. Springer, 142–158.
[262]
Khoi Nguyen and Sinisa Todorovic. 2019. Feature weighting and boosting for few-shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 622–631.
[263]
Chi Zhang, Guosheng Lin, Fayao Liu, Rui Yao, and Chunhua Shen. 2019. CANet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 5217–5226.
[264]
Dan Andrei Ganea, Bas Boom, and Ronald Poppe. 2021. Incremental few-shot instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 1185–1194.
[265]
Khoi Nguyen and Sinisa Todorovic. 2021. FAPIS: A few-shot anchor-free part-based instance segmenter. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 11099–11108.
[266]
Zhibo Fan, Jin-Gang Yu, Zhihao Liang, Jiarong Ou, Changxin Gao, Gui-Song Xia, and Yuanqing Li. 2020. FGN: Fully guided network for few-shot instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9172–9181.
[267]
Xiaolin Zhang, Yunchao Wei, Yi Yang, and Thomas S. Huang. 2020. SG-One: Similarity guidance network for one-shot semantic segmentation. IEEE Trans. Cyber. 50, 9 (2020), 3855–3865.
[268]
Claudio Michaelis, Ivan Ustyuzhaninov, Matthias Bethge, et al. 2018. One-shot instance segmentation. arXiv preprint arXiv:1811.11507 (2018).




      Published In

ACM Computing Surveys, Volume 55, Issue 13s
      December 2023
      1367 pages
      ISSN:0360-0300
      EISSN:1557-7341
      DOI:10.1145/3606252

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 13 July 2023
      Online AM: 04 February 2023
      Accepted: 23 January 2023
      Revised: 16 January 2023
      Received: 08 February 2022
      Published in CSUR Volume 55, Issue 13s


      Author Tags

      1. Few-shot learning
      2. one-shot learning
      3. zero-shot learning
      4. low-shot learning
      5. meta-learning
      6. prior knowledge

      Qualifiers

      • Survey

      Funding Sources

      • National Key R&D Program of China

      Article Metrics

      • Downloads (Last 12 months): 7,236
      • Downloads (Last 6 weeks): 862
      Reflects downloads up to 10 Oct 2024

      Cited By

      • (2025) Hyperbolic prototype rectification for few-shot 3D point cloud classification. Pattern Recognition 158, 111042. DOI: 10.1016/j.patcog.2024.111042. Online publication date: Feb-2025.
      • (2025) Less is more: A closer look at semantic-based few-shot learning. Information Fusion 114, 102672. DOI: 10.1016/j.inffus.2024.102672. Online publication date: Feb-2025.
      • (2025) Semantic Guided prototype learning for Cross-Domain Few-Shot hyperspectral image classification. Expert Systems with Applications 260, 125453. DOI: 10.1016/j.eswa.2024.125453. Online publication date: Jan-2025.
      • (2024) Exploring Few-Shot Learning Approaches for Bioinformatics Advancements. In Applying Machine Learning Techniques to Bioinformatics, 303–316. DOI: 10.4018/979-8-3693-1822-5.ch016. Online publication date: 5-Apr-2024.
      • (2024) Bioinformatics in Agriculture and Ecology Using Few-Shots Learning From Field to Conservation. In Applying Machine Learning Techniques to Bioinformatics, 27–38. DOI: 10.4018/979-8-3693-1822-5.ch002. Online publication date: 5-Apr-2024.
      • (2024) Cognitive diagnostic assessment: A Q-matrix constraint-based neural network method. Behavior Research Methods 56, 7, 6981–7004. DOI: 10.3758/s13428-024-02404-5. Online publication date: 30-Apr-2024.
      • (2024) Enhancing Few-Shot Learning in Lightweight Models via Dual-Faceted Knowledge Distillation. Sensors 24, 6, 1815. DOI: 10.3390/s24061815. Online publication date: 12-Mar-2024.
      • (2024) Optimizing Few-Shot Remote Sensing Scene Classification Based on an Improved Data Augmentation Approach. Remote Sensing 16, 3, 525. DOI: 10.3390/rs16030525. Online publication date: 30-Jan-2024.
      • (2024) OCT Retinopathy Classification via a Semi-Supervised Pseudo-Label Sub-Domain Adaptation and Fine-Tuning Method. Mathematics 12, 2, 347. DOI: 10.3390/math12020347. Online publication date: 21-Jan-2024.
      • (2024) Gastric Cancer Image Classification: A Comparative Analysis and Feature Fusion Strategies. Journal of Imaging 10, 8, 195. DOI: 10.3390/jimaging10080195. Online publication date: 10-Aug-2024.
