Knowledge Transfer in Vision Recognition: A Survey

Published: 17 April 2020

Abstract

In this survey, we explore and discuss the common rules behind knowledge transfer methods for vision recognition tasks. To this end, we first discuss the different kinds of reusable knowledge that exist in a vision recognition task, and then categorize knowledge transfer approaches according to where the knowledge comes from and where it goes. Compared to previous surveys on knowledge transfer, which take a problem-oriented or a technique-oriented perspective, our viewpoint is closer to the nature of knowledge transfer and reveals the common rules behind different transfer learning settings and applications. Beyond these knowledge transfer categories, we also review work that studies the transferability between different vision recognition tasks. Finally, we discuss the works introduced and point out potential research directions in this field.
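One concrete instance of reusable knowledge is model parameters: weights learned on a data-rich source task can initialize a model for a related, data-poor target task. The sketch below is not from the survey; it is a toy NumPy logistic regression illustrating this "warm start" form of parameter transfer, with all names and data synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, lr=0.5, steps=200):
    """Gradient-descent logistic regression; `w` may carry over
    (transferred) weights learned on a related source task."""
    if w is None:
        w = np.zeros(X.shape[1])  # training from scratch
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # logistic-loss gradient step
    return w

# Source task: plenty of labelled data.
Xs = rng.normal(size=(500, 20))
ys = (Xs[:, :5].sum(axis=1) > 0).astype(float)
w_source = train_logreg(Xs, ys)

# Target task: a related decision rule, but only a handful of labels.
Xt = rng.normal(size=(20, 20))
yt = (Xt[:, :5].sum(axis=1) + 0.3 * Xt[:, 5] > 0).astype(float)

w_scratch = train_logreg(Xt, yt, steps=20)                      # no transfer
w_transfer = train_logreg(Xt, yt, w=w_source.copy(), steps=20)  # warm start
```

With few target labels and few optimization steps, the warm-started model begins near a solution for the related source task, which is precisely the kind of parameter-level knowledge reuse the survey categorizes.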


Cited By

  • (2024) Discriminative Noise Robust Sparse Orthogonal Label Regression-Based Domain Adaptation. International Journal of Computer Vision 132:1, 161-184. DOI: 10.1007/s11263-023-01865-z. Online publication date: 1-Jan-2024.
  • (2024) Kernel Extreme Learning Machine with Discriminative Transfer Feature and Instance Selection for Unsupervised Domain Adaptation. Neural Processing Letters 56:4. DOI: 10.1007/s11063-024-11677-y. Online publication date: 13-Aug-2024.
  • (2023) Predicting the success of transfer learning for genetic programming using DeepInsight feature space alignment. AI Communications 36:3, 159-173. DOI: 10.3233/AIC-230104. Online publication date: 21-Aug-2023.


    Published In

ACM Computing Surveys, Volume 53, Issue 2
    March 2021
    848 pages
    ISSN:0360-0300
    EISSN:1557-7341
    DOI:10.1145/3388460
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 17 April 2020
    Accepted: 01 January 2020
    Revised: 01 December 2019
    Received: 01 March 2019
    Published in CSUR Volume 53, Issue 2


    Author Tags

    1. Knowledge transfer
    2. computer vision
    3. machine learning
    4. transfer learning
    5. vision recognition

    Qualifiers

    • Survey
    • Research
    • Refereed

    Funding Sources

    • EU FEDER, Saint-Etienne Metropole and Region Auvergne-Rhone-Alpes fundings
    • PARTNER UNIVERSITY FUND

    Cited By
    • (2024) Discriminative Noise Robust Sparse Orthogonal Label Regression-Based Domain Adaptation. International Journal of Computer Vision 132, 1, 161-184. DOI: 10.1007/s11263-023-01865-z. Online publication date: 1-Jan-2024.
    • (2024) Kernel Extreme Learning Machine with Discriminative Transfer Feature and Instance Selection for Unsupervised Domain Adaptation. Neural Processing Letters 56, 4. DOI: 10.1007/s11063-024-11677-y. Online publication date: 13-Aug-2024.
    • (2023) Predicting the success of transfer learning for genetic programming using DeepInsight feature space alignment. AI Communications 36, 3, 159-173. DOI: 10.3233/AIC-230104. Online publication date: 21-Aug-2023.
    • (2023) Joint Transfer Extreme Learning Machine with Cross-Domain Mean Approximation and Output Weight Alignment. Complexity 2023. DOI: 10.1155/2023/5072247. Online publication date: 1-Jan-2023.
    • (2023) Visual Query Tuning: Towards Effective Usage of Intermediate Representations for Parameter and Memory Efficient Transfer Learning. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 7725-7735. DOI: 10.1109/CVPR52729.2023.00746. Online publication date: Jun-2023.
    • (2023) Using teacher-student neural networks based on knowledge distillation to detect anomalous samples in the otolith images. Zoology 161, 126133. DOI: 10.1016/j.zool.2023.126133. Online publication date: Dec-2023.
    • (2023) DC-DC Buck circuit fault diagnosis with insufficient state data based on deep model and transfer strategy. Expert Systems with Applications 213, 118918. DOI: 10.1016/j.eswa.2022.118918. Online publication date: Mar-2023.
    • (2022) Material measurement units for a circular economy: Foundations through a review. Sustainable Production and Consumption 32, 833-850. DOI: 10.1016/j.spc.2022.05.022. Online publication date: Jul-2022.
    • (2021) OTCE: A Transferability Metric for Cross-Domain Cross-Task Representations. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15774-15783. DOI: 10.1109/CVPR46437.2021.01552. Online publication date: Jun-2021.
    • (2021) Integrating semantic features in fruit recognition based on perceptual color and semantic template. Information Processing in Agriculture. DOI: 10.1016/j.inpa.2021.02.004. Online publication date: Mar-2021.
