Multitask learning
Publisher:
  • Carnegie Mellon University
  • Schenley Park Pittsburgh, PA
  • United States
ISBN:978-0-591-75271-7
Order Number:AAI9823282
Pages: 256
Abstract

Multitask Learning is an approach to inductive transfer that improves learning for one task by using the information contained in the training signals of other related tasks. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. In this thesis we demonstrate multitask learning for a dozen problems. We explain how multitask learning works and show that there are many opportunities for multitask learning in real domains. We show that in some cases features that would normally be used as inputs work better if used as multitask outputs instead. We present suggestions for how to get the most out of multitask learning in artificial neural nets, present an algorithm for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Multitask learning improves generalization performance, can be applied in many different kinds of domains, and can be used with different learning algorithms. We conjecture there will be many opportunities for its use on real-world problems.
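The shared-representation idea described in the abstract can be sketched as a tiny neural net in which two related tasks backpropagate through one shared hidden layer, so each task's training signal shapes the representation the other task uses. This is an illustrative sketch only, not the thesis's implementation: the layer sizes, learning rate, and synthetic tasks below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two related regression tasks over the same inputs: their target
# functions share two of four input weights (made-up example data).
X = rng.normal(size=(200, 4))
y1 = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=200)
y2 = X @ np.array([1.0, -2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=200)

H = 8                                            # shared hidden units
W_shared = rng.normal(scale=0.1, size=(4, H))    # shared representation
w1 = rng.normal(scale=0.1, size=H)               # task-1 output head
w2 = rng.normal(scale=0.1, size=H)               # task-2 output head
lr = 0.02

for _ in range(1000):
    Z = np.tanh(X @ W_shared)        # shared hidden activations
    e1 = Z @ w1 - y1                 # per-task prediction errors
    e2 = Z @ w2 - y2
    # Backprop: BOTH tasks' errors flow into the shared weights,
    # which is what lets one task's signal help the other.
    dZ = (np.outer(e1, w1) + np.outer(e2, w2)) * (1 - Z**2)
    W_shared -= lr * (X.T @ dZ) / len(X)
    w1 -= lr * (Z.T @ e1) / len(X)   # each head is updated only
    w2 -= lr * (Z.T @ e2) / len(X)   # by its own task's error

mse1 = float(np.mean((np.tanh(X @ W_shared) @ w1 - y1) ** 2))
mse2 = float(np.mean((np.tanh(X @ W_shared) @ w2 - y2) ** 2))
```

Training a single net per task would update a private hidden layer from one error signal only; here the shared layer receives gradients from both tasks, the mechanism the abstract refers to as learning tasks in parallel with a shared representation.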

Cited By

  1. Segev N, Harel M, Mannor S, Crammer K and El-Yaniv R (2017). Learn on Source, Refine on Target: A Model Transfer Learning Framework with Random Forests, IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:9, (1811-1824), Online publication date: 1-Sep-2017.
  2. Bahrampour S, Nasrabadi N, Ray A and Jenkins W (2016). Multimodal Task-Driven Dictionary Learning for Image Classification, IEEE Transactions on Image Processing, 25:1, (24-38), Online publication date: 1-Jan-2016.
  3. Wang X, Zheng W, Li X and Zhang J (2016). Cross-Scenario Transfer Person Reidentification, IEEE Transactions on Circuits and Systems for Video Technology, 26:8, (1447-1460), Online publication date: 1-Aug-2016.
  4. Bueno-Crespo A, Sánchez-García A and Sancho-Gómez J (2012). Improving learning by using artificial hints, Neurocomputing, 79, (18-25), Online publication date: 1-Mar-2012.
  5. Menke J and Martinez T (2009). Artificial neural network reduction through oracle learning, Intelligent Data Analysis, 13:1, (135-149), Online publication date: 1-Jan-2009.
  6. García-Laencina P, Serrano J, Figueiras-Vidal A and Sancho-Gómez J Multi-task Neural Networks for Dealing with Missing Inputs Proceedings of the 2nd international work-conference on The Interplay Between Natural and Artificial Computation, Part I: Bio-inspired Modeling of Cognitive Tasks, (282-291)
  7. Heckerman D, Kadie C and Listgarten J Leveraging information across HLA alleles/supertypes improves epitope prediction Proceedings of the 10th annual international conference on Research in Computational Molecular Biology, (296-308)
  8. Eaton E Multi-resolution learning for knowledge transfer Proceedings of the 21st National Conference on Artificial Intelligence - Volume 2, (1908-1909)
  9. Caruana R (1997). Multitask Learning, Machine Learning, 28:1, (41-75), Online publication date: 1-Jul-1997.
  10. Yu Q, Liu P, Wu Z, Ang S, Meng H and Cai L Learning cross-lingual information with multilingual BLSTM for speech synthesis of low-resource languages 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (5545-5549)
Contributors
  • Microsoft Research
  • Carnegie Mellon University
