Rectification-Based Knowledge Retention for Task Incremental Learning
Published In
Publisher
IEEE Computer Society
United States
Qualifiers
- Research-article