Apr 12, 2022 · In this paper, we show that, while the generated data are usually not able to improve the classification accuracy for the old classes, they can be effective as ...
The proposed approach, denoted as Generative Negative Replay, does not attempt to improve the knowledge of old classes using the generated data because it ...
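To make the negative-replay idea concrete, here is a minimal PyTorch sketch under stated assumptions: generated samples are not trusted as labeled old-class data; they only contribute a penalty that pushes probability mass away from the new classes. The `generator` stand-in, the joint `classifier` head, and the `neg_weight` term are illustrative choices, not the paper's exact loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_OLD, NUM_NEW, FEAT = 10, 5, 32                # hypothetical problem sizes

classifier = nn.Linear(FEAT, NUM_OLD + NUM_NEW)   # joint head over old + new classes
generator = lambda n: torch.randn(n, FEAT)        # stand-in for a trained generator

def negative_replay_loss(x_new, y_new, n_replay=16, neg_weight=0.5):
    """Cross-entropy on real new-task data, plus a term that discourages
    new-class predictions on generated (old-like) samples, instead of
    trusting their unreliable old-class labels."""
    ce = F.cross_entropy(classifier(x_new), y_new)
    x_gen = generator(n_replay)                    # replayed pseudo-samples
    probs = classifier(x_gen).softmax(dim=1)
    p_new = probs[:, NUM_OLD:].sum(dim=1)          # mass assigned to new classes
    neg = -torch.log1p(-p_new.clamp(max=1 - 1e-6)).mean()
    return ce + neg_weight * neg

x = torch.randn(8, FEAT)
y = torch.randint(NUM_OLD, NUM_OLD + NUM_NEW, (8,))   # new-task labels
negative_replay_loss(x, y).backward()
```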
Oct 31, 2022 · In the multi-modal continual learning setting, we first introduce pseudo text replay that generates hard negative texts conditioned on the training images in memory, which not only better preserves learned ...
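A minimal sketch of how such generated hard negatives could enter a contrastive image-text loss, assuming the pseudo texts have already been generated and embedded: each image scores its true caption against all batch captions plus one extra hard negative. The shapes, the temperature, and the `hard_neg_emb` input are assumptions; the pseudo-text generation step itself is not shown.

```python
import torch
import torch.nn.functional as F

def contrastive_with_hard_negatives(img_emb, txt_emb, hard_neg_emb, temp=0.07):
    """InfoNCE over a batch of image-text pairs, with one generated
    hard-negative text per image appended to the candidate set.
    All inputs are (B, D) and assumed L2-normalized."""
    B = img_emb.size(0)
    candidates = torch.cat([txt_emb, hard_neg_emb], dim=0)   # (2B, D)
    logits = img_emb @ candidates.t() / temp                 # (B, 2B)
    targets = torch.arange(B)                                # index of true caption
    return F.cross_entropy(logits, targets)

B, D = 4, 16
img = F.normalize(torch.randn(B, D), dim=1)
txt = F.normalize(torch.randn(B, D), dim=1)
neg = F.normalize(torch.randn(B, D), dim=1)   # stand-in for generated hard negatives
print(contrastive_with_hard_negatives(img, txt, neg))
```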
One of the most effective strategies to control catastrophic forgetting, the Achilles' heel of continual learning, is storing part of the old data and replaying ...
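As a concrete instance of that strategy, a fixed-size reservoir-sampling buffer is a common way to store part of the old data under a constant memory budget; the sketch below is generic and not tied to any particular paper.

```python
import random

class ReservoirBuffer:
    """Fixed-size memory of past examples via reservoir sampling, so every
    example seen so far has equal probability of being retained."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.n_seen)   # replace with prob capacity/n_seen
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

buf = ReservoirBuffer(capacity=100)
for i in range(1000):
    buf.add((f"x_{i}", i % 10))     # (input, label) placeholders
replay_batch = buf.sample(8)        # mixed into each new-task batch
```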
Ayub, A., & Wagner, A. R. (2021). EEC: Learning to Encode and Regenerate Images for Continual Learning. In International conference on learning representations.
Oct 23, 2022 · In this work, we focus on learning a VLP (vision-language pre-training) model with sequential chunks of image-text pair data. ... Moreover, we propose multi-modal knowledge distillation between ...
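One plausible form of multi-modal knowledge distillation is matching the new model's image-to-text similarity distribution to that of the frozen previous model. The sketch below assumes L2-normalized embeddings and a temperature; it is not necessarily the exact objective proposed in that work.

```python
import torch
import torch.nn.functional as F

def cross_modal_distill(img_s, txt_s, img_t, txt_t, temp=2.0):
    """KL divergence between the student's and the frozen teacher's
    image-to-text similarity distributions over a batch."""
    sim_s = img_s @ txt_s.t() / temp           # (B, B) student similarities
    sim_t = img_t @ txt_t.t() / temp           # (B, B) teacher similarities
    return F.kl_div(sim_s.log_softmax(dim=1),
                    sim_t.softmax(dim=1),
                    reduction="batchmean") * temp ** 2

B, D = 4, 16
s_i = F.normalize(torch.randn(B, D), dim=1)    # student image embeddings
s_t = F.normalize(torch.randn(B, D), dim=1)    # student text embeddings
with torch.no_grad():                          # teacher (old model) is frozen
    t_i = F.normalize(torch.randn(B, D), dim=1)
    t_t = F.normalize(torch.randn(B, D), dim=1)
print(cross_modal_distill(s_i, s_t, t_i, t_t))
```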
Generative replay (GR), which typically consists of a generator and a classifier, is an efficient way to mitigate catastrophic forgetting. However, ...
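A minimal sketch of that generator-classifier loop, assuming frozen copies of both components from the previous task supply pseudo-samples and pseudo-labels; full generative replay would also retrain the generator on the mixed data, which is omitted here for brevity.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT, NUM_CLASSES = 32, 10

# Stand-ins for the two components of generative replay.
classifier = nn.Linear(FEAT, NUM_CLASSES)
generator = nn.Linear(8, FEAT)        # maps noise -> pseudo-samples

def train_task(new_x, new_y, steps=10, n_replay=16):
    """One task of generative replay: frozen copies of the previous
    generator/classifier provide pseudo-data and pseudo-labels that are
    interleaved with the current task's real data."""
    old_gen = copy.deepcopy(generator).eval()
    old_clf = copy.deepcopy(classifier).eval()
    opt = torch.optim.SGD(classifier.parameters(), lr=0.1)
    for _ in range(steps):
        with torch.no_grad():
            z = torch.randn(n_replay, 8)
            x_rep = old_gen(z)                       # replayed pseudo-samples
            y_rep = old_clf(x_rep).argmax(dim=1)     # pseudo-labels
        x = torch.cat([new_x, x_rep])
        y = torch.cat([new_y, y_rep])
        loss = F.cross_entropy(classifier(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

train_task(torch.randn(8, FEAT), torch.randint(NUM_CLASSES, (8,)))
```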