Cited By
Ding K., Li X., Yu Q., Wang Y., Zhang H., Xiang S. (2024). Compositional Kronecker Context Optimization for vision–language models. Neurocomputing, 608(C). https://doi.org/10.1016/j.neucom.2024.128421. Online publication date: 1 Dec 2024.