Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning - neelsjain/NEFTune.
Jun 23, 2024 · If my reading is right, NEFTune is being applied twice, in both trl and transformers. trl/trl/trainer/sft_trainer.py Line 440 in 39a7d1c ...
Introduced the NEFTune method. · Training Kosy-platypus. · Training Kosy-Orca-Platypus. · Users can adjust the noisy_alpha via the config (parser).
Oct 13, 2023 · I did this for a very simple example, and I think it should work for LoRA. If you want to try it, all you need to do is specify neftune_alpha in finetune/lora.py.
This paper introduces NEFTune, a simple yet effective augmentation technique that improves the finetuning process of language models by adding noise to the ...
Oct 13, 2023 · NEFT is a simple trick where noise is applied to the embeddings during instruction tuning, which improves performance significantly.
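For concreteness, here is a minimal PyTorch sketch of that trick; the helper name and its arguments are hypothetical illustrations, not the official NEFTune code:

    import torch

    def neft_embed(embedding_layer, input_ids, neftune_alpha=5.0, training=True):
        # Look up token embeddings: shape (batch, seq_len, dim).
        embeds = embedding_layer(input_ids)
        if training:
            seq_len, dim = embeds.shape[1], embeds.shape[2]
            # Uniform noise in [-1, 1], scaled by alpha / sqrt(seq_len * dim).
            scale = neftune_alpha / (seq_len * dim) ** 0.5
            noise = torch.empty_like(embeds).uniform_(-1.0, 1.0) * scale
            embeds = embeds + noise
        return embeds

At inference time the noise is simply dropped (training=False), so only the finetuning pass is augmented.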
Oct 18, 2023 · The TRL library introduced support for NEFTune in this PR. Motivation: improves supervised fine-tuning performance. See paper: https://arxiv.org/abs/2310.05914
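A minimal usage sketch, assuming a TRL release from around the time of that PR in which SFTTrainer still accepted neftune_noise_alpha, dataset_text_field, and max_seq_length directly; the model and dataset here are placeholders:

    from datasets import load_dataset
    from trl import SFTTrainer

    train_dataset = load_dataset("imdb", split="train")

    trainer = SFTTrainer(
        model="facebook/opt-350m",
        train_dataset=train_dataset,
        dataset_text_field="text",
        max_seq_length=512,
        neftune_noise_alpha=5,  # enable NEFTune noise on the embeddings during training
    )
    trainer.train()

In newer TRL versions these options have moved onto the trainer's config object, so check the installed version's documentation for the exact argument placement.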
Oct 9, 2023 · We show that language model finetuning can be improved, sometimes dramatically, with a simple augmentation. NEFTune adds noise to the embedding vectors during ...
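The noise referred to here is sampled uniformly per embedding entry and scaled by the sequence length L and embedding dimension d; written out (a reconstruction of the paper's scaling rule, not a quoted equation):

    \[
    X_{\text{emb}}' \;=\; X_{\text{emb}} + \frac{\alpha}{\sqrt{L\,d}}\,\epsilon,
    \qquad \epsilon \sim \mathrm{Uniform}(-1, 1)
    \]

Here \(\alpha\) is the tunable noise strength (the neftune_alpha / neftune_noise_alpha parameter mentioned in the snippets above).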