Jupyter notebooks to fine-tune Whisper models on Vietnamese using Colab, Kaggle, and/or AWS EC2
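For orientation, the sketch below shows the usual Hugging Face recipe such notebooks follow for fine-tuning Whisper on Vietnamese speech. The checkpoint (`openai/whisper-small`), the Common Voice `vi` split, and the hyperparameters are illustrative assumptions, not the exact settings of any one repository.

```python
# Minimal Whisper fine-tuning sketch; checkpoint and dataset are stand-ins.
import torch
from datasets import Audio, load_dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="vi", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Resample to the 16 kHz Whisper expects, then build (log-mel, token) pairs.
ds = load_dataset("mozilla-foundation/common_voice_11_0", "vi", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)

def collate(features):
    # Pad audio features and labels separately; mask label padding with -100
    # so padded positions are ignored by the loss.
    batch = processor.feature_extractor.pad(
        [{"input_features": f["input_features"]} for f in features],
        return_tensors="pt",
    )
    labels = processor.tokenizer.pad(
        [{"input_ids": f["labels"]} for f in features], return_tensors="pt"
    )
    batch["labels"] = labels["input_ids"].masked_fill(
        labels["attention_mask"].ne(1), -100
    )
    return batch

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="whisper-small-vi",
        per_device_train_batch_size=8,
        learning_rate=1e-5,
        max_steps=1000,
        fp16=torch.cuda.is_available(),
    ),
    train_dataset=ds,
    data_collator=collate,
)
trainer.train()
```

The same script runs on Colab, Kaggle, or an EC2 GPU instance; only the environment setup differs.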
Fine-tuning open-source large and small instruct/chat language models (LLMs & SLMs) from the Hugging Face Model Hub using public datasets.
Comparison of LoRa@FIIT algorithms using Jupyter notebooks
Paperspace notebook for Kohya GUI LoRA training
This repository provides a Jupyter notebook demonstrating parameter-efficient fine-tuning (PEFT) with LoRA on Hugging Face models.
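A minimal sketch of what PEFT-with-LoRA looks like in code, in the spirit of the notebook described above; the base checkpoint (`gpt2`), rank, and target modules here are illustrative assumptions rather than the repository's exact configuration.

```python
# Attach LoRA adapters to a Hugging Face model via the PEFT library.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM from the Hub works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's attention projection; model-specific
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the small adapter matrices receive gradients, the wrapped model can then be passed to a standard `Trainer` loop with a fraction of the memory a full fine-tune would need.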
In this project, I provide code and a Colaboratory notebook that facilitate fine-tuning of a 3B-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA via Hugging Face's PEFT library to run with fewer computational resources and trainable parameters.
In this project, I provide code and a Colaboratory notebook that facilitate fine-tuning of a 350M-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA via Hugging Face's PEFT library to run with fewer computational resources and trainable parameters.
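A hedged sketch of the low-resource setup these Alpaca notebooks describe: load the base model quantized to 8-bit, then attach LoRA adapters so only a small set of weights is trained. The checkpoint name and LoRA targets are assumptions for illustration, not the repositories' exact choices.

```python
# 8-bit base model + LoRA adapters: the standard memory-saving combination.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b",  # hypothetical 3B base checkpoint
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",                # requires the accelerate package
)
model = prepare_model_for_kbit_training(model)  # cast norms, enable grads
model = get_peft_model(
    model,
    LoraConfig(
        task_type="CAUSAL_LM",
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # LLaMA-style attention projections
    ),
)
model.print_trainable_parameters()
```

This is what makes a 3B-parameter fine-tune feasible on a single free-tier Colab GPU: the frozen base weights sit in 8-bit precision while only the adapter matrices are updated.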
This repository contains experiments on fine-tuning LLMs (Llama, Llama 3.1, Gemma), with notebooks for model tuning, data preprocessing, and hyperparameter optimization to enhance model performance.