Implementations of various ML tasks on the Kaggle platform with GPUs.
Updated Oct 7, 2024 - Jupyter Notebook
The LLM FineTuning and Evaluation project 🚀 enhances FLAN-T5 models for tasks like summarizing Spanish news articles 🇪🇸📰. It features detailed notebooks 📚 on fine-tuning and evaluating models to optimize performance for specific applications. 🔍✨
A set of Jupyter notebooks.
This notebook contains the code I used to fine-tune a Llama-2 LLM into a Python code generator, trained on a custom-built dataset.
Annotated notebooks to dive into Self-Attention, In-Context Learning, RAG, Knowledge Graphs, Fine-Tuning, Model Optimization, and more.
Fine-tuning open-source large and small instruct/chat language models (LLMs & SLMs) from the Hugging Face Model Hub using public datasets.
This repository contains a notebook showing how to fine-tune OpenAI's Whisper model on a custom Hindi dataset.
A Repo to store the Google Colaboratory Notebooks that I have created and shared
Jupyter notebooks to fine-tune Whisper models on Vietnamese using Colab, Kaggle, and/or AWS EC2.
This project fine-tunes the BERT model on Kaggle's spam-ham dataset to classify messages. It explores transformers and large language models (LLMs) in generative AI, with potential for customization to other sentiment analysis tasks. The repository includes a single Jupyter Notebook for complete code on preprocessing, training, and prediction.
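The repository above fine-tunes BERT; as a hedged point of comparison, the same spam/ham classification task can be sketched as a classical scikit-learn baseline (CountVectorizer + naive Bayes). The tiny inline message list is purely illustrative and is not the Kaggle dataset the repo uses.

```python
# Minimal spam/ham baseline sketch with bag-of-words + naive Bayes.
# This is NOT the repo's BERT pipeline; it shows the same task shape
# (preprocess -> train -> predict) on an illustrative toy dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a FREE prize now, click here",    # spam
    "Congratulations, you won a lottery",  # spam
    "Claim your free reward today",        # spam
    "Are we still meeting for lunch?",     # ham
    "Please review the attached report",   # ham
    "See you at the gym tomorrow",         # ham
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(messages, labels)

print(clf.predict(["free prize, claim now"])[0])  # → spam
```

Swapping in BERT mainly replaces the bag-of-words features with contextual embeddings; the surrounding train/predict loop keeps the same structure.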
notebooks to finetune `bert-small-amharic`, `bert-mini-amharic`, and `xlm-roberta-base` models using an Amharic text classification dataset and the transformers library
Dive into Diverse Topics with Hands-on Python Notebooks
In this notebook, I applied KMeans, Hierarchical, and DBSCAN clustering along with PCA. The dataset used is Mall_Customers. For DBSCAN, heatmaps are used to choose the epsilon and min_samples values, which performed well in identifying the correct number of clusters.
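The combination described above (PCA followed by KMeans and DBSCAN) can be sketched in a few lines of scikit-learn. This uses synthetic blobs rather than the Mall_Customers data, and the `eps`/`min_samples` values here are hand-picked for the synthetic data, not derived from the heatmap procedure the notebook describes.

```python
# Sketch: PCA to 2-D, then KMeans and DBSCAN, on synthetic 5-D blobs.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, DBSCAN

X, _ = make_blobs(n_samples=300, centers=3, n_features=5,
                  cluster_std=0.4, random_state=42)
X2 = PCA(n_components=2).fit_transform(X)  # project to 2 components

km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X2)
db = DBSCAN(eps=0.8, min_samples=5).fit(X2)

# DBSCAN labels noise as -1, so exclude it when counting clusters.
n_db_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
print(len(set(km.labels_)), n_db_clusters)
```

Unlike KMeans, DBSCAN infers the number of clusters from density, which is why tuning `eps` and `min_samples` (e.g. via the heatmaps mentioned above) matters.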
Colab notebook where I replicate the Vision Transformer research paper from scratch in PyTorch on 'AI generated vs Real images dataset' and also fine-tune a pre-trained ViT-B/16 model on the same dataset to compare its performance
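The first step of replicating ViT from scratch is turning an image into a sequence of flattened patches. A toy NumPy sketch of that patchify step is below; the 32×32 image size is shrunk for illustration (ViT-B/16 uses 224×224 inputs with the same 16×16 patch size), and this is not the notebook's actual code.

```python
# Sketch: split a (C, H, W) image into flattened 16x16 patches,
# the first step of a Vision Transformer. Toy 32x32 image for brevity.
import numpy as np

img = np.arange(3 * 32 * 32, dtype=np.float32).reshape(3, 32, 32)  # C, H, W
patch = 16

C, H, W = img.shape
# Reshape to (C, H/p, p, W/p, p), move patch-grid axes first, then
# flatten each patch to a row: result is (num_patches, C*p*p).
patches = (img.reshape(C, H // patch, patch, W // patch, patch)
              .transpose(1, 3, 0, 2, 4)
              .reshape(-1, C * patch * patch))
print(patches.shape)  # → (4, 768): 4 patches of 3*16*16 = 768 values each
```

In the full model, each 768-dimensional patch row is then linearly projected to the transformer's embedding dimension and prepended with a class token.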
Colab notebook for fine-tuning Microsoft's Phi-2 LLM (2.7B parameters) to solve mathematical word problems using QLoRA.
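The low-rank adapter update at the heart of (Q)LoRA can be illustrated without any LLM libraries: instead of updating a full weight matrix W, train a rank-r product B·A alongside the frozen W. The NumPy sketch below is a toy illustration of that idea, not the notebook's PEFT/bitsandbytes code, and the dimensions are made up for the example.

```python
# Toy LoRA illustration: a frozen weight W plus a trainable low-rank
# update B @ A of rank r, so only r*(d_in + d_out) params are trained.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, init 0

x = rng.standard_normal(d_in)

# Forward pass: base output plus scaled low-rank correction.
alpha = 8.0
y = W @ x + (alpha / r) * (B @ (A @ x))

full_params = d_in * d_out          # 4096 if W were trained directly
lora_params = r * (d_in + d_out)    # 512 trainable adapter params
print(lora_params, full_params)
```

Because B starts at zero, the adapted model initially matches the base model exactly; QLoRA additionally quantizes the frozen W to 4 bits to fit large models in limited GPU memory.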
Pre-Training and Fine-Tuning transformer models using PyTorch and the Hugging Face Transformers library. Whether you're delving into pre-training with custom datasets or fine-tuning for specific classification tasks, these notebooks offer explanations and code for implementation.
Notebooks of pre-trained models using the HAM10000 dataset.
This repository offers practical notebook examples for fine-tuning large language models (LLMs) at no cost, using Google Colab's free GPU resources. Consider starring it if helpful. Feel free to contribute or suggest improvements.
This notebook contains the code for fine-tuning the OpenAI GPT-3.5-Turbo API to specialize in answering Algerian BAC students' questions within the Algerian BAC context.
An end-to-end CNN image-classification notebook that identifies the food in your image. In this project I worked with the pre-trained EfficientNetB1 classification model and the Food101 dataset.