Community Articles
Performance Comparison: Llama-3.2 vs. Llama-3.1 LLMs and Smaller Models (3B, 1B) in Medical and Healthcare AI Domains 🩺🧬💊
Building a Custom Arabic Semantic Search Model with Arabic Matryoshka Embeddings for RAG Using Sentence Transformers
Evaluations with Chat Formats
🌟 Easy Fine-Tuning with Hugging Face SQL Console, Notebook Creator, and SFT
Does Daily Software Engineering Work Need Reasoning Models?
Document Similarity Search with ColPali
Making the spectrum of ‘openness’ in AI more visible
Recreating o1 at Home with Role-Play LLMs
Self Generative Systems (SGS) and Its Integration with AI Models
This Title Is Already Tokenized (Tokun P.2)
Fine-tuning Parler TTS on a Specific Language
"Diffusers Image Fill" guide
All LLMs Write Great Code, But Some Make (A Lot) Fewer Mistakes
Training Flux Locally on Mac
Improving performance with Arena Learning in post training
Fine Tuning a LLM Using Kubernetes with Intel® Gaudi® Accelerator
Introducing AISAK-O
Full Training Tutorial and Guide and Research For a FLUX Style
Fine-tuning a token classification model for legal data using Argilla and AutoTrain
Llama-3.1 8B Carrot - Capx AI