GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
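GPT4All also ships Python bindings for local inference. A minimal sketch (the model file name is an example; GPT4All fetches it on first use):

```python
from gpt4all import GPT4All

# The model file name is an example; GPT4All downloads it on first use.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
with model.chat_session():
    print(model.generate("Why run LLMs locally?", max_tokens=200))
```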
A programming framework for agentic AI 🤖
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Run any open-source LLM, such as Llama 3.1 or Gemma, as an OpenAI-compatible API endpoint in the cloud.
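Because the endpoint speaks the OpenAI protocol, the standard `openai` client can talk to it. A minimal sketch, where the base URL and model name are placeholders for your own deployment:

```python
from openai import OpenAI

# Point the standard OpenAI client at a self-hosted, OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```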
Official inference library for Mistral models
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
The easiest way to serve AI apps and models - build reliable inference APIs, LLM apps, multi-model chains, RAG services, and much more!
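For flavor, a toy sketch in the decorator-based style of recent BentoML releases (the service body is a stand-in, not a real model):

```python
import bentoml

# A toy service; a real deployment would load a model in __init__.
@bentoml.service
class Echo:
    @bentoml.api
    def generate(self, prompt: str) -> str:
        return prompt.upper()  # stand-in for actual model inference
```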
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
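A minimal sketch of OpenVINO's Python inference flow; the model path, target device, and input shape are placeholders:

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # placeholder path to an IR model
compiled = core.compile_model(model, "CPU")  # or "GPU", "AUTO", ...
output = compiled(np.zeros((1, 3, 224, 224), dtype=np.float32))  # dummy input
```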
Superduper: Integrate AI models and machine learning workflows with your database to implement custom AI applications without moving your data, including streaming inference, scalable model hosting, training, and vector search.
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
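A minimal sketch of LMDeploy's high-level pipeline API, assuming an example model id; the response field access follows the project's usual pattern:

```python
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2-chat-7b")   # example model id
responses = pipe(["What is PagedAttention?"])
print(responses[0].text)
```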
Sparsity-aware deep learning inference runtime for CPUs
📖 A curated list of awesome LLM inference papers with code: TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, continuous batching, FlashAttention, PagedAttention, etc.
Code examples and resources for DBRX, a large language model developed by Databricks
Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads
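The core idea: extra decoding heads draft several future tokens, and the base model verifies them in a single forward pass, accepting the longest agreeing prefix. A toy, framework-free sketch of the accept step (all names are illustrative, not the Medusa API):

```python
def accept_longest_prefix(candidate_tokens, verified_tokens):
    """Accept draft tokens only while they agree with the verifier.

    candidate_tokens: tokens proposed by the extra decoding heads.
    verified_tokens:  tokens the base model would emit at each position,
                      recovered from a single verification forward pass.
    """
    accepted = []
    for cand, truth in zip(candidate_tokens, verified_tokens):
        if cand != truth:
            break
        accepted.append(cand)
    return accepted

# Three of four draft tokens match, so one model pass yields three tokens.
print(accept_longest_prefix([11, 42, 7, 99], [11, 42, 7, 13]))  # [11, 42, 7]
```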
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
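The key trick is selecting the adapter per request, so thousands of fine-tunes share one copy of the base weights. A hypothetical request sketch; the URL and JSON field names are illustrative, not a documented API:

```python
import requests

# Hypothetical multi-LoRA request: the adapter id rides along with the
# prompt, and the server hot-swaps LoRA weights over the shared base model.
resp = requests.post(
    "http://localhost:8080/generate",  # placeholder URL
    json={
        "inputs": "Summarize the ticket below:\n...",
        "parameters": {"max_new_tokens": 64, "adapter_id": "acme/support-lora"},
    },
    timeout=60,
)
print(resp.json())
```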
⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel platforms ⚡
Run any Llama 2 model locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local Llama 2 backend for Generative Agents/Apps.
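A minimal sketch of the backend usage; the `LLAMA2_WRAPPER` class name follows the project, but the constructor arguments (model path, backend type) are omitted here and are an assumption - see the project's README for the exact options:

```python
from llama2_wrapper import LLAMA2_WRAPPER  # class name as used by the project

# Constructor arguments (model path, backend) are an assumption; by default
# the wrapper is expected to load a local Llama 2 model.
llama2 = LLAMA2_WRAPPER()
print(llama2("What is LLM inference?"))
```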
AICI: Prompts as (Wasm) Programs