Perplexica is an AI-powered answering engine.
High-performance, lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover, and unified model discovery across local and remote inference backends.
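To make "automatic failover" concrete, here is a minimal sketch in Python, assuming two hypothetical OpenAI-compatible backends; the URLs, ordering, and error policy are illustrative, not this project's actual configuration.

```python
import httpx

# Hypothetical backend list, ordered by priority; URLs are illustrative.
BACKENDS = [
    "http://gpu-box:8000/v1/chat/completions",     # primary (e.g. vLLM)
    "http://localhost:11434/v1/chat/completions",  # fallback (e.g. Ollama)
]

def chat(payload: dict) -> dict:
    """Try each backend in order; fall back on connection errors or 5xx."""
    last_error: Exception | None = None
    for url in BACKENDS:
        try:
            resp = httpx.post(url, json=payload, timeout=30.0)
            if resp.status_code < 500:
                return resp.json()
            last_error = RuntimeError(f"{url} returned {resp.status_code}")
        except httpx.RequestError as exc:
            last_error = exc  # backend unreachable; try the next one
    raise RuntimeError("all backends failed") from last_error
```

A real proxy layers health checks and weighted routing on top of this loop, but the fallback-on-error core is the same.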
On-device AI engine for transcription, TTS, and voice workflows.
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
Free, open-source alternative to Weavy AI, Krea Nodes, Freepik Spaces, and FloraFauna AI: a node-based AI workflow builder for generative image and video pipelines.
SmarterRouter: An intelligent LLM gateway and VRAM-aware router for Ollama, llama.cpp, and OpenAI. Features semantic caching, model profiling, and automatic failover for local AI labs.
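Semantic caching, sketched below: store answers keyed by prompt embedding and reuse them when a new prompt lands close enough in cosine similarity. The encoder model and threshold are assumptions for illustration, not SmarterRouter's actual values.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative embedding model and similarity threshold (assumptions).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
THRESHOLD = 0.92

cache: list[tuple[np.ndarray, str]] = []  # (prompt embedding, cached answer)

def lookup(prompt: str) -> str | None:
    """Return a cached answer if a semantically similar prompt was seen."""
    q = encoder.encode(prompt, normalize_embeddings=True)
    for emb, answer in cache:
        if float(np.dot(q, emb)) >= THRESHOLD:  # cosine sim on unit vectors
            return answer
    return None

def store(prompt: str, answer: str) -> None:
    cache.append((encoder.encode(prompt, normalize_embeddings=True), answer))
```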
Emotional AI companions for personal relationships.
Recallium is a local, self-hosted universal AI memory system providing a persistent knowledge layer for developer tools (Copilot, Cursor, Claude Desktop). It eliminates "AI amnesia" by automatically capturing, clustering, and surfacing decisions and patterns across all projects. It uses MCP for universal compatibility and ensures privacy by keeping all data local.
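As a toy illustration of exposing memory tools over MCP, here is a sketch using the official MCP Python SDK's FastMCP helper; the tool names and matching logic are hypothetical and far simpler than Recallium's.

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical memory tools exposed over MCP; not Recallium's actual API.
mcp = FastMCP("memory")
notes: list[str] = []

@mcp.tool()
def remember(note: str) -> str:
    """Persist a decision or pattern for later recall."""
    notes.append(note)
    return f"stored ({len(notes)} total)"

@mcp.tool()
def recall(query: str) -> list[str]:
    """Return stored notes containing the query string (toy matching)."""
    return [n for n in notes if query.lower() in n.lower()]

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```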
Run IBM Granite 4.0 locally on Raspberry Pi 5 with Ollama. This is privacy-first AI: your data never leaves your device because everything runs 100% locally, with no cloud uploads and no third-party tracking.
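A setup like this is queried through Ollama's local REST API, so requests never leave the Pi. A minimal non-streaming call follows; the Granite model tag is an assumption, so check `ollama list` for the tag you actually pulled.

```python
import requests

# Ollama listens on localhost; the model tag below is an assumption.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "granite4", "prompt": "Summarize what RAG is.", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```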
A private, local RAG (Retrieval-Augmented Generation) system using Flowise, Ollama, and open-source LLMs to chat with your documents securely and offline.
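Once a chatflow is built in Flowise, applications query it over the local prediction endpoint; a sketch below, where the chatflow ID and question are placeholders.

```python
import requests

CHATFLOW_ID = "YOUR-CHATFLOW-ID"  # placeholder; copy it from the Flowise UI

# Flowise serves on port 3000 by default; everything stays on this machine.
resp = requests.post(
    f"http://localhost:3000/api/v1/prediction/{CHATFLOW_ID}",
    json={"question": "What does my handbook say about leave policy?"},
    timeout=120,
)
print(resp.json())
```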
AI SMS Auto-Responder for Android. Turn your Android device into an autonomous AI communication hub. A Python-based SMS auto-responder running natively on Termux, powered by LLMs (OpenRouter/Ollama) with a sleek Web & Terminal UI.
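The core loop of such a responder reduces to two Termux:API calls, sketched here assuming the Termux:API add-on is installed; the LLM reply logic is omitted.

```python
import json
import subprocess

def unread_messages(limit: int = 10) -> list[dict]:
    """Fetch recent inbox messages via the Termux:API CLI (returns JSON)."""
    out = subprocess.run(
        ["termux-sms-list", "-l", str(limit)],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def send_reply(number: str, text: str) -> None:
    """Send an SMS through the device's radio via Termux:API."""
    subprocess.run(["termux-sms-send", "-n", number, text], check=True)
```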
Production-ready guide for connecting OpenClaw to a Telegram Bot. Build a self-hosted Telegram AI Agent using OpenClaw Gateway, pairing, and streaming responses.
Self-hosted AI chat interface with RAG, long-term memory, and admin controls. Works with TabbyAPI, Ollama, vLLM, and any OpenAI-compatible API.
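Because TabbyAPI, Ollama, and vLLM all expose the OpenAI-compatible `/v1` surface, one client works against any of them. A sketch pointing the standard `openai` client at a local Ollama server; the model name is an assumption.

```python
from openai import OpenAI

# Point the standard OpenAI client at any compatible local server
# (Ollama shown; vLLM and TabbyAPI expose the same /v1 surface).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="llama3.2",  # assumption: whatever model the server has loaded
    messages=[{"role": "user", "content": "Hello from a self-hosted chat UI"}],
)
print(reply.choices[0].message.content)
```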
Production-ready test-time compute optimization framework for LLM inference. Implements Best-of-N, Sequential Revision, and Beam Search strategies. Validated with models up to 7B parameters.
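Best-of-N in outline: sample N candidates, score each with a verifier or reward model, and return the argmax. A runnable skeleton follows, with placeholder `generate` and `score` functions standing in for the model and reward calls.

```python
import random

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Placeholder for a sampled LLM completion (assumption)."""
    return f"candidate-{random.random():.3f}"

def score(prompt: str, completion: str) -> float:
    """Placeholder verifier / reward model (assumption)."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Sample n completions and keep the one the scorer likes best."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

print(best_of_n("Prove that 17 is prime."))
```

Sequential Revision and Beam Search replace the independent sampling step with iterative refinement or pruned partial generations, but share the same sample-score-select shape.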
Kernel-first, lightweight OSS for self-hosted multi-turn AI chat in Docker, with transparent runtime logic, baseline performance data, and split-licensed extensibility.
Powers the local RAG pipeline in the BrainDrive Chat w/ Docs plugin.
🌳 Open-source RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval): a complete implementation running on 100% local models (Granite Code 8B + mxbai-embed-large).
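The RAPTOR construction loop in miniature: embed the chunks, cluster them, summarize each cluster with an LLM, and recurse until a single root remains. The `embed` and `summarize` functions below are placeholders for the local models named above, and the clustering is simplified to k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: e.g. mxbai-embed-large via Ollama (assumption)."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 8))

def summarize(texts: list[str]) -> str:
    """Placeholder: an LLM summarization call, e.g. Granite Code 8B."""
    return " / ".join(t[:20] for t in texts)

def build_tree(chunks: list[str], branch: int = 4) -> list[list[str]]:
    """Cluster each level, summarize clusters into parents, recurse to a root."""
    levels = [chunks]
    while len(levels[-1]) > 1:
        nodes = levels[-1]
        k = max(1, len(nodes) // branch)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(embed(nodes))
        parents = [summarize([n for n, l in zip(nodes, labels) if l == c])
                   for c in range(k)]
        levels.append(parents)
    return levels  # leaf chunks at level 0, root summary at the top
```

Retrieval then searches across all levels at once, so broad questions match high-level summaries while detailed ones match leaves.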
Deployment of a self-hosted LLM infrastructure using Ollama and Open WebUI on Linux, including custom model creation, API integration, and system-level troubleshooting.
LocalPrompt is an AI-powered tool designed to refine and optimize AI prompts, helping users run locally hosted AI models like Mistral-7B for privacy and efficiency. Ideal for developers seeking to run LLMs locally without external APIs.
Web-Based Q&A Tool enables users to extract and query website content using FastAPI, FAISS, and a local TinyLlama-1.1B model, without external APIs. Built with React, it offers a minimal UI for seamless AI-driven search.
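The FAISS side of such a pipeline is compact: index embedded page chunks, then retrieve nearest neighbors as context for the local model. Dimensions and data below are illustrative stand-ins for real embeddings.

```python
import faiss
import numpy as np

dim = 384  # assumption: embedding size of the sentence encoder in use
index = faiss.IndexFlatL2(dim)

# Illustrative: add embedded page chunks, then retrieve the top 3 for a query.
chunk_vectors = np.random.rand(100, dim).astype("float32")
index.add(chunk_vectors)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 3)
print(ids[0])  # indices of the chunks to feed TinyLlama as context
```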