Sage is like an open-source GitHub Copilot that helps you learn how a codebase works and how to integrate it into your project without spending hours sifting through the code.
- Dead-simple setup. Follow our quickstart guide to get started.
- Runs locally or in the cloud. When privacy is your priority, you can run the entire pipeline locally, using Ollama for LLMs and Marqo as a vector store. When optimizing for quality, you can use third-party LLM providers like OpenAI and Anthropic.
- Wide range of built-in retrieval mechanisms. We support both lightweight retrieval strategies (requiring nothing more than an LLM API key) and more traditional RAG (which requires indexing the codebase). There are many knobs you can tune to make retrieval work well on your codebase.
- Well-documented experiments. We profile various strategies (for embeddings, retrieval, etc.) on our own benchmark and thoroughly document the results.
We're working to make all code on the internet searchable and understandable for devs. You can check out our hosted app. We pre-indexed a slew of OSS repos, and you can index the ones you want by simply pasting a GitHub URL.
If you're the maintainer of an OSS repo and would like a dedicated page on Code Sage (e.g. sage.storia.ai/your-repo), send us a message at founders@storia.ai. We'll do it for free!
We purposefully built the code to be modular, so you can plug in your desired embedding, LLM, and vector store providers by simply implementing the relevant abstract classes.
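To illustrate the pattern, here is a minimal sketch of what plugging in a custom provider might look like. The `Embedder` interface and `HashEmbedder` class below are hypothetical names for illustration only; check the repo for the actual abstract classes and their signatures.

```python
from abc import ABC, abstractmethod
from typing import List

# Hypothetical interface, sketched for illustration; Sage's actual
# abstract classes may have different names and method signatures.
class Embedder(ABC):
    """Interface that embedding providers implement."""

    @abstractmethod
    def embed(self, texts: List[str]) -> List[List[float]]:
        """Return one embedding vector per input text."""

# Toy provider: deterministic hash-style vectors, so the example is
# self-contained and runnable without any API key.
class HashEmbedder(Embedder):
    def __init__(self, dim: int = 8):
        self.dim = dim

    def embed(self, texts: List[str]) -> List[List[float]]:
        vectors = []
        for text in texts:
            vec = [0.0] * self.dim
            for i, byte in enumerate(text.encode("utf-8")):
                vec[i % self.dim] += byte / 255.0
            vectors.append(vec)
        return vectors

# The rest of the pipeline only depends on the abstract interface,
# so swapping providers is a one-line change.
embedder: Embedder = HashEmbedder(dim=4)
print(len(embedder.embed(["def foo(): pass"])[0]))  # 4
```

A real provider (e.g. one backed by OpenAI or Ollama) would make an API call inside `embed` instead, but the rest of the pipeline would be unchanged.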
Feel free to send feature requests to founders@storia.ai or make a pull request!