Location via proxy:   [ UP ]  
[Report a bug]   [Manage cookies]                

Redis for AI documentation

An overview of Redis for AI documentation

Redis stores and indexes vector embeddings that semantically represent unstructured data including text passages, images, videos, or audio. Store vectors and the associated metadata within hashes or JSON documents for indexing and querying.

Vector RAG RedisVL
AI Redis icon. Redis vector database quick start guide AI Redis icon. Retrieval-Augmented Generation quick start guide AI Redis icon. Redis vector Python client library documentation

Overview

This page organized into a few sections depending on what you’re trying to do:

How to's

  1. Create a vector index: Redis maintains a secondary index over your data with a defined schema (including vector fields and metadata). Redis supports FLAT and HNSW vector index types.
  2. Store and update vectors: Redis stores vectors and metadata in hashes or JSON objects.
  3. Search with vectors: Redis supports several advanced querying strategies with vector fields including k-nearest neighbor (KNN), vector range queries, and metadata filters.
  4. Configure vector queries at runtime. Select the best filter mode to optimize query execution.

Concepts

Learn to perform vector search and use gateways and semantic caching in your AI/ML projects.

Search LLM memory Semantic caching Semantic routing AI Gateways
AI Redis icon. Vector search guide LLM memory icon. Store memory for LLMs AI Redis icon. Semantic caching for faster, smarter LLM apps Semantic routing icon. Semantic routing chooses the best tool AI Redis icon. Deploy an enhanced gateway with Redis

Quickstarts

Quickstarts or recipes are useful when you are trying to build specific functionality. For example, you might want to do RAG with LangChain or set up LLM memory for your AI agent. Get started with the following Redis Python notebooks.

Vector search retrieves results based on the similarity of high-dimensional numerical embeddings, while hybrid search combines this with traditional keyword or metadata-based filtering for more comprehensive results.

RAG

Retrieval Augmented Generation (aka RAG) is a technique to enhance the ability of an LLM to respond to user queries. The retrieval part of RAG is supported by a vector database, which can return semantically relevant results to a user’s query, serving as contextual information to augment the generative capabilities of an LLM.

Agents

AI agents can act autonomously to plan and execute tasks for the user.

LLM memory

LLMs are stateless. To maintain context within a conversation chat sessions must be stored and resent to the LLM. Redis manages the storage and retrieval of chat sessions to maintain context and conversational relevance.

Semantic caching

An estimated 31% of LLM queries are potentially redundant. Redis enables semantic caching to help cut down on LLM costs quickly.

Semantic routing

Routing is a simple and effective way of preventing misuses with your AI application or for creating branching logic between data sources etc.

Computer vision

Build a facial recognition system using the Facenet embedding model and RedisVL.

Recommendation systems

Feature store

Tutorials

Need a deeper-dive through different use cases and topics?

RAG

Ecosystem integrations

Benchmarks

See how we stack up against the competition.

Best practices

See how leaders in the industry are building their RAG apps.

RATE THIS PAGE
Back to top ↑