Ready to accelerate your GenAI apps with Azure Managed Redis? Join Kyle Teegarden, Principal Product Manager at Microsoft, and Shreya Verma, our Director of Product Management, on Thursday, March 6th for a livestream covering: 📍 Azure Managed Redis as a vector database 📍 Semantic caching using Redis and Azure OpenAI Service 📍 Best practices and real-world examples Sign up here: https://lnkd.in/g9FftUvG
Redis
Software Development
Mountain View, CA 265,806 followers
The world's fastest data platform.
About us
Redis is the world's fastest data platform. We provide cloud and on-prem solutions for caching, vector search, and more that seamlessly fit into any tech stack. With fast setup and fast support, we make it simple for digital customers to build, scale, and deploy the fast apps our world runs on.
- Website: http://redis.io
- Industry: Software Development
- Company size: 501-1,000 employees
- Headquarters: Mountain View, CA
- Type: Privately Held
- Founded: 2011
- Specialties: In-Memory Database, NoSQL, Redis, Caching, Key Value Store, Real-Time Transaction Processing, Real-Time Analytics, Fast Data Ingest, Microservices, Vector Database, Vector Similarity Search, JSON Database, Search Engine, Real-Time Index and Query, Event Streaming, Time-Series Database, DBaaS, Serverless Database, Online Feature Store, and Active-Active Geo-Distribution
Locations
- Primary: 700 E. El Camino Real, Suite 250, Mountain View, CA 94041, US
- Bridge House, 4 Borough High Street, London, England SE1 9QQ, GB
- 94 Yigal Alon St., Alon 2 Tower, 32nd Floor, Tel Aviv, Tel Aviv 6789140, IL
- 316 West 12th Street, Suite 130, Austin, Texas 78701, US
Updates
Redis reposted this
Today’s technical post is about Bluesky Social. The platform is close to reaching 32 million users, and with that, it has started facing new technical challenges. Yesterday, Jaz, one of their software engineers, shared an article about the challenge Bluesky now faces with users who follow hundreds of thousands of accounts.

Jaz explains that in Bluesky’s database, each user has a logical table that represents their timeline. When someone makes a new post, Bluesky takes a reference to that post and copies it into the timeline table of each follower. When a user follows hundreds of thousands of accounts, hundreds of thousands of posts are being added to their timeline all the time. This puts a huge load on the database, causing delays of up to 20 minutes!

This wouldn’t be a problem if it only affected that user. The real issue is that many of these logical timelines share the same database shards, so users who share a shard with an account that follows hundreds of thousands of others also experience slowdowns.

To solve this, Bluesky implemented what they call Lossy Timelines. The idea is simple: when a user follows too many accounts, the platform randomly drops some posts from their timeline. Instead of trying to store everything, the system reduces the load without significantly affecting the user experience.

This works with a probability rule based on the number of follows. If a user follows up to a “reasonable” limit (e.g., 2,000 accounts), they receive all posts normally. If they follow 4,000 accounts, they might miss about 50% of posts; at 8,000, about 75%. The more accounts they follow, the more posts get dropped. This mechanism dramatically reduces database writes and prevents users with excessive follows from creating hot shards: database partitions that become overloaded by too many writes.

With fewer fanout operations for these users, the rest of the system continues to perform well. However, to apply this rule, the system first needs to check how many accounts each user follows before deciding whether to drop posts. Doing this directly in the database would add millions of extra reads per second, which would be unsustainable. This is where Redis comes in: Bluesky uses a Redis Sorted Set to store users with high follow counts and refreshes it every 30 seconds. This lets the system quickly determine who should have a lossy timeline without overloading the database, so the platform can scale efficiently without degrading the experience for most users. 🚀
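The drop rule described above can be sketched in a few lines of Python. This is a hypothetical reconstruction: the name `REASONABLE_LIMIT` and the exact formula are inferred from the numbers in the post, not taken from Bluesky’s actual code.

```python
import random

# Follow count below which nothing is dropped (the post's "reasonable" limit).
REASONABLE_LIMIT = 2_000

def keep_probability(follow_count: int, limit: int = REASONABLE_LIMIT) -> float:
    """Chance that a new post is fanned out into this user's timeline.

    Matches the figures in the post: 4,000 follows -> keep 50%,
    8,000 follows -> keep 25%, at or under the limit -> keep everything.
    """
    if follow_count <= limit:
        return 1.0
    return limit / follow_count

def should_fan_out(follow_count: int) -> bool:
    """Per-post coin flip applied when writing to a lossy timeline."""
    return random.random() < keep_probability(follow_count)
```

In production, the follow count itself would come from the Redis Sorted Set refreshed every 30 seconds (e.g. something like `ZSCORE high-follow-users <user-id>`), so fanout workers never touch the main database for this check.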
Collaborative filtering may seem like magic, but it's actually grounded in logic and relies on user interactions to make recommendations. Our latest blog looks at how collaborative filtering systems work, their use cases, and how to build one using Redis. Check it out below:
Collaborative filtering: How to build a recommender system
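As a taste of the idea, here is a tiny user-based collaborative-filtering sketch in plain Python. The toy data and the similarity scoring are invented for illustration; the blog builds the real thing on Redis.

```python
# Hypothetical user-item ratings (1-5 stars); in a real system these
# would live in Redis rather than in a module-level dict.
ratings = {
    "ana":  {"matrix": 5, "dune": 4},
    "ben":  {"matrix": 5, "dune": 5, "alien": 4},
    "cruz": {"alien": 2},
}

def overlap_score(a: str, b: str) -> int:
    """Similarity = number of co-rated items where the users agree within 1 star."""
    return sum(
        1
        for item in ratings[a]
        if item in ratings[b] and abs(ratings[a][item] - ratings[b][item]) <= 1
    )

def recommend(user: str) -> list[str]:
    """Suggest items rated by the most similar user that `user` hasn't seen."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: overlap_score(user, u))
    return sorted(item for item in ratings[nearest] if item not in ratings[user])
```

Here `recommend("ana")` finds her nearest neighbour (`"ben"`, who agrees with her on both shared films) and suggests what he rated that she hasn’t.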
Have you checked out Redis for AI yet? It’s an integrated package of Redis features and services that get GenAI apps into production faster. Learn more about: ▶️ Vector database for faster RAG apps ▶️ Personalized responses for AI apps ▶️ Better AI agents, less lag ▶️ Feature stores providing faster predictions Check it out: https://lnkd.in/gPzeK6qn
Redis reposted this
In a recent article, BigBinary software engineer Sandip Mane benchmarked different caching solutions for #Rails, comparing Redis with alternatives like #Valkey, #DragonflyDB, #DiceDB, and SolidCache backed by PostgreSQL and SQLite3. The tests showed that Redis and Memcached had similar performance, while Valkey and DiceDB were about 150% slower. DragonflyDB performed poorly and was excluded from the final comparisons. SolidCache with PostgreSQL started as the slowest option but improved significantly with tuning, and SolidCache with SQLite3 performed on par with Redis for reads and surpassed PostgreSQL in write speeds. LiteCache with SQLite3 delivered the best performance overall, but it remains experimental and is not yet recommended for production. The conclusion: while Valkey, Dragonfly, and others claim better performance than Redis, Redis held its position as the top performer, continuing to outpace the others. You can read the full article at: https://lnkd.in/ePV_ShTt
Semantic caching serves as an in-memory layer that delivers pre-computed responses for apps that use LLMs, reducing latency and cutting down on expensive LLM calls. Unlike traditional caching, which requires exact key matches, semantic caching uses vector embeddings to return matches based on similarity, which creates some initial challenges for developers to get right. Our own Robert Shelton, an applied AI engineer, explains how to evaluate embedding models, choose the right distance threshold, and populate a cache so semantic caching systems are more accurate and scalable: https://lnkd.in/eg_3UZuH
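The core lookup logic can be sketched in plain Python. The threshold value, field names, and linear scan here are illustrative assumptions; in practice Redis vector search performs the nearest-neighbour step, and the right threshold depends on your embedding model.

```python
import math

# Hypothetical cutoff: tune per embedding model and evaluation set.
DISTANCE_THRESHOLD = 0.15

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity; 0 means identical direction, 1 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def cache_lookup(query_vec: list[float], cache: list[dict]):
    """Return the cached response whose prompt embedding is closest to the
    query, but only if it falls inside the distance threshold; None = miss."""
    best = min(
        cache,
        key=lambda entry: cosine_distance(query_vec, entry["vec"]),
        default=None,
    )
    if best and cosine_distance(query_vec, best["vec"]) <= DISTANCE_THRESHOLD:
        return best["response"]
    return None  # cache miss -> fall through to the LLM call
```

A near-duplicate prompt lands within the threshold and returns the cached answer; an unrelated prompt misses and triggers a fresh LLM call, which is exactly the accuracy/coverage trade-off the threshold controls.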
Redis reposted this
Popular Redis Use Cases

1. Caching
The most common use case is to utilize Redis for caching. This helps protect the database layer from overloading. Redis offers fast lookup for cached data and can help improve application performance.

2. Session Store
We use Redis to share user session data among stateless servers. Redis provides a centralized place to store session data and makes it easy to scale out servers.

3. Distributed Lock
We use Redis distributed locks to grant mutually exclusive access to shared resources. This prevents race conditions in distributed systems. Redis locks are easy to implement and automatically expire.

4. Counter and Rate Limiter
We use Redis to track like counts, view counts, etc. on social media apps. Redis counters provide atomic increments/decrements. We also use Redis to enforce rate limits on our API endpoints. This helps prevent abuse.

5. Leaderboard
Sorted sets make it easy to implement gaming leaderboards in Redis. We can add, update, or remove users from the leaderboard and query ranges efficiently.
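Use case 4, for example, reduces to a couple of Redis commands: an atomic `INCR` on a per-client, per-window key plus an `EXPIRE` so stale windows clean themselves up. A minimal fixed-window sketch, with a plain dict standing in for Redis so the snippet runs anywhere:

```python
import time

class FixedWindowRateLimiter:
    """Fixed-window rate limiter sketch.

    With real Redis this would be r.incr(key) and r.expire(key, window)
    on a key like "rate:{client_id}:{window_index}"; the dict below is
    only a stand-in so the example is self-contained.
    """

    def __init__(self, limit: int, window_secs: int):
        self.limit = limit
        self.window = window_secs
        self.counters: dict[tuple[str, int], int] = {}  # stands in for Redis keys

    def allow(self, client_id: str, now=None) -> bool:
        now = time.time() if now is None else now
        # One counter per (client, window); in Redis the key would expire
        # with the window instead of lingering in memory.
        key = (client_id, int(now // self.window))
        self.counters[key] = self.counters.get(key, 0) + 1  # atomic INCR in Redis
        return self.counters[key] <= self.limit
```

With a limit of 2 per 60 seconds, the third request in a window is rejected and a request in the next window is allowed again. The Redis version behaves the same but is safe across many stateless servers, because the increment is atomic on the server.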
Redis started the year with some major updates. Swipe to catch up on the latest Redis updates you might have missed last month and check out the full blog: https://lnkd.in/gyvsW2NT cc: Talon Miller
Our own Guy Royse likes to ask the deep questions in life, like "Who is your meme twin?" Guy spoke at Jfokus 2025 in Stockholm last month about image embeddings, how they work, and how to use them with vector databases like Redis. Most importantly, he used these tools to build an AI-powered meme twin finder. [Spoiler alert: His is the dad from American Chopper.] Watch the talk here: https://lnkd.in/g7rNy4aY You can run the code yourself here: https://lnkd.in/gTX3nhbg
Presentation: Finding Your Meme Twin with Embeddings & Vector Databases by Guy Royse
With Redis-as-a-Service, your dev team gets instant access to our latest innovations, so they can hit the ground running and deliver real-time app experiences that scale effortlessly. Swipe through for some of our Redis Cloud offerings. ➡️ Learn more: https://redis.io/cloud/