Server-assisted, client-side caching in Redis
Client-side caching reduces network traffic between a Redis client and the server, which generally improves performance.
By default, an application server (which sits between the user app and the database) contacts the Redis database server through the client library for every read request. The diagram below shows the flow of communication from the user app, through the application server to the database and back again:
When you use client-side caching, the client library maintains a local cache of data items as it retrieves them from the database. When the same items are needed again, the client can satisfy the read requests from the cache instead of the database:
Accessing the cache is much faster than communicating with the database over the network, and it reduces network traffic. Client-side caching reduces the load on the database server, so you may be able to run it using fewer hardware resources.
As with other forms of caching, client-side caching works well in the very common use case where a small subset of the data is accessed much more frequently than the rest of the data (according to the Pareto principle).
All caching systems must implement a scheme to update data in the cache when the corresponding data changes in the main database. Redis uses an approach called tracking.
When client-side caching is enabled, the Redis server remembers or tracks the set of keys that each client connection has previously read. This includes cases where the client reads data directly, as with the GET command, and also where the server calculates values from the stored data, as with STRLEN. When any client writes new data to a tracked key, the server sends an invalidation message to all clients that have accessed that key previously. This message warns the clients that their cached copies of the data are no longer valid, and the clients will evict the stale data in response. The next time a client reads from the same key, it will access the database directly and refresh its cache with the updated data.
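The tracking and invalidation flow described above can be sketched with a toy model. This is an illustration of the idea only, not the actual RESP3 protocol or any client library's internals; all class and method names here are hypothetical:

```python
# Toy model of server-assisted tracking: the "server" records which
# clients have read each key and notifies them when the key is written,
# so they can evict their cached copies.

class ToyServer:
    def __init__(self):
        self.data = {}
        self.tracking = {}  # key -> set of clients that have read it

    def read(self, client, key):
        self.tracking.setdefault(key, set()).add(client)
        return self.data.get(key)

    def write(self, key, value):
        self.data[key] = value
        # Send an invalidation message to every client that read this key.
        for client in self.tracking.pop(key, set()):
            client.invalidate(key)

class ToyClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def get(self, key):
        if key not in self.cache:        # cache miss: go to the server
            self.cache[key] = self.server.read(self, key)
        return self.cache[key]

    def set(self, key, value):
        self.server.write(key, value)

    def invalidate(self, key):
        self.cache.pop(key, None)        # evict the stale copy

server = ToyServer()
a, b = ToyClient(server), ToyClient(server)
a.set("name", "Alice")
print(a.get("name"))   # "Alice": fetched from the server, then cached
print(b.get("name"))   # "Alice": b now caches it too
b.set("name", "Bob")   # server invalidates both cached copies
print(a.get("name"))   # "Bob": a re-reads the fresh value from the server
```

After the write to `name`, both clients have evicted their stale copies, so their next reads go back to the server.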
The sequence diagram below shows how two clients might interact as they access and update the same key:
The following client libraries support client-side caching (CSC) from the stated version onwards:
| Client | Version |
|---|---|
| redis-py | v5.1.0 |
| Jedis | v5.2.0 |
All read-only commands (with the @read ACL category) will use cached data, except for the following:

- HRANDFIELD, HSCAN, and ZRANDMEMBER. By design, these commands give different results each time they are called.
- Search and query commands (with the FT.* prefix), such as FT.SEARCH.
You can use the MONITOR command to check the server's behavior when you are using client-side caching. Because MONITOR only reports activity from the server, you should find the first cacheable access to a key causes a response from the server. However, subsequent accesses are satisfied by the cache, and so MONITOR should report no server activity if client-side caching is working correctly.
Broadly speaking, the data from the specific response to a command invocation gets cached after it is used for the first time. Subsets of that data, or values calculated from it, are retrieved from the server as usual and then cached separately. For example:

- The whole string retrieved by GET is added to the cache. Parts of the same string retrieved by SUBSTR are calculated on the server the first time and then cached separately from the original string.
- Using GETBIT or BITFIELD on a string caches the returned values separately from the original string.
- JSON.GET mykey $ and JSON.GET mykey $.myfield create separate entries in the cache.
- Items retrieved by SINTER and SDIFF create separate cache entries.
- For MGET, the ordering of the keys is significant. For example, MGET name:1 name:2 is cached separately from MGET name:2 name:1 because the server returns the values in the order you specify.
- Values calculated from an object (such as the results of SISMEMBER and LLEN) are cached separately from the object they refer to.

Like any caching system, client-side caching has some limitations.
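The rule that each command invocation forms its own cache entry can be sketched with a toy cache keyed on the command and its arguments. This is a hypothetical illustration of the granularity described above, not redis-py's or Jedis's actual internals:

```python
# Toy illustration: the cache key is the full command invocation, so
# MGET with the same keys in a different order forms a separate entry.
cache = {}

def cached_call(command, *args, compute=None):
    key = (command, args)        # the whole invocation is the cache key
    if key not in cache:
        cache[key] = compute()   # "server" round trip on first use
    return cache[key]

store = {"name:1": "Ada", "name:2": "Bob"}

cached_call("MGET", "name:1", "name:2",
            compute=lambda: [store["name:1"], store["name:2"]])
cached_call("MGET", "name:2", "name:1",
            compute=lambda: [store["name:2"], store["name:1"]])

print(len(cache))  # 2: the two orderings are cached separately
```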
Below are some guidelines to help you use client-side caching efficiently, within these limitations:
Use a separate connection for data that is not cache-friendly: Caching gives the most benefit for keys that are read frequently and updated infrequently. However, you may also have data, such as counters and scoreboards, that receives frequent updates. In cases like this, the performance overhead of the invalidation messages can be greater than the savings made by caching. Avoid this problem by using a separate connection without client-side caching for any data that is not cache-friendly.
Estimate how many items you can cache: The client libraries let you specify the maximum number of items you want to hold in the cache. You can calculate an estimate for this number by dividing the maximum desired size of the cache in memory by the average size of the items you want to store (use the MEMORY USAGE command to get the memory footprint of a key). For example, if you had 10MB (or 10485760 bytes) available for the cache, and the average size of an item was 80 bytes, you could fit approximately 10485760 / 80 = 131072 items in the cache. Monitor memory usage on your server with a realistic test load to adjust your estimate up or down.
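The arithmetic above can be checked in a few lines of Python (the 80-byte average is the assumed figure from the example, which you would replace with a real measurement from MEMORY USAGE):

```python
# Estimating cache capacity: divide the memory budget for the cache by
# the average item size (as reported by MEMORY USAGE per key).
cache_budget_bytes = 10 * 1024 * 1024   # 10MB = 10485760 bytes
average_item_bytes = 80                 # assumed average item size

max_items = cache_budget_bytes // average_item_bytes
print(max_items)  # 131072
```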
The Redis server implements extra features for client-side caching that are not used by the main Redis clients, but may be useful for custom clients and other advanced applications. See Client-side caching reference for a full technical guide to all the options available for client-side caching.