In-depth analysis of common application scenarios in Redis

  • 2020-05-30 21:17:00
  • OfStack


Redis is a key-value storage system that is used in more and more systems, mostly as a cache thanks to its high performance. Below are some common usage scenarios for Redis. Without further ado, let's look at the details.

Redis features

A product's usage scenarios must follow from its characteristics, so let's first list the features of Redis:

• Excellent read and write performance
• Persistence
• Rich data types
• Single-threaded
• Automatic data expiration
• Publish/subscribe
• Distributed support

Here we will talk about the application of Redis through several scenarios and different dimensions.

High performance makes Redis suitable for caching

Caching is the most common scenario for Redis, and it is used this way everywhere because of Redis's excellent read and write performance; it is increasingly replacing memcached as the preferred server-side cache. Moreover, Redis supports transactions, which helps guarantee data consistency when it is used as a cache.
When used as a cache, data is usually loaded in one of two ways:

1. Read Redis before reading. If there is no data, read the database and pull the data into Redis.

2. When inserting data, write Redis at the same time.

Scheme 1 is simple to implement, but has two caveats:

1. Guard against cache penetration: for data that does not exist in the database, Redis never gets populated, so every request for it falls through to the database directly.

2. Data freshness is slightly worse.

Scheme 2: the data is always up to date, but keeping every write path in the system in sync with the cache is harder.

Of course, which method applies depends on the actual situation. For example, Scheme 1 suits scenarios without strict real-time requirements, while Scheme 2 suits dictionary tables and other small, relatively static data sets.
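Scheme 1 is the classic cache-aside pattern. A minimal sketch follows; note that `FakeRedis`, `fake_db`, `get_user`, and the `user:{id}` key format are all hypothetical names for illustration, and the in-memory class stands in for a real Redis client.

```python
import json

class FakeRedis:
    """Minimal in-memory stand-in for a Redis client (illustration only)."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value):
        self.store[key] = value

# Stand-in for the relational database.
fake_db = {1: {"name": "alice"}}

def get_user(cache, user_id):
    """Scheme 1: read the cache first, fall back to the database on a miss."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)        # cache hit
    row = fake_db.get(user_id)           # cache miss: read the database
    if row is not None:
        cache.set(key, json.dumps(row))  # pull the data into Redis
    return row

cache = FakeRedis()
print(get_user(cache, 1))  # first call: loaded from the database
print(get_user(cache, 1))  # second call: served from the cache
```

With a real client the structure is identical; only the `FakeRedis` instance would be replaced by a connection to the Redis server.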

Rich data types bring higher performance and broader application scenarios

One big advantage of Redis over other caches is that it supports multiple data types.

Data type  Description
string     String; the simplest k-v storage
hash       Hash format; the value holds field-value pairs, suitable for ID-Detail scenarios
list       Simple ordered list; supports inserting at the head or tail
set        Unordered collection; fast lookups, suitable for intersection, union, and difference operations
sorted set An ordered set, sorted by score

In fact, based on the characteristics of the above data types, you can basically think of an appropriate application scenario.

• string -- suits the simplest k-v storage, similar to memcached's storage structure; SMS verification codes, configuration values, and the like fit this type.
• hash -- the key is an ID or other unique identifier, and the value holds the corresponding details, such as product details, personal profiles, or news details.
• list -- because a list is ordered, it suits ordered, relatively fixed data such as province tables or dictionary tables. Also because a list is ordered, it suits data sorted by write time, such as the latest ***, message queues, and so on.
• set -- can be understood simply as an ID-List model, for example the friends a person has on a microblog. The best part of a set is that it provides intersection, union, and difference operations between two sets, e.g. finding two people's mutual friends.
• sorted set -- an enhanced set that adds a score parameter and automatically sorts by the value of score. It suits data such as a top-10 list that is not ordered by insertion time.
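The mutual-friends and top-10 examples above map directly onto the set and sorted-set commands (SINTER, ZADD, ZREVRANGE). A sketch follows; the stub class mimics just that subset of Redis in memory, and all key names (`friends:*`, `leaderboard`) are hypothetical.

```python
class FakeRedisSets:
    """In-memory stand-in mimicking Redis set/sorted-set commands (illustration only)."""
    def __init__(self):
        self.sets = {}
        self.zsets = {}
    def sadd(self, key, *members):
        self.sets.setdefault(key, set()).update(members)
    def sinter(self, *keys):
        # SINTER: intersection of several sets.
        return set.intersection(*(self.sets.get(k, set()) for k in keys))
    def zadd(self, key, mapping):
        self.zsets.setdefault(key, {}).update(mapping)
    def zrevrange(self, key, start, stop):
        # ZREVRANGE: members ordered by score, highest first (stop is inclusive).
        ranked = sorted(self.zsets.get(key, {}).items(),
                        key=lambda kv: kv[1], reverse=True)
        return [member for member, _ in ranked[start:stop + 1]]

r = FakeRedisSets()

# set: mutual friends via intersection
r.sadd("friends:alice", "bob", "carol", "dave")
r.sadd("friends:bob", "carol", "dave", "eve")
print(sorted(r.sinter("friends:alice", "friends:bob")))  # ['carol', 'dave']

# sorted set: a score-ordered top list
r.zadd("leaderboard", {"alice": 120, "bob": 300, "carol": 90})
print(r.zrevrange("leaderboard", 0, 1))  # ['bob', 'alice']
```

Against a real server, the same calls exist on redis-py clients (`sadd`, `sinter`, `zadd`, `zrevrange`) with the same shapes.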

As mentioned above, while Redis does not have data structures as complex as a relational database's, it still covers many scenarios, more than an ordinary cache does. Understanding the business scenarios suited to each data structure will not only improve development efficiency but also make the most of Redis's performance.

A single thread makes Redis suitable for distributed locks

When it comes to the differences between Redis and Memcached, people mostly mention data structures and persistence. In fact, there is another big difference:

Redis is single-threaded and uses I/O multiplexing to improve processing efficiency. Memcached is multithreaded and relies on CPU thread switching to improve processing efficiency.

Therefore, Redis's single-threaded nature enables another very important application scenario: distributed locking.

For high-concurrency systems deployed across multiple servers, every technical stack has good in-process locking, such as Java's `synchronized` and `Lock`, which guard an object against data corruption caused by competing threads. But these can only control threads on a single server; once the system is deployed across machines, data corruption becomes much harder to prevent. Redis's single-threaded nature fits this requirement very well. The pseudo-code is as follows:

import time

LOCK_TIMEOUT = 10  # seconds

# Acquire the lock
lock = 0
while lock != 1:
    # The expiration time is to avoid deadlocks
    now = int(time.time())
    lock_timeout = now + LOCK_TIMEOUT + 1
    lock = redis_client.setnx(lock_key, lock_timeout)

# The real business

# Release the lock
now = int(time.time())
if now < lock_timeout:
    # Only delete the key while we still hold an unexpired lock
    redis_client.delete(lock_key)

The above is pseudo-code that only sketches the process; the overall logic is simple, and as long as you account for deadlock it is easy to handle. Thanks to its performance, Redis as a distributed lock will not become the bottleneck; the real business processing inside the lock will, so keep the locked region as small as possible to preserve system performance.
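Since Redis 2.6.12, the acquire step can also be done atomically with a single SET using the NX and EX options (in redis-py, `set(name, value, nx=True, ex=...)`), which sidesteps the clock arithmetic above. A sketch follows, assuming an in-memory stub in place of a real client; `acquire_lock` and `release_lock` are hypothetical helper names.

```python
import time
import uuid

class FakeRedisLock:
    """In-memory stand-in for the subset of Redis used here (illustration only)."""
    def __init__(self):
        self.store = {}  # key -> (value, expires_at or None)
    def set(self, key, value, nx=False, ex=None):
        now = time.time()
        current = self.store.get(key)
        if current and current[1] is not None and current[1] <= now:
            current = None  # expired key counts as absent
        if nx and current is not None:
            return None     # NX: refuse to overwrite an existing key
        self.store[key] = (value, now + ex if ex else None)
        return True
    def get(self, key):
        current = self.store.get(key)
        if current and current[1] is not None and current[1] <= time.time():
            return None
        return current[0] if current else None
    def delete(self, key):
        self.store.pop(key, None)

def acquire_lock(client, key, ttl=10):
    """Try to take the lock; returns a token on success, None on failure."""
    token = str(uuid.uuid4())  # random token proves ownership on release
    if client.set(key, token, nx=True, ex=ttl):
        return token
    return None

def release_lock(client, key, token):
    """Release only if we still own the lock (never delete someone else's)."""
    if client.get(key) == token:
        client.delete(key)

r = FakeRedisLock()
token = acquire_lock(r, "lock:order")
print(token is not None)                      # lock acquired
print(acquire_lock(r, "lock:order") is None)  # a second caller is refused
release_lock(r, "lock:order", token)
```

Note that with a real server the check-then-delete in `release_lock` should be a single Lua script so it runs atomically; here it is split into two calls for readability.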

Automatic expiration can effectively improve the development efficiency

Redis can set an expiration time on any key, a feature people use frequently. Users do not need to clean up expired data themselves, so development efficiency is relatively high, and so, of course, is performance. The most common uses are SMS verification codes and time-limited product listings: there is no need to look up a timestamp and compare it, as you would with a database. Since it is so easy to use, I will not belabor it.
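The SMS verification code case maps onto SETEX (set with a time-to-live in seconds). A minimal sketch, with an in-memory stub standing in for the real client and a hypothetical `sms:{phone}` key format:

```python
import time

class FakeRedisTTL:
    """In-memory stand-in for SETEX/GET with expiry (illustration only)."""
    def __init__(self):
        self.store = {}  # key -> (value, expires_at)
    def setex(self, key, seconds, value):
        self.store[key] = (value, time.time() + seconds)
    def get(self, key):
        item = self.store.get(key)
        if item is None or item[1] <= time.time():
            return None  # missing or expired: Redis cleans this up for us
        return item[0]

r = FakeRedisTTL()
r.setex("sms:13800000000", 300, "482916")  # code valid for 5 minutes
print(r.get("sms:13800000000"))            # within the window: '482916'
```

On a real server this is exactly `SETEX key 300 482916` (or `SET key 482916 EX 300`); reads after the window simply return nil, with no cleanup code of your own.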

Distributed and persistent effectively cope with massive amounts of data and high concurrency

Early versions of Redis officially supported only standalone mode or simple master/slave replication, and most applications built their own clustering middleware. But as Redis adoption widened, demand for distributed support grew, so official support was added in Redis 3.0, mainly in two aspects:

• Redis master/slave hot standby ensures system stability
• Redis sharding handles massive data volumes and high concurrency

Moreover, although Redis is an in-memory cache and the data lives in memory, Redis supports several ways to persist data to disk, so data durability is well guaranteed. Combined with Redis's clustering schemes, some systems already use Redis as a kind of NoSQL data store.

Example: combining a flash sale (seckill) with Redis

The flash sale (seckill) is now a common marketing model in Internet systems. Developers tend to dread such events, because non-technical staff often cannot appreciate the technical difficulty, which skews resource coordination. Common problems with flash sales include:

• Excessive concurrency causes the program to block.
• Inventory cannot be controlled effectively, leading to overselling.

In fact, there are basically two solutions to these problems:

• Cache as much data as possible to block direct interaction between users and the database.
• Control inventory with locks to avoid overselling.

Now let's explain: if you were building a flash sale today, how should Redis be combined with it?

• Warm up the data in advance and load it into Redis.
• Store the product list in a Redis list.
• Store product details in a Redis hash, with an expiration time.
• Store product inventory in a Redis sorted set.
• Store the user's address information in a Redis set.
• Deduct stock at order generation through a Redis-based distributed lock, with synchronous deduction.
• Push post-order delivery data onto a Redis list and process it as a message queue.
• When the flash sale ends, synchronize the Redis data back to the database.
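The stock-deduction step above ultimately hinges on an atomic decrement, which Redis's DECR provides because the server processes commands on a single thread. A sketch of the oversell guard, assuming an in-memory stub (its internal `threading.Lock` imitates the atomicity Redis gets for free) and hypothetical names `try_buy` and `stock:item42`:

```python
import threading

class FakeRedisCounter:
    """In-memory stand-in for Redis's atomic INCR/DECR (illustration only)."""
    def __init__(self):
        self.store = {}
        self._lock = threading.Lock()  # imitates Redis's single-threaded atomicity
    def set(self, key, value):
        self.store[key] = int(value)
    def decr(self, key):
        with self._lock:
            self.store[key] = self.store.get(key, 0) - 1
            return self.store[key]
    def incr(self, key):
        with self._lock:
            self.store[key] = self.store.get(key, 0) + 1
            return self.store[key]

def try_buy(client, stock_key):
    """Deduct one unit; roll back and refuse if stock went negative (oversold)."""
    remaining = client.decr(stock_key)
    if remaining < 0:
        client.incr(stock_key)  # roll back the over-deduction
        return False
    return True

r = FakeRedisCounter()
r.set("stock:item42", 3)
results = [try_buy(r, "stock:item42") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

Because DECR returns the post-decrement value atomically, concurrent buyers can never jointly drive stock below zero without being detected, which is exactly the oversell guarantee the flash-sale scheme needs.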

The above is a brief scheme combining a flash-sale system with Redis. In practice, HTTP caching may be introduced, or MQ may replace the message interface, and some business details may be omitted.
