An introduction to the slowlog feature in Redis

  • 2020-05-14 05:20:14
  • OfStack

Redis has a useful slowlog feature that, as you can guess, lets you inspect slow queries. The slowlog records the last X queries whose execution took longer than Y microseconds.

CONFIG SET slowlog-log-slower-than 5000
CONFIG SET slowlog-max-len 25

That's all the setup required.

slowlog-log-slower-than sets the threshold in microseconds, so the settings above will record queries that take longer than 5 milliseconds to execute. To read the log, use the SLOWLOG GET X command, where X is the number of records you want to retrieve:

SLOWLOG GET 10 

Each entry shows a unique id, the timestamp at which the query occurred, the time it took to execute, and the command that was actually run together with its arguments. You can clear the log with SLOWLOG RESET.
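As a rough illustration (not part of the original article), here is how you might apply those settings and read the entries from Ruby with the redis-rb client; the exact shape of each entry depends on your Redis version, so treat the field layout in the comments as an assumption to verify against your server:

require 'redis'

r = Redis.new(driver: :hiredis)

# Apply the thresholds from above via CONFIG SET.
r.config(:set, 'slowlog-log-slower-than', 5000)
r.config(:set, 'slowlog-max-len', 25)

# Each entry starts with: unique id, unix timestamp, duration in
# microseconds, and the command with its arguments.
r.slowlog('get', 10).each do |id, timestamp, micros, command, *rest|
  puts "##{id} #{Time.at(timestamp)} #{micros}us: #{command.join(' ')}"
end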


The last time I checked the slowlog, I wasn't exactly calm when I saw a DEL command that had taken more than 20 milliseconds to execute. Remember, Redis is single-threaded, so a command like this blocks (and seriously hinders) our concurrent system. And because it is a write operation, it is replicated to every Redis slave, blocking each of them in turn while it is replayed. So what on earth was going on here?

Everyone except me probably knows this, but it turns out that the time complexity of Redis's DEL command is O(1) for strings, and O(N) for lists, sets, sorted sets, and hashes (where N is the number of items in the collection). Delete a large enough set and it will block for a painfully long time.
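To see the effect for yourself, a quick experiment along these lines makes the point; the key name and sizes here are made up for illustration and are not from the original article:

require 'redis'
require 'benchmark'

r = Redis.new(driver: :hiredis)

# Build a large set in chunks (a million short members).
(1..1_000_000).each_slice(10_000) { |chunk| r.sadd('gc:demo:bigset', chunk) }

# DEL on a string is O(1); on this set it is O(N) and blocks the
# single Redis thread for the whole duration.
puts Benchmark.realtime { r.del('gc:demo:bigset') }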

Our solution is simple: instead of deleting these collections directly, we rename them and delete them in a background job, in small, interruptible chunks. First, our delayed_delete Lua script:


-- Rename large collections instead of deleting them inline;
-- a background job removes their members in small chunks later.
local key = KEYS[1]
local data_type = redis.call('type', key).ok

if data_type == 'set' or data_type == 'zset' then
  -- Move the collection to a unique temporary name...
  local temp = 'gc:tmp:' .. redis.call('incr', 'gc:ids') .. ':' .. key
  redis.call('rename', key, temp)
  -- ...and register it in gc:set or gc:zset for the background job.
  return redis.call('sadd', 'gc:' .. data_type, temp)
end

-- Any other type is deleted immediately.
return redis.call('del', key)


This renames the collection and adds the new name to the gc:set or gc:zset set (we don't use lists, but if you do, you should add support for them as well).
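To give an idea of how the script would be invoked, here is a minimal sketch using the redis-rb client; the file path and the example key name are assumptions for illustration, not part of the original setup:

require 'redis'

# The delayed_delete Lua script from above, loaded from disk (hypothetical path).
delayed_delete = File.read('delayed_delete.lua')

r = Redis.new(driver: :hiredis)

# Instead of r.del('user:42:followers'), hand the key to the script.
# Small keys are deleted on the spot; sets and sorted sets are renamed
# and queued under gc:set / gc:zset for the background job.
r.eval(delayed_delete, keys: ['user:42:followers'])

In production you would probably register the script once with SCRIPT LOAD and call it via EVALSHA (redis-rb exposes this as evalsha), so the script body is not re-sent on every delete.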

In the next step, we scheduled a Ruby script to run once per minute:



require 'redis'

r = Redis.new(driver: :hiredis)

# Grab up to 10,000 of the renamed sets waiting to be deleted.
r.srandmember('gc:set', 10000).each do |set|
  # Pull a chunk of members; an empty result means the set is gone
  # (Redis removes empty sets automatically), so unregister it.
  items = r.srandmember(set, 5000)
  if items.nil? || items.empty?
    r.srem('gc:set', set)
    next
  end
  # Delete this chunk; the rest is left for a later run.
  r.srem(set, items)
end

# Same idea for sorted sets, using ZREMRANGEBYRANK to remove a chunk.
r.srandmember('gc:zset', 10000).each do |zset|
  # If fewer than 5,000 entries were removed, the sorted set is
  # exhausted, so stop tracking it.
  if r.zremrangebyrank(zset, 0, 5000) < 5000
    r.srem('gc:zset', zset)
  end
end

You can tune these numbers to suit your needs, depending on how big your collections are and how often they get deleted. Because we don't perform this kind of deletion very often, we can get away with removing only a small chunk at a time.

This is slower than just deleting, but it works well in a concurrent environment.

