Redis 3.2 configuration file redis.conf details


When Redis starts, you can specify the configuration file as follows:


/usr/local/redis/bin/redis-server /usr/local/redis/etc/redis.conf
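If the server started correctly, a quick ping should answer PONG. This assumes redis-cli was installed alongside redis-server and uses the non-default port 7179 configured below:

/usr/local/redis/bin/redis-cli -p 7179 ping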

Redis.conf file details:


# By default Redis does not run as a daemon; set this to yes to run in the background.
# When daemonized, Redis writes its pid to /var/run/redis.pid by default
daemonize yes
 
# Path of the pid file written by the Redis process
pidfile /var/run/redis.pid
 
# Port to listen on; avoid the default 6379, which is an easy target for attacks
port 7179
 
# TCP listen() backlog for pending connections
tcp-backlog 511
 
# IP addresses Redis listens on; there can be one or more, space-separated
bind 127.0.0.1 10.254.3.42
 
# Path and permissions of the Redis unix socket
unixsocket /tmp/redis.sock
unixsocketperm 755
 
# Close a client connection after it has been idle for this many seconds (0 disables the timeout)
timeout 0
 
# Whether to keep TCP connections alive, with keepalive probe signals maintained by the server. The default 0 disables them
tcp-keepalive 0
 
# Log level; there are 4 levels: debug, verbose, notice, and warning. Production environments usually use notice
loglevel notice
 
# Log file path
logfile "/usr/local/redis/logs/redis.log"
 
 
 
# Number of databases; use the SELECT command to switch between them. Database 0 is used by default; there are 16 databases by default
databases 16
 
# RDB persistence works via snapshotting: when one of the configured conditions is met, Redis automatically snapshots all data in memory to disk. Snapshot conditions can be customized in this file and consist of two parameters: a time window and a number of changed keys. A snapshot is taken when more keys than the given number change within the given time. RDB is Redis's default persistence mode, and three conditions are preset in the configuration file:
save 900 1     # snapshot if at least 1 key changed within 900 seconds
save 300 10    # snapshot if at least 10 keys changed within 300 seconds
save 60 10000  # snapshot if at least 10000 keys changed within 60 seconds
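# A minimal sketch of checking or changing these conditions at runtime with the
# standard CONFIG commands (run from a shell, using the port configured above):
#   /usr/local/redis/bin/redis-cli -p 7179 CONFIG GET save    -> "900 1 300 10 60 10000"
#   /usr/local/redis/bin/redis-cli -p 7179 CONFIG SET save "" -> disables snapshotting until restart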
 
# Directory where the persistence files are stored
dir /usr/local/redis/data
 
 
# Whether to stop accepting client write requests when persistence fails. The default "yes" stops writes: once a snapshot fails to save, the server serves reads only. With "no" the failed snapshot does not affect the next one, but after a failure data can only be recovered up to the most recent successful snapshot
stop-writes-on-bgsave-error no
 
# Whether to compress the rdb file when taking a data snapshot; default yes. Compression costs some extra CPU but effectively reduces the rdb file size, which helps with storage/backup/transfer/recovery
rdbcompression yes
 
# Checksum the rdb file on read and write; costs a little performance
rdbchecksum yes
 
# File name of the snapshot file, default dump.rdb
dbfilename dump.rdb
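# A snapshot can also be taken on demand: BGSAVE forks a background save into
# dir/dbfilename, and LASTSAVE reports the unix time of the last successful save:
#   /usr/local/redis/bin/redis-cli -p 7179 BGSAVE     -> Background saving started
#   /usr/local/redis/bin/redis-cli -p 7179 LASTSAVE   -> (integer) unix timestamp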
 
# Whether a slave may serve possibly stale data to clients while the link with the master is down or master-slave replication is still in progress. With "yes" the slave keeps serving read-only requests, possibly with out-of-date data; with "no" any data request sent to this server (from clients and from this server's own slaves) is answered with an error
slave-serve-stale-data yes
 
# If this is a slave, it is read-only; no modifications allowed
slave-read-only yes
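# Replication state can be checked with INFO; on a slave it shows, among other
# fields, the role and the status of the link to the master:
#   /usr/local/redis/bin/redis-cli -p 7179 INFO replication
#   -> role:slave, master_link_status:up, ...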
 
 
# Whether to disable the TCP_NODELAY option on the slave-master connection. With "yes" (disabled), data is sent in packets (packet size bounded by the socket buffer limit), which improves socket efficiency (fewer TCP interactions), but small writes are buffered rather than sent immediately, so the receiver may see some delay. With "no", TCP_NODELAY is enabled and any data is sent immediately: good timeliness but lower efficiency. "no" is suggested; under high concurrency or heavy master-slave traffic, set it to yes
repl-disable-tcp-nodelay no
 
 
# Used by the Sentinel module (unstable; M-S cluster management and monitoring), which requires an additional config file. This is the slave's priority, default 100. When the master fails, Sentinel picks the slave with the lowest priority value (>0) from the slave list and promotes it to master. A slave with priority 0 is an "observer" and does not take part in the master election
slave-priority 100
 
# Limit on the number of simultaneous clients. Once the connection count exceeds this value, Redis accepts no further connection requests and clients trying to connect receive an error. The default is 10000; take the system's file-descriptor limit into account and do not set it needlessly large, wasting descriptors. The right number depends on the situation
maxclients 10000
 
# Maximum memory (in bytes) available to the Redis cache. The default 0 means "unlimited", ultimately bounded by the OS's physical memory (if physical memory runs short, swap may be used). Try not to exceed the machine's physical memory; from a performance and implementation standpoint, 3/4 of physical memory is a reasonable cap. This setting works together with "maxmemory-policy": when the data held in memory reaches maxmemory, the eviction policy is triggered, and once memory is exhausted any write operation (set, lpush, etc.) triggers it as well. In practice, keep the hardware (especially memory) of all Redis machines identical, and make sure master and slave use the same maxmemory and maxmemory-policy settings
maxmemory 0
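# maxmemory can also be adjusted at runtime; a sketch assuming a 4GB machine and
# the 3/4 guideline above (CONFIG SET accepts human-readable sizes such as 3gb):
#   /usr/local/redis/bin/redis-cli -p 7179 CONFIG SET maxmemory 3gb
#   /usr/local/redis/bin/redis-cli -p 7179 CONFIG GET maxmemory   -> 3221225472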
 
 
# Eviction policy when memory is exhausted; the default is "volatile-lru".
#volatile-lru -> apply the LRU (least recently used) algorithm to the "expired set". A key whose expiry was set with the "expire" command joins the "expired set", and expired/LRU keys are removed first. If removing the entire expired set still does not free enough memory, an OOM error results.
#allkeys-lru -> apply the LRU algorithm to all keys
#volatile-random -> remove randomly chosen K-V pairs from the "expired set" until there is enough memory; if removing the entire expired set still is not enough, an OOM error results
#allkeys-random -> remove randomly chosen K-V pairs from all keys until there is enough memory
#volatile-ttl -> remove the keys in the "expired set" closest to expiry (smallest TTL) first
#noeviction -> evict nothing; return an OOM error directly
# If expired data does not cause errors in the application and writes are heavy, "allkeys-lru" is recommended
maxmemory-policy volatile-lru
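# Note that the volatile-* policies only ever evict keys carrying an expiry; a key
# joins the "expired set" once a TTL is attached (the key names here are illustrative):
#   /usr/local/redis/bin/redis-cli -p 7179 SET session:42 v          -> no TTL, ignored by volatile-lru
#   /usr/local/redis/bin/redis-cli -p 7179 EXPIRE session:42 3600    -> now eligible for eviction
#   /usr/local/redis/bin/redis-cli -p 7179 SET cache:7 v EX 3600     -> same effect in one command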
 
# Default 5. The LRU and minimum-TTL policies above are not exact algorithms but approximations, so a sample size can be chosen for inspection
maxmemory-samples 5
 
# By default Redis backs the database image up to disk asynchronously in the background, but this backup is time-consuming and cannot run frequently, so Redis provides an additional, more efficient backup and disaster-recovery method. With append-only mode enabled, Redis appends every write request it receives to the appendonly.aof file; when Redis restarts, that file is replayed to restore the previous state. This makes appendonly.aof grow large, so Redis also supports the BGREWRITEAOF command to compact it. If data migration is infrequent, the recommended production practice is to disable the image snapshot, enable appendonly.aof, and optionally rewrite appendonly.aof once a day at a low-traffic time.
# Also, the master, which mainly handles writes, should use AOF; for slaves, which mainly serve reads, enable AOF on 1-2 of them and disable it on the rest
appendonly yes
 
# AOF file name, default appendonly.aof
appendfilename "appendonly.aof"
 
# How often the appendonly.aof file is synced. always means every write is synced; everysec accumulates writes and syncs once per second; no never calls fsync proactively and leaves it to the OS. Configure this based on the actual business scenario
appendfsync everysec
 
# During an AOF rewrite, whether to defer the file-sync policy for newly appended AOF records; the main consideration is disk IO load and request blocking time. The default is no, meaning "do not defer": new AOF records are still synced immediately
no-appendfsync-on-rewrite no
 
# When the AOF log grows beyond the given percentage it is rewritten; 0 disables automatic AOF rewriting. Rewriting keeps the AOF as small as possible while still preserving the most complete data
auto-aof-rewrite-percentage 100
# Minimum file size to trigger an AOF rewrite
auto-aof-rewrite-min-size 64mb
 
# Maximum time, in milliseconds, a lua script may run
lua-time-limit 5000
 
 
 
# Slow-operation log, in microseconds; 10000 equals 10 milliseconds. If an operation takes longer than this value, its command information is recorded (in memory, not in a file). The "operation time" excludes network IO; it only counts the in-memory execution time after the request reaches the server. "0" records every operation
slowlog-log-slower-than 10000
 
#" Slow operation log " The maximum number of entries retained ," record " It will be queued , If you go beyond that , The old record will be removed. Can be achieved by "SLOWLOG<subcommand> args" View slow logged information (SLOWLOG get 10,SLOWLOG reset) 
slowlog-max-len 128 
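# For example, the slow log kept under the two settings above can be inspected and
# cleared like this:
#   /usr/local/redis/bin/redis-cli -p 7179 SLOWLOG get 10   -> the 10 most recent slow entries
#   /usr/local/redis/bin/redis-cli -p 7179 SLOWLOG len      -> number of entries currently kept
#   /usr/local/redis/bin/redis-cli -p 7179 SLOWLOG reset    -> clear the log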
notify-keyspace-events"" 
 
# The hash type can be stored with either of two encodings: ziplist and hashtable. ziplist needs less space on disk (and in memory), and with little content its performance is almost the same as hashtable, so Redis uses ziplist for hashes by default. If the number of hash entries or the value length reaches the threshold, the hash is converted to hashtable.
# Maximum number of entries allowed in a ziplist; the default is 512, 128 is suggested
hash-max-ziplist-entries 512
# Maximum bytes per entry value in a ziplist; default 64, 1024 is suggested
hash-max-ziplist-value 64
 
# Same as above, for lists
list-max-ziplist-entries 512
list-max-ziplist-value 64
 
# Maximum number of entries allowed in an intset; if the threshold is reached, the intset is converted to hashtable
set-max-intset-entries 512
 
# zset is a sorted set with 2 encoding types: ziplist and skiplist. Because sorting consumes extra performance, a zset holding a lot of data is converted to skiplist.
zset-max-ziplist-entries 128
# Maximum bytes per entry value in a zset ziplist; default 64, 1024 is suggested
zset-max-ziplist-value 64
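# The effect of these thresholds can be observed with OBJECT ENCODING (the key name
# here is illustrative):
#   /usr/local/redis/bin/redis-cli -p 7179 HSET h field value
#   /usr/local/redis/bin/redis-cli -p 7179 OBJECT ENCODING h   -> "ziplist" while small
#   (once the entry count or a value length crosses the thresholds, it reports "hashtable")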
 
 
# Whether to enable incremental rehashing of the top-level data structures; if memory allows, enable it. Rehashing can greatly improve K-V access efficiency
activerehashing yes
 
# Client buffer control. In the interaction between client and server, each connection is associated with a buffer that queues response data waiting to be consumed by the client. If the client cannot consume responses in time, the buffer keeps piling up and puts memory pressure on the server. Once the data backlog in a buffer reaches the threshold, the connection is closed and the buffer is discarded.

# Buffer control classes: normal -> ordinary connections; slave -> connections to slaves; pubsub -> pub/sub connections, the type most prone to this problem, because the pub side may publish messages intensively while the sub side consumes too slowly. Directive format: "client-output-buffer-limit <class> <hard> <soft> <seconds>", where hard is the buffer cap (once the threshold is reached the connection is closed immediately) and soft is a "tolerance value" that works together with seconds: if the buffer exceeds soft and stays there for the full seconds, the connection is also closed immediately; if it exceeds soft but falls back below it before seconds elapse, the connection is kept. Setting both hard and soft to 0 disables buffer control. Usually hard is larger than soft.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
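# These limits can also be tuned without a restart; for instance, to give pubsub
# clients more headroom (the values here are illustrative, not a recommendation):
#   /usr/local/redis/bin/redis-cli -p 7179 CONFIG SET client-output-buffer-limit "pubsub 64mb 16mb 90"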
 
 
# How often the Redis server runs its background tasks; the default is 10. The larger the value, the more frequently (times per second) Redis runs these "cron" tasks, which include collecting expired keys and closing idle-timeout connections. The value must be greater than 0 and less than 500. Too large a value means more CPU spent on these cycles, with background tasks polled more frequently; too small a value makes Redis less "memory-sensitive" (expired keys linger longer). The default is recommended
hz 10
 
# While a child process is rewriting the AOF file, if aof-rewrite-incremental-fsync is yes, the file is synced every 32MB of data written; committing a large rewrite to disk in small chunks avoids latency spikes
aof-rewrite-incremental-fsync yes

# Load extra configuration files
# include /path/to/local.conf
# include /path/to/other.conf 
