Annotated redis.conf configuration file, translated (based on Redis 2.4)

  • 2020-05-10 23:09:13
  • OfStack


# Redis sample configuration file
#  Note on units: when you need to set a memory size, you can use common formats such as 1k, 5GB, 4M:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
#  Units are case insensitive, so 1GB, 1Gb and 1gB are all written the same way.
#  By default Redis does not run as a daemon. Set this to "yes" to run it as a daemon.
#  Note that when daemonized, Redis writes its process ID to /var/run/redis.pid.
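As a quick illustration of the unit table above, here is a small Python sketch (not Redis's actual C parser) that converts these suffixes into byte counts, treating `k`/`m`/`g` as powers of 1000 and `kb`/`mb`/`gb` as powers of 1024:

```python
# Sketch of Redis-style memory size parsing (illustration only).
UNITS = {
    "": 1,
    "k": 1000, "kb": 1024,
    "m": 1000 ** 2, "mb": 1024 ** 2,
    "g": 1000 ** 3, "gb": 1024 ** 3,
}

def parse_memory(value: str) -> int:
    """Parse strings like '1k', '5GB', '4M' into bytes (case insensitive)."""
    value = value.strip().lower()
    digits = value.rstrip("kmgb")   # split the number from the unit suffix
    suffix = value[len(digits):]
    return int(digits) * UNITS[suffix]
```

So `parse_memory("1k")` gives 1000 while `parse_memory("1kb")` gives 1024, matching the table.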
daemonize no
#  When running as a daemon, Redis writes its process ID to /var/run/redis.pid by default. You can change the path here.
pidfile /var/run/redis.pid
#  The port to accept connections on; the default is 6379.
#  If the port is set to 0, Redis will not listen on a TCP socket.
port 6379
#  You can bind a single interface if you want; if this is not set, connections on all interfaces will be accepted.
#
# bind 127.0.0.1
#  Specify the path of the unix socket to listen for connections on. There is no default, so if you don't specify one, Redis will not listen on a unix socket.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755
#  Close a connection after a client has been idle for this many seconds (0 to disable, i.e. never close).
timeout 0
#  Set the server verbosity level.
#  Possible values:
# debug   (a lot of information, useful for development/testing)
# verbose (many condensed useful details, but not as noisy as debug)
# notice  (a moderate amount of information, basically what you need in production)
# warning (only very important/critical messages are logged)
loglevel verbose
#  Specify the log file name. You can also use "stdout" to force Redis to write log messages to standard output.
#  Note: if Redis runs as a daemon and you log to standard output, log messages will be sent to /dev/null.
logfile stdout
#  To use the system logger, just set "syslog-enabled" to "yes".
#  Then you can set the other syslog parameters as needed.
# syslog-enabled no
#  Specify the syslog identity.
# syslog-ident redis
#  Specify the syslog facility. It must be USER or one of LOCAL0 ~ LOCAL7.
# syslog-facility local0
#  Set the number of databases. The default database is DB 0; you can select a different database per connection with SELECT <dbid>, where dbid is between 0 and 'databases' - 1.
databases 16
################################  Snapshotting  #################################
#
#  Save the dataset to disk:
#
#   save <seconds> <changes>
#
#    The database is written to disk after both the given number of seconds has elapsed and the given number of write operations has occurred.
#
#    The examples below will save the dataset:
#   after 900 seconds (15 minutes), if at least 1 key changed
#   after 300 seconds (5 minutes), if at least 10 keys changed
#   after 60 seconds, if at least 10000 keys changed
#
#    Note: comment out all the "save" lines if you don't want to write to disk at all.
save 900 1
save 300 10
save 60 10000
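The save rules above combine with OR semantics: a snapshot is triggered as soon as any single `save <seconds> <changes>` rule is satisfied. A minimal Python sketch of that decision (illustration only, not Redis's implementation):

```python
# The three default save rules from the configuration above.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(elapsed_seconds: int, changes: int) -> bool:
    """True if ANY rule matches: enough time elapsed AND enough writes happened
    since the last successful save."""
    return any(elapsed_seconds >= secs and changes >= min_changes
               for secs, min_changes in SAVE_RULES)
```

For example, one key change does not trigger a save after 5 minutes, but ten changes do.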
#  Whether to compress string objects with LZF when dumping the database to .rdb files.
#  The default is "yes", as it is almost always a win.
#  If you want to save CPU you can set this to "no", but the data file will be larger if you have compressible keys or values.
rdbcompression yes
#  The filename of the database dump.
dbfilename dump.rdb
#  The working directory.
#
#  The database will be written inside this directory, with the filename specified above by "dbfilename".
#
#  The append-only file will also be created inside this directory.
#
#  Note that you must specify a directory here, not a file name.
dir ./
#################################  Replication  #################################
#
#  Master-slave replication. Use slaveof to make a Redis instance a copy of another server.
#  Note that the copy of the data is local to the slave: the slave can have its own database file, bound IP, and listening port.
#
# slaveof <masterip> <masterport>
#  If the master is password protected (via the "requirepass" option below), the slave must authenticate before replication can start, otherwise its replication requests will be rejected.
#
# masterauth <master-password>
#  When a slave loses its connection to the master, or while replication is still in progress, the slave can behave in two ways:
#
# 1) If slave-serve-stale-data is set to "yes" (the default), the slave keeps replying to client requests, possibly with out-of-date data, or with empty values if the dataset has not yet been received.
# 2) If slave-serve-stale-data is set to "no", the slave replies with the error "SYNC with master in progress" to all requests except INFO and SLAVEOF.
#
slave-serve-stale-data yes
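The "no" behavior described above can be modeled with a toy dispatcher (a sketch under assumed names, not Redis source): while the initial SYNC is in progress, only INFO and SLAVEOF get through.

```python
def slave_reply(command: str, syncing: bool, serve_stale_data: bool) -> str:
    """Toy model of slave-serve-stale-data: reject most commands during SYNC
    when the option is 'no'."""
    cmd = command.upper()
    if syncing and not serve_stale_data and cmd not in ("INFO", "SLAVEOF"):
        return "-ERR SYNC with master in progress"
    return "+OK"  # placeholder for a normal (possibly stale) reply
```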
#  Slaves send PINGs to the server at the specified interval.
#  The interval can be set with repl_ping_slave_period.
#  The default is 10 seconds.
#
# repl-ping-slave-period 10
#  The following option sets the timeout for bulk data I/O, for data requests to the master, and for PING responses.
#  The default value is 60 seconds.
#
#  It is important to make sure this value is greater than repl-ping-slave-period, otherwise a timeout between master and slave will be detected sooner than expected.
#
# repl-timeout 60
##################################  Security  ###################################
#  Require clients to authenticate with a password before processing any other command.
#  This is useful in environments where you don't trust others with access to the host.
#
#  It is commented out by default for backward compatibility, and because most people do not need authentication (e.g. they run their own server).
#
#  Warning: since Redis is very fast, an attacker can try up to 150k passwords per second to crack the password.
#  That means you should use a very strong password, otherwise it will be too easy to break.
#
# requirepass foobared
#  Command renaming.
#
#  In a shared environment you can change the name of dangerous commands. For example, you could rename CONFIG to something hard to guess, so that you can still use it while others cannot do anything harmful with it.
#
#  For example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
#  You can even completely disable a command by renaming it to an empty string:
#
# rename-command CONFIG ""
###################################  Limits  ####################################
#
#  Set the maximum number of simultaneously connected clients.
#  By default there is no limit other than the number of file descriptors the Redis process can open.
#  The special value "0" means no limit.
#  Once the limit is reached, Redis closes all new connections and sends the error "max number of clients reached".
#
# maxclients 128
#  Don't use more memory than the specified limit. Once the limit is reached, Redis removes keys according to the selected eviction policy (see maxmemory-policy).
#
#  If Redis cannot remove keys according to the policy, or if the policy is set to "noeviction", Redis replies with an out-of-memory error to commands that would use more memory, such as SET, LPUSH and so on, but keeps replying normally to read-only commands like GET.
#
#  This option is useful when using Redis as an LRU cache, or when setting a hard memory limit for an instance (using the "noeviction" policy).
#
#  Warning: when a number of slaves are attached to an instance at its memory limit, the memory needed for the output buffers that feed the slaves is not counted in the used memory.
#  So requesting a removed key will not trigger network problem / resynchronization events; instead the slaves will receive a stream of delete operations until the database is empty.
#
#  In short, if you have slaves attached to a master, it is suggested that you set a slightly lower memory limit on the master, so that there is some free system RAM for the slave output buffers.
#  (This is not needed if the policy is "noeviction".)
#
# maxmemory <bytes>
#  Eviction policy: how Redis removes keys when the memory limit is reached. You can select among the following policies:
#
# volatile-lru    -> remove keys with an expire set, using an LRU algorithm
# allkeys-lru     -> remove any key, using an LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random  -> remove a random key, any key
# volatile-ttl    -> remove the key with the nearest expire time (smallest TTL)
# noeviction      -> don't remove anything, just return an error on write operations
#
#  Note: with all of these policies, if Redis cannot find a suitable key to remove, it returns an error on write operations.
#
#        The write commands involved are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
#  The default is:
#
# maxmemory-policy volatile-lru
#  The LRU and minimal-TTL algorithms are not exact but approximated (to save memory), so their accuracy can be tuned via sampling.
#  For example, by default Redis checks 3 keys and picks the least recently used one; you can change the sample size with the following configuration directive.
#
# maxmemory-samples 3
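The sampled-LRU idea above can be sketched in a few lines of Python (a hypothetical model, not Redis's C implementation): instead of tracking exact access order, pick maxmemory-samples random keys and evict the one that has been idle the longest.

```python
import random

def pick_eviction_victim(last_access: dict, samples: int = 3) -> str:
    """Approximated LRU: sample up to `samples` random keys and return the
    least recently used one. `last_access` maps key -> last access timestamp
    (higher means more recent)."""
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    return min(candidates, key=last_access.get)
```

With a larger sample size the choice approaches true LRU, at the cost of more work per eviction.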
##############################  Append only mode  ###############################
#  By default Redis dumps the dataset to disk asynchronously. In that case, when Redis crashes, the most recent data can be lost.
#  If you don't want to lose a single write, use append only mode: once enabled, Redis appends every received write to the appendonly.aof file.
#  Each time it starts, Redis reads this file back into memory.
#
#  Note that the asynchronously dumped database file and the append only file can coexist (though you have to comment out all the "save" lines above to disable the dump mechanism).
#  If append only mode is enabled, Redis will load the log file at startup and ignore the dump.rdb file.
#
#  Important: check BGREWRITEAOF to learn how to rewrite the log file in the background when it gets too big.
appendonly no
#  The name of the append only file (default: "appendonly.aof")
# appendfilename appendonly.aof
# fsync() asks the operating system to actually write data to disk immediately instead of waiting for more data.
#  Some operating systems will really flush data to disk at once; others will dawdle a bit, but try to do it as soon as possible.
#
# Redis supports 3 different modes:
#
# no       : don't fsync, just let the operating system flush when it wants. Faster.
# always   : fsync after every write to the append only log. Slow, but safest.
# everysec : fsync once per second. A compromise.
#
#  The default "everysec" usually strikes a good balance between speed and data safety.
#  If you really understand what this implies, you can set it to "no" for better performance (if data is lost, you are left with a not-so-recent snapshot);
#  or, on the contrary, choose "always" to sacrifice speed for data safety and integrity.
#
#  If in doubt, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
#  If the AOF fsync policy is set to "always" or "everysec", a background saving process (a background save, or an AOF log rewrite) performs a lot of disk I/O.
#  With some Linux configurations Redis may block too long on the fsync() call.
#  Note that there is currently no perfect fix for this, as even performing fsync() in a different thread will block our synchronous write(2) calls.
#
#  To mitigate this problem, use the following option: it prevents fsync() from being called in the main process while a BGSAVE or BGREWRITEAOF is in progress.
#
#  This means that while a child process is saving, Redis is in an "unsynchronized" state.
#  In practical terms, in the worst case you may lose up to 30 seconds of log data (with the default Linux settings).
#
#  If you have latency problems, set this to "yes"; otherwise leave it as "no", which is the safest choice for durable data.
no-appendfsync-on-rewrite no
#  Automatic rewrite of the AOF file.
#
#  Redis can automatically rewrite the AOF log file, via BGREWRITEAOF, when its size grows by the specified percentage.
#
#  How it works: Redis remembers the size of the AOF log after the last rewrite (or, if no rewrite happened since restart, the size of the AOF file at startup),
#            and compares that base size to the current size. If the current size exceeds the base by the specified percentage, the rewrite is triggered.
#
#  You also need to specify a minimal size for the log to be rewritten; this avoids rewriting the file when the specified percentage is reached but the file is still quite small.
#
#  Specify a percentage of 0 to disable the automatic AOF rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
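The trigger condition described above can be sketched as a single predicate (an illustration of the stated rule, not the actual C code): rewrite only when the file is at least the minimum size AND has grown past the percentage threshold over the recorded base size.

```python
def should_rewrite_aof(current_size: int, base_size: int,
                       growth_pct: int = 100,
                       min_size: int = 64 * 1024 * 1024) -> bool:
    """Model of auto-aof-rewrite: trigger BGREWRITEAOF when the AOF has grown
    by growth_pct percent over base_size and is at least min_size bytes."""
    if growth_pct == 0 or current_size < min_size:
        return False  # feature disabled, or file still too small
    growth = (current_size - base_size) * 100 // base_size
    return growth >= growth_pct
```

With the defaults above (100% / 64mb), a 64 MB base would be rewritten once the file passes roughly 128 MB.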
##################################  Slow log  ###################################
#  The Redis slow log records queries that exceed a specified execution time. The execution time does not include I/O operations
#  such as talking to the client or sending the reply; it only counts the time actually needed to run the command (this is the only stage of command execution where the thread is blocked and cannot serve other requests in the meantime).
#
#  You can configure the slow log with two parameters: one is the execution time, in microseconds, above which a command is logged.
#  The other is the maximum length of the slow log. When a new command is logged, the oldest entry is removed.
#
#  The time below is expressed in microseconds, so 1000000 is one second. Note that a negative value disables the slow log, while a value of 0 forces every command to be logged.
slowlog-log-slower-than 10000
#  There is no limit to this length, as long as there is enough memory. You can reclaim the memory used by the slow log with SLOWLOG RESET. (Yes, the log lives in memory.)
slowlog-max-len 128
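The two parameters above combine into a simple bounded log. A minimal sketch of the semantics (hypothetical model, not Redis source):

```python
from collections import deque

class SlowLog:
    """Model of the Redis slow log: record commands slower than a microsecond
    threshold, keep at most max_len entries, dropping the oldest."""
    def __init__(self, slower_than_us: int = 10000, max_len: int = 128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)  # oldest entries fall off the left

    def record(self, command: str, duration_us: int) -> None:
        # A negative threshold disables logging; 0 logs every command.
        if self.slower_than_us >= 0 and duration_us >= self.slower_than_us:
            self.entries.append((command, duration_us))
```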
################################  Virtual memory  ###############################
###  WARNING! Virtual memory is deprecated in Redis 2.4.
###  Using virtual memory is strongly discouraged!!
#  Virtual memory allows Redis to work with datasets bigger than the available memory.
#  To do so, frequently accessed keys are kept in memory, while rarely used keys are swapped to a file on disk, much like the operating system does with memory pages.
#
#  To use virtual memory, just set "vm-enabled" to "yes", and set the following three virtual memory parameters as needed.
vm-enabled no
# vm-enabled yes
#  This is the path of the swap file. As you can probably guess, swap files cannot be shared between multiple Redis instances, so make sure each instance uses its own swap file.
#
#  The best kind of storage for the swap file (which is accessed randomly) is a Solid State Drive (SSD).
#
#  *** WARNING *** If you are using shared hosting, putting the swap file in the default /tmp is not secure.
#  Create a directory writable by the Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap
# "vm-max-memory" configures the maximum amount of memory available to Redis before it starts swapping.
#  Everything that exceeds this limit will be put in the swap file, as long as there is room for it.
#
#  Setting "vm-max-memory" to 0 means the system will use all the memory it can.
#  This default is not a good idea; it simply uses up all the memory available.
#  Instead, set it to something like 60%-80% of your free memory.
vm-max-memory 0
#  Redis swap files are split into data pages.
#  A stored object can span multiple contiguous pages, but one page cannot be shared by multiple objects.
#  So if your pages are too big, small objects will waste a lot of space.
#  If your pages are too small, there will be less swap space available for storage (given the same number of pages).
#
#  If you use a lot of small objects, a page size of 64 or 32 bytes is recommended.
#  If you use a lot of large objects, use a bigger page size.
#  If unsure, use the default :)
vm-page-size 32
#  The total number of data pages in the swap file.
#  The page table in memory (recording used/free pages on disk) consumes 1 byte of RAM for every 8 data pages on disk.
#
#  Total swap size = vm-page-size * vm-pages
#
#  With the default 32-byte page size and 134217728 pages, the swap file on disk will take 4 GB, and the page table in memory will consume 16 MB of RAM.
#
#  It is best to set the smallest value sufficient for your application. The default value below is large for most use cases.
vm-pages 134217728
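The arithmetic above checks out, as a couple of lines of Python show:

```python
# Verifying the default virtual memory sizing stated above.
vm_page_size = 32          # bytes per data page
vm_pages = 134217728       # number of data pages

swap_file_bytes = vm_page_size * vm_pages   # total swap file size on disk
page_table_bytes = vm_pages // 8            # in-memory page table: 1 byte per 8 pages
```

`swap_file_bytes` comes out to exactly 4 GB and `page_table_bytes` to exactly 16 MB.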
#  The number of virtual memory I/O threads that can run at the same time.
#  These threads read and write data from/to the swap file and handle the encoding/decoding of data between memory and disk.
#  More threads can improve throughput to some extent, although I/O itself is bound by the physical device, and more threads will not speed up a single read or write operation.
#
#  The special value 0 turns off threaded I/O and enables the blocking virtual memory implementation.
vm-max-threads 4
###############################  Advanced configuration  ###############################
#  Hashes are encoded in a special, memory-efficient way when they have a small number of entries and the biggest entry does not exceed a given threshold.
#  You can configure these limits with the following options:
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
#  Similarly to hashes, lists with few elements can be encoded in another way that saves a lot of space.
#  This special encoding is only used when the list stays within the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64
#  Sets have a special encoding in one case: when the set is composed entirely of strings that happen to be base-10 integers within the 64-bit range.
#  The following option limits the maximum size of the set for which this special encoding is used.
set-max-intset-entries 512
#  Similarly to the cases above, sorted sets can also use a special encoding that saves a lot of space.
#  This encoding is only used when the length and elements of the sorted set stay within the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
#  Active rehashing uses 1 millisecond out of every 100 CPU milliseconds to rehash the main Redis hash table (the top-level key-value mapping table).
#  The hash table implementation Redis uses (see dict.c) performs lazy rehashing: the more operations you run on a rehashing table, the more rehashing steps are performed;
#  conversely, if the server is idle, the rehashing never completes and some extra memory is used by the hash table.
#
#  The default is to perform this active rehashing 10 times per second, rehashing the main dictionary and freeing memory as soon as possible.
#
#  Advice:
#  If you are worried about latency, use "activerehashing no": an occasional 2-millisecond delay on requests is not acceptable for you.
#  If you don't care much about latency and want to free memory as soon as possible, use "activerehashing yes".
activerehashing yes
##################################  Includes  ###################################
#  Include one or more other configuration files.
#  This is useful when you have a standard configuration template but each Redis server needs a few customizations.
#  Included files can include other files, so use this wisely.
#
# include /path/to/local.conf
# include /path/to/other.conf

