nginx configuration file nginx.conf

  • 2020-05-06 12:16:12
  • OfStack

By default, the nginx configuration file lives in the conf directory under the nginx installation directory, and the main configuration file is nginx.conf. Below is a detailed walkthrough of nginx.conf. If anything is unclear, you can join our QQ group to discuss it.


###### The Nginx configuration file nginx.conf, annotated ######
 
# Define the user and group that the Nginx worker processes run as
user www www;
 
# Number of nginx worker processes, recommended to equal the total number of CPU cores.
worker_processes 8;
 
# Global error log definition: file and level [ debug | info | notice | warn | error | crit ]
error_log /usr/local/nginx/logs/error.log info;
 
# File that stores the process PID
pid /usr/local/nginx/logs/nginx.pid;
 
# Specifies the maximum number of file descriptors a process can open:
# Working mode and connection limits
# This directive sets the maximum number of file descriptors an nginx process may open. The theoretical value is the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly, so it is best to simply use the value of ulimit -n.
# On Linux 2.6 kernels the number of open files is 65535, so worker_rlimit_nofile should be set to 65535 accordingly.
# Because nginx does not schedule requests to processes evenly, if you fill in 10240 and total concurrency reaches 30,000-40,000, some process may exceed 10240 descriptors and a 502 error is returned.
worker_rlimit_nofile 65535;
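Since the comments above recommend matching worker_rlimit_nofile to the system's open-file limit, a quick shell check can confirm what that limit actually is (a sketch; the values shown depend on your system):

```shell
# Inspect the per-process open-file limits that worker_rlimit_nofile should match.
soft_limit=$(ulimit -Sn)   # soft limit for the current shell
hard_limit=$(ulimit -Hn)   # hard limit (ceiling for the soft limit)
echo "soft=${soft_limit} hard=${hard_limit}"
```

If the soft limit is lower than the value you plan to configure, raise it in /etc/security/limits.conf (or the service unit) before expecting nginx to honor it.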
 
 
events
{
 # The event model. Options: [ kqueue | rtsig | epoll | /dev/poll | select | poll ]
 # epoll is the high-performance network I/O model for Linux kernels 2.6 and later; on Linux use epoll, and if running on FreeBSD use the kqueue model.
 # Additional notes:
 # Like Apache, nginx has different event models for different operating systems:
 #A) Standard event models
 #   select and poll are the standard event models; if the current system has nothing more efficient, nginx chooses select or poll.
 #B) Efficient event models
 #   kqueue: FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X. Using kqueue on dual-processor MacOS X systems can cause a kernel crash.
 #   epoll: Linux kernel 2.6 and later.
 #   /dev/poll: Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.
 #   eventport: Solaris 10. A security patch must be installed to prevent kernel crashes.
 use epoll;
 
 # Maximum number of connections per worker process (max clients = worker_connections * worker_processes)
 # Tune this to the hardware together with the worker process count above: make it as large as possible without driving the CPU to 100%. It is the maximum number of connections each process may handle; in theory, the server's maximum is worker_connections times the number of workers.
 worker_connections 65535;
 
 #keepalive timeout, in seconds.
 keepalive_timeout 60;
 
 # Buffer size for client request headers. This can be set according to your system's page size: a request header normally does not exceed 1k, but since system pages are generally at least 1k, the page size is used here.
 # The page size can be obtained with the command getconf PAGESIZE:
 #[root@web001 ~]# getconf PAGESIZE
 #4096
 # client_header_buffer_size can exceed 4k, but the value must be an integral multiple of the system page size.
 client_header_buffer_size 4k;
 
 # Cache of open file handles; not enabled by default. max specifies the number of cache entries (recommended to match the number of open files); inactive specifies how long a file may go unrequested before its cache entry is deleted.
 open_file_cache max=65535 inactive=60s;
 
 # How often to check the cached entries for validity.
 # Syntax: open_file_cache_valid time. Default: open_file_cache_valid 60. Context: http, server, location. This directive specifies how often to validate the information cached by open_file_cache.
 open_file_cache_valid 80s;
 
 # The minimum number of times a file must be used within the inactive window of the open_file_cache directive. Above that count the file descriptor stays open in the cache; as in the example above, a file not used within the inactive period is removed.
 # Syntax: open_file_cache_min_uses number. Default: open_file_cache_min_uses 1. Context: http, server, location. This directive sets the minimum use count within open_file_cache's inactive window; with a larger value, file descriptors remain open in the cache only for frequently used files.
 open_file_cache_min_uses 1;
 
 # Syntax: open_file_cache_errors on | off. Default: open_file_cache_errors off. Context: http, server, location. This directive specifies whether errors encountered while looking up a file are also cached.
 open_file_cache_errors on;
}
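The max-clients formula in the comments above is plain arithmetic; with the values used in this file:

```shell
# max clients = worker_processes * worker_connections;
# for a reverse proxy each client also costs a backend connection,
# so the effective capacity is roughly half of that.
worker_processes=8
worker_connections=65535
max_clients=$(( worker_processes * worker_connections ))
echo "$max_clients"          # 524280
echo $(( max_clients / 2 ))  # 262140
```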
 
 
 
# Set up the HTTP server; its reverse proxy function provides load-balancing support
http
{
 # File extension and file type mapping table 
 include mime.types;
 
 # Default file type 
 default_type application/octet-stream;
 
 # The default encoding 
 #charset utf-8;
 
 # Hash table size for server names
 # The hash tables storing server names are controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size parameter is always a multiple of the processor cache line size, which makes faster lookups possible by reducing the number of memory accesses. If hash bucket size equals the processor cache line size, then the worst-case key lookup takes only 2 memory accesses: the first determines the bucket's address, the second finds the key inside the bucket. So if nginx reports that hash max size or hash bucket size must be increased, increase hash max size first.
 server_names_hash_bucket_size 128;
 
 # Buffer size for client request headers. Can be set according to your system page size: a request header normally does not exceed 1k, but since system pages are generally at least 1k, the page size is used here. The page size can be obtained with getconf PAGESIZE.
 client_header_buffer_size 32k;
 
 # Buffers for oversized client request headers. nginx first reads the header using the client_header_buffer_size buffer; if the header is too big, it uses large_client_header_buffers instead.
 large_client_header_buffers 4 64k;
 
 # Maximum size of a file upload accepted by nginx
 client_max_body_size 8m;
 
 # Enable efficient file transfer mode. The sendfile directive specifies whether nginx calls the sendfile() function (zero copy) to output files; for normal applications set it to on. For disk-I/O-heavy workloads such as download services, set it to off to balance disk and network I/O processing speed and reduce system load. Note: if images do not display properly, change this to off.
 sendfile on;
 
 # Enable directory listings; suitable for download servers. Disabled by default.
 autoindex on;
 
 # Enables or disables the socket option TCP_CORK; only used when sendfile is on
 tcp_nopush on;
  
 tcp_nodelay on;
 
 # Keep-alive timeout, in seconds
 keepalive_timeout 120;
 
 #FastCGI parameters intended to improve site performance: reduce resource usage and increase access speed. The parameter names below can be taken literally.
 fastcgi_connect_timeout 300;
 fastcgi_send_timeout 300;
 fastcgi_read_timeout 300;
 fastcgi_buffer_size 64k;
 fastcgi_buffers 4 64k;
 fastcgi_busy_buffers_size 128k;
 fastcgi_temp_file_write_size 128k;
 
 #gzip Module Settings 
 gzip on; # enable gzip compressed output
 gzip_min_length 1k; # minimum response size to compress
 gzip_buffers 4 16k; # compression buffers
 gzip_http_version 1.0; # protocol version to compress for (default 1.1; if squid 2.5 sits in front, use 1.0)
 gzip_comp_level 2; # compression level
 gzip_types text/plain application/x-javascript text/css application/xml; # MIME types to compress; text/html is always included by default, so it need not be listed. Listing it does no harm but produces a warning.
 gzip_vary on;
 
 # To enable limiting the number of connections per IP, you need:
 #limit_zone crawler $binary_remote_addr 10m;
 
 
 
 # Load balancing configuration 
 upstream piao.jd.com {
  
  #upstream load balancing; weight is the weight and can be set according to the machine's specs. The higher the weight, the greater the probability of being assigned requests.
  server 192.168.80.121:80 weight=3;
  server 192.168.80.122:80 weight=2;
  server 192.168.80.123:80 weight=3;
 
  #nginx's upstream currently supports the following distribution methods:
  #1. Round robin (default)
  # Each request is assigned to a different backend server in turn; if a backend goes down, it is removed automatically.
  #2. weight
  # Specifies the polling probability; weight is proportional to the access ratio, used when backend server performance is uneven.
  # Such as: 
  #upstream bakend {
  # server 192.168.0.14 weight=10;
  # server 192.168.0.15 weight=10;
  #}
  #3. ip_hash
  # Each request is assigned by the hash of the client IP, so each visitor consistently reaches the same backend server; this can solve session-affinity problems.
  # Such as: 
  #upstream bakend {
  # ip_hash;
  # server 192.168.0.14:88;
  # server 192.168.0.15:80;
  #}
  #4. fair (third party)
  # Requests are allocated according to the backend servers' response times; shorter response times get priority.
  #upstream backend {
  # server server1;
  # server server2;
  # fair;
  #}
  #5. url_hash (third party)
  # Requests are distributed by the hash of the requested URL, so each URL is directed to the same backend server; most effective when the backend is a cache.
  # Example: add a hash statement to the upstream block; the server statements may not include weight or other parameters; hash_method selects the hash algorithm to use.
  #upstream backend {
  # server squid1:3128;
  # server squid2:3128;
  # hash $request_uri;
  # hash_method crc32;
  #}
 
  #tips:
  #upstream bakend{ # IPs and states of the load-balanced devices
  # ip_hash;
  # server 127.0.0.1:9090 down;
  # server 127.0.0.1:8080 weight=2;
  # server 127.0.0.1:6060;
  # server 127.0.0.1:7070 backup;
  #}
  # In each server that needs load balancing, add: proxy_pass http://bakend/;
 
  # The state of each backend can be set to:
  #1. down: this server temporarily does not participate in the load
  #2. weight: the larger the weight, the larger the share of the load
  #3. max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned
  #4. fail_timeout: how long to pause the server after max_fails failures
  #5. backup: requests go to backup machines only when all the non-backup machines are down or busy, so these machines carry the lightest load
 
  #nginx supports configuring multiple groups of load balancers at the same time, for different servers to use.
  #client_body_in_file_only: when set to on, the data of a client POST is recorded to a file, for debugging
  #client_body_temp_path: sets the directory for those record files; up to 3 levels of subdirectories can be configured
  #location matches URLs; it can redirect or hand off to a new proxy for load balancing
 }
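The tips above can be sketched as a complete pairing of an upstream group with the proxy_pass that consumes it. This is a minimal illustration, not part of the original file; the listen port and server_name are placeholders:

```nginx
upstream bakend {
    ip_hash;
    server 127.0.0.1:9090 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;
}

server {
    listen 80;
    server_name example.local;  # placeholder

    location / {
        # hand requests to the load-balanced group defined above
        proxy_pass http://bakend/;
    }
}
```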
  
  
  
 # Virtual host configuration 
 server
 {
  # Listen on port 
  listen 80;
 
  # Domain names can be multiple, separated by Spaces 
  server_name www.jd.com jd.com;
  index index.html index.htm index.php;
  root /data/www/jd;
 
  # Load balancing for ******
  location ~ .*\.(php|php5)?$
  {
   fastcgi_pass 127.0.0.1:9000;
   fastcgi_index index.php;
   include fastcgi.conf;
  }
   
  # Image cache time setting 
  location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
  {
   expires 10d;
  }
   
  #JS and CSS Cache time Settings 
  location ~ .*\.(js|css)?$
  {
   expires 1h;
  }
   
  # Log format setting 
  #$remote_addr and $http_x_forwarded_for: record the client IP address;
  #$remote_user: records the client user name;
  #$time_local: records the access time and time zone;
  #$request: records the request URL and HTTP protocol;
  #$status: records the request status; success is 200;
  #$body_bytes_sent: records the size of the response body sent to the client;
  #$http_referer: records the page the visit was linked from;
  #$http_user_agent: records information about the client's browser;
  # The web server is usually placed behind a reverse proxy, so the client's IP address cannot be obtained directly; the address obtained via $remote_addr is the reverse proxy server's IP. In the forwarded HTTP headers, the reverse proxy can add x_forwarded_for information to record the original client's IP address and the server address the client originally requested.
  log_format access '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" $http_x_forwarded_for';
   
  # Defines the access log for this virtual host 
  access_log /usr/local/nginx/logs/host.access.log main;
  access_log /usr/local/nginx/logs/host.access.404.log log404;
   
  # Enable reverse proxying for "/"
  location / {
   proxy_pass http://127.0.0.1:88;
   proxy_redirect off;
   proxy_set_header X-Real-IP $remote_addr;
    
    # The backend web server can obtain the user's real IP via X-Forwarded-For
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    
    # Some optional reverse proxy settings follow.
   proxy_set_header Host $host;
 
   # Maximum number of bytes per file allowed for client requests 
   client_max_body_size 10m;
 
    # Maximum number of bytes of the client request body buffered by the proxy.
    # If you set it to a larger number such as 256k, then submitting any image smaller than 256k works fine whether the browser is Firefox or IE. If you comment out this directive and use the default client_body_buffer_size, which is twice the operating system page size (8k or 16k), problems arise:
    # with either Firefox 4.0 or IE 8.0, submitting a somewhat larger image of around 200k returns a 500 Internal Server Error.
   client_body_buffer_size 128k;
 
    # Makes nginx intercept responses with an HTTP status code of 400 or higher.
   proxy_intercept_errors on;
 
    # Timeout for connecting to the backend server: initiate the handshake and wait for a response
    #nginx connection timeout with the backend server (proxy connect timeout)
   proxy_connect_timeout 90;
 
    # Backend server data return time (proxy send timeout)
    # The backend server must finish sending all data within this time
   proxy_send_timeout 90;
 
    # Backend server response time after a successful connection (proxy read timeout)
    # Time to wait for the backend's response after a successful connection; the request is already queued for processing (i.e., the time the backend takes to process the request)
   proxy_read_timeout 90;
 
    # Buffer size on the proxy server (nginx) for the user's header information
    # Sets the buffer size for the first part of the response read from the proxied server; that part typically contains a small response header. By default this value equals the size of one buffer set by the proxy_buffers directive, but it can be set smaller
   proxy_buffer_size 4k;
 
    #proxy_buffers: set with the average page size at or below 32k
    # Sets the number and size of the buffers used to read the response from the proxied server; the default is one page, 4k or 8k depending on the operating system
   proxy_buffers 4 32k;
 
    # Buffer size under heavy load (proxy_buffers * 2)
   proxy_busy_buffers_size 64k;
 
    # Limits the size of data written to proxy_temp_path per write, preventing a worker process from blocking too long while passing a file
    # Responses larger than the buffers are spooled to the temp folder while being transferred from the upstream server
   proxy_temp_file_write_size 64k;
  }
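Since $proxy_add_x_forwarded_for appends one address per proxy hop, a backend behind this configuration can recover the original client as the left-most entry of the header. A minimal shell sketch with a hypothetical header value:

```shell
# X-Forwarded-For accumulates one address per hop, so the left-most
# entry is the original client. Example value is hypothetical:
xff="203.0.113.7, 10.0.0.5, 127.0.0.1"
client_ip=${xff%%,*}   # strip everything after the first comma
echo "$client_ip"      # 203.0.113.7
```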
   
   
  # Location for viewing Nginx status
  location /NginxStatus {
   stub_status on;
   access_log on;
   auth_basic "NginxStatus";
   auth_basic_user_file conf/htpasswd;
   #The htpasswd file contents can be generated with the htpasswd tool provided by Apache.
  }
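As the comment above notes, the password file can be produced with Apache's htpasswd tool; where that tool is not installed, openssl can stand in. This is a sketch; the user name "status" and the password "secret" are placeholders:

```shell
# Create a basic-auth password file for the NginxStatus location.
# Roughly equivalent to: htpasswd -bc ./htpasswd status secret
printf 'status:%s\n' "$(openssl passwd -apr1 secret)" > ./htpasswd
grep -c '^status:' ./htpasswd   # 1
```

Point auth_basic_user_file at wherever you write this file (the config above expects it under the conf directory).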
   
  # Local dynamic/static separation reverse proxy configuration
  # All jsp pages are handed to tomcat or resin for processing
  location ~ \.(jsp|jspx|do)?$ {
   proxy_set_header Host $host;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_pass http://127.0.0.1:8080;
  }
   
  # All static files are read directly by nginx, without going through tomcat or resin
  location ~ .*\.(htm|html|gif|jpg|jpeg|png|bmp|swf|ioc|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$
  {
   expires 15d; 
  }
   
  location ~ .*\.(js|css)?$
  {
   expires 1h;
  }
 }
}
###### The Nginx configuration file nginx.conf, annotated ######

After reviewing information online, my understanding of the Nginx configuration file nginx.conf is as follows:


# Define the user and group nginx runs as
user www www;

# Number of worker processes, recommended to equal the total number of CPU cores
worker_processes 8;

# Global error log definition: file and level [ debug | info | notice | warn | error | crit ]
error_log /var/log/nginx/error.log info;

# Process PID file
pid /var/run/nginx.pid;

# Maximum number of file descriptors a process may open. The theoretical value is the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly, so it is recommended to use the value of ulimit -n.
worker_rlimit_nofile 65535;

# Working mode and connection limits
events {
 # Event model: [ kqueue | rtsig | epoll | /dev/poll | select | poll ]. epoll is the high-performance network I/O model for Linux kernels 2.6 and later; if running on FreeBSD, use the kqueue model.
 use epoll;

 # Maximum number of connections per worker (max clients = worker_connections * worker_processes)
 worker_connections 65535;
}

# Set up the HTTP server
http {
 include mime.types; # map of file extensions to MIME types
 default_type application/octet-stream; # default MIME type
 #charset utf-8; # default character encoding
 server_names_hash_bucket_size 128; # hash table size for server names
 client_header_buffer_size 32k; # client request header buffer size
 large_client_header_buffers 4 64k; # buffers for oversized request headers
 client_max_body_size 8m; # maximum request body (upload) size
 sendfile on; # enable efficient file transfer mode; sendfile specifies whether nginx calls the sendfile() function to output files. Set to on for normal use; for disk-I/O-heavy download services, set to off to balance disk and network I/O and reduce system load. Note: if images do not display properly, change this to off.
 autoindex on; # Open directory list access, suitable download server, default closed.
 tcp_nopush on; # send response headers and the start of a file in one packet (TCP_CORK); reduces packet overhead
 tcp_nodelay on; # disable Nagle's algorithm to reduce small-packet latency
 keepalive_timeout 120; # keep-alive timeout, in seconds

 #FastCGI parameters intended to improve site performance: reduce resource usage and increase access speed.
 fastcgi_connect_timeout 300;
 fastcgi_send_timeout 300;
 fastcgi_read_timeout 300;
 fastcgi_buffer_size 64k;
 fastcgi_buffers 4 64k;
 fastcgi_busy_buffers_size 128k;
 fastcgi_temp_file_write_size 128k;

 #gzip module settings
 gzip on; # enable gzip compressed output
 gzip_min_length 1k; # minimum response size to compress
 gzip_buffers 4 16k; # compression buffers
 gzip_http_version 1.0; # protocol version to compress for (default 1.1; if squid 2.5 sits in front, use 1.0)
 gzip_comp_level 2; # Compression level
 gzip_types text/plain application/x-javascript text/css application/xml;
 
 # MIME types to compress; text/html is always included by default, so it need not be listed. Listing it does no harm but produces a warning.
 gzip_vary on;
 #limit_zone crawler $binary_remote_addr 10m; # needed to enable limiting the number of connections per IP

 # Load balancing configuration
 upstream qianyunlai.com {
  #upstream load balancing; weight is the weight and can be set according to the machine's specs. The higher the weight, the greater the probability of being assigned requests.
  server 192.168.80.121:80 weight=3;
  server 192.168.80.122:80 weight=2;
  server 192.168.80.123:80 weight=3;
 }

 # Virtual host configuration
 server {
  # Listen on port
  listen 80;
  
  # Domain names can be multiple, separated by Spaces
  server_name www.qianyunlai.com qianyunlai.com;
  index index.html index.htm index.php;
  root /data/www/qianyunlai;

  location ~ .*\.(php|php5)?$ {
   fastcgi_pass 127.0.0.1:9000;
   fastcgi_index index.php;
   include fastcgi.conf;
  }

  # Image cache time settings
  location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
   expires 10d;
  }

  #JS and CSS cache time settings
  location ~ .*\.(js|css)?$ {
   expires 1h;
  }

  # Log format settings
  log_format access '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" $http_x_forwarded_for';
  
  # Defines the access log for this virtual host
  access_log /var/log/nginx/qianyunlai.log access;

  # Enable reverse proxying for "/"
  location / {
   proxy_pass http://127.0.0.1:88;
   proxy_redirect off;
   proxy_set_header X-Real-IP $remote_addr;
   
    # The backend web server can obtain the user's real IP via X-Forwarded-For
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   
    # Some optional reverse proxy settings follow.
   proxy_set_header Host $host;
   client_max_body_size 10m; # maximum number of bytes per file allowed in a client request
   client_body_buffer_size 128k; # maximum number of bytes of the client request body buffered by the proxy
   proxy_connect_timeout 90; #nginx connection timeout with the backend server (proxy connect timeout)
   proxy_send_timeout 90; # backend server data return time (proxy send timeout)
   proxy_read_timeout 90; # backend server response time after a successful connection (proxy read timeout)
   proxy_buffer_size 4k; # proxy server (nginx) buffer size for the user's header information
   proxy_buffers 4 32k; #proxy_buffers: set with the average page size at or below 32k
   proxy_busy_buffers_size 64k; # buffer size under heavy load (proxy_buffers * 2)
   proxy_temp_file_write_size 64k; # responses larger than this are spooled to the temp folder while being transferred from the upstream server
  }

  # Location for viewing Nginx status
  location /NginxStatus {
   stub_status on;
   access_log on;
   auth_basic "NginxStatus";
   auth_basic_user_file conf/htpasswd;
   #The htpasswd file contents can be generated with the htpasswd tool provided by Apache.
  }

  # Local dynamic/static separation reverse proxy configuration
  # All jsp pages are handed to tomcat or resin for processing
  location ~ \.(jsp|jspx|do)?$ {
   proxy_set_header Host $host;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_pass http://127.0.0.1:8080;
  }

  # All static files are read directly by nginx, without going through tomcat or resin
  location ~ .*\.(htm|html|gif|jpg|jpeg|png|bmp|swf|ioc|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$ {
   expires 15d;
  }
  
  location ~ .*\.(js|css)?$ {
   expires 1h;
  }
 }
}

