Resolving 414 and 504 errors in the Nginx server configuration


414 Request-URI Too Large


# Size of the buffer for the client request header. If the total length of the
# request header is no more than 128k, this buffer is used; if it exceeds 128k,
# the buffers configured with large_client_header_buffers are used instead.
client_header_buffer_size 128k;

# large_client_header_buffers takes two parameters: 4 is the number of buffers
# and 128k is the size of each one (the default size is 8k), i.e. up to four
# 128k buffers can be allocated.
large_client_header_buffers 4 128k;

When the HTTP URI is too long or the request header is too large, nginx reports a "414 Request-URI Too Large" or "400 Bad Request" error.

Possible causes:

Scenario 1. The value written into the cookie is too large. The size of the other header fields is relatively fixed, so only the cookie tends to carry large amounts of data.

Scenario 2. The request parameters are too long, for example publishing the body of an article: after URL-encoding it is sent to the backend with a GET request.


GET http://www.264.cn/ HTTP/1.1
Host: www.264.cn
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 
Accept-Encoding: gzip,deflate,sdch
Accept-Language: zh-CN,zh;q=0.8
Accept-Charset: GBK,utf-8;q=0.7,*;q=0.3
Cookie: bdshare_firstime=1363517175366; 
If-Modified-Since: Mon, 13 May 2013 13:40:02 GMT

When the request header is too large and exceeds large_client_header_buffers, nginx returns a "Request URI too large" (414) or "Bad Request" (400) error.

As in the example above, the HTTP request header consists of multiple lines; among them, "GET http://www.264.cn/ HTTP/1.1" is the request line.

When the request line is longer than a single buffer of large_client_header_buffers (128k here), nginx returns the "Request URI too large" (414) error, which corresponds to scenario 2 above.

Every other line of the request must also fit in a buffer: when the longest line that is not the request line exceeds a single buffer (128k here), the "Bad Request" (400) error is returned, which corresponds to scenario 1 above.

Solution: you can increase the above two values.


client_header_buffer_size 512k;
large_client_header_buffers 4 512k;
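
Both directives are valid in the http and server contexts. As a minimal sketch of where they might go (the surrounding server block is illustrative, not taken from the article's configuration):

http {
    # Larger header buffers so long request lines and big cookies no longer trigger 414/400
    client_header_buffer_size 512k;
    large_client_header_buffers 4 512k;

    server {
        listen 80;
        server_name www.example.com;   # placeholder server name
        # ... rest of the site configuration ...
    }
}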

504 Gateway Time-out

Previously, a website used nginx as a proxy in front of a backend Apache that ran PHP.

Apache would stop responding at irregular, unpredictable times, and nginx would then return "504 Gateway Time-out".

The error log showed nothing useful, so I assumed it was an Apache bug (it was not, as we will see below).

Not wanting to tinker with it, I kept things as they were: set up a monitoring tool and restarted Apache every time an alert came in.

One day I finally got fed up and decided to deal with PHP itself and stop using Apache altogether: I installed php-fpm from source and moved PHP over to run under php-fpm.

Installing PHP from source is not much trouble; the only extra step is configuring the php-fpm worker processes to output the PHP error log.
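
A minimal sketch of the relevant pool settings (catch_workers_output, php_admin_flag and php_admin_value are standard php-fpm pool directives; the log path is an assumption):

; php-fpm pool configuration (e.g. www.conf) - illustrative only
catch_workers_output = yes                    ; forward worker stdout/stderr to the error log
php_admin_flag[log_errors] = on               ; make sure PHP errors are logged
php_admin_value[error_log] = /var/log/php-fpm/www-error.log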


Once everything was ready for the switch, the original proxy_pass configuration:


upstream apachephp {
  server www.quancha.cn:8080; #Apache1
}

....
proxy_pass http://apachephp;

was replaced with:


upstream php {
    server 127.0.0.1:9000;
}

...
fastcgi_pass php;
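
On its own, fastcgi_pass is not quite enough: the location that matches PHP requests also needs to pass the script path and the standard FastCGI parameters. A minimal sketch of such a location block (the paths and regex are assumptions, not the article's actual configuration):

server {
    listen 80;
    server_name www.quancha.cn;      # host name as seen in the log excerpt below
    root /var/www/html;              # assumed document root

    location ~ \.php$ {
        fastcgi_pass  php;           # the upstream defined above
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include       fastcgi_params;
    }
}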

That is all it takes to move PHP from Apache to php-fpm.

I thought I could finally rest easy. That would have been true if the root cause of the problem had been analyzed, but it had not.

So the problem came back: the next day, nginx reported a 504 gateway timeout again.

This time Apache clearly had nothing to do with it; Apache was finally off the hook.

That left nginx and php-fpm. Checking the nginx error log showed:


[error] 6695#0: *168438 upstream timed out (110: Connection timed out) while reading response header from upstream,
...
request: "GET /kd/open.php?company=chinapost&number=PA24977020344 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.quancha.cn"

The request "GET /kd/open.php?company=chinapost&number=PA24977020344 HTTP/1.1" was timing out.

Restarting php-fpm made the problem disappear immediately and the website was accessible again.

But when I visited that page again there was still no response, while other pages opened normally at the same time. After the page had been refreshed a few times, the whole website returned gateway timeouts.

The problem narrowed down to this php script.


netstat -napo |grep "php5-fpm" | wc -l

The count showed that the number of php-fpm worker processes had reached the upper limit of 10 set in the configuration file, and it looked as if every worker was stuck in the open.php script.
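
That limit comes from the process-manager settings in the php-fpm pool configuration; a sketch of what the relevant lines presumably looked like (pm.max_children is the standard directive, the value of 10 is taken from the description above):

; php-fpm pool configuration - process manager limits (illustrative)
pm = static           ; static or dynamic process management
pm.max_children = 10  ; at most 10 workers; once all are stuck in open.php, new requests queue up and time out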

What does this script do? It fetches courier tracking information, using php_curl.

A PHP script is forcibly terminated if its execution time exceeds the max_execution_time setting in php.ini without producing a result.

Checking php.ini, max_execution_time was set to 30 (seconds).

Google to the rescue. A search turned up the following explanation:

The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script, such as system calls using system(), stream operations, database operations, etc., is not counted toward the maximum time that the script has been running.

So if no timeout is set on the call itself, PHP will wait indefinitely for its result.

A look at the open.php source confirmed it: no timeout was set on the curl call.

Add the following two lines, refresh, and the problem is solved.


curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); //timeout on connect
curl_setopt($ch, CURLOPT_TIMEOUT, 10); //timeout on response
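
For context, a minimal sketch of what the curl call in open.php might look like with both timeouts applied (the API URL, variable handling and error handling are assumptions, not the script's actual code):

<?php
// Hypothetical reconstruction of the tracking lookup in open.php.
$company = isset($_GET['company']) ? $_GET['company'] : 'chinapost';
$number  = isset($_GET['number'])  ? $_GET['number']  : 'PA24977020344';

$url = 'http://api.example.com/track'
     . '?company=' . urlencode($company)
     . '&number='  . urlencode($number);

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);   // give up if connecting takes more than 10 seconds
curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // give up if the whole transfer takes more than 10 seconds

$response = curl_exec($ch);
if ($response === false) {
    // On timeout or any other curl failure, log and return quickly
    // instead of tying up a php-fpm worker indefinitely.
    error_log('curl error: ' . curl_error($ch));
}
curl_close($ch);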

Besides this fix, php-fpm also provides a parameter for forcibly killing requests that run for a long time without producing a result, although it is not enabled by default.

In the php-fpm configuration file you can set request_terminate_timeout, the request-termination timeout: when a request runs longer than this, the worker handling it is killed.

There is also a request_slowlog_timeout parameter for logging slow requests.
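
A minimal sketch of these two settings in the php-fpm pool configuration (the values and the slowlog path are assumptions; request_slowlog_timeout only takes effect when slowlog is also set):

; php-fpm pool configuration - request timeouts (illustrative values)
request_terminate_timeout = 60s            ; kill the worker if a single request runs longer than 60s
request_slowlog_timeout   = 10s            ; dump a backtrace for requests slower than 10s
slowlog = /var/log/php-fpm/www-slow.log    ; target file for the slow-request log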

When running PHP from the command line, you can use code like the following to enforce a real (wall-clock) time limit:


$real_execution_time_limit = 60; // wall-clock time limit in seconds

// Fork a watchdog process: the parent does the real work, the child waits
// and then kills the parent if it is still running (requires the pcntl and
// posix extensions).
$pid = pcntl_fork();
if ($pid > 0)
{
    // Parent: some long-running code which should be terminated after
    // $real_execution_time_limit seconds if it is not finished by that time.
}
elseif ($pid === 0)
{
    // Child: sleep for the limit, then send SIGKILL to the parent.
    sleep($real_execution_time_limit);
    posix_kill(posix_getppid(), SIGKILL);
    exit(0);
}
else
{
    exit(1); // fork failed
}

