Configuring gzip compression on the Nginx server

  • 2020-05-10 23:28:35
  • OfStack

gzip compression
Using gzip compression reduces a website's bandwidth consumption while improving access speed.
Pages are compressed on the Nginx server side, then decompressed and parsed on the browser side.
Most popular browsers these days support gzip compression, so compatibility is rarely a worry.
Nginx's gzip compression is off by default, and even when enabled, Nginx compresses only text/html by default.
The main configuration is as follows:


gzip on;                # enable gzip
gzip_http_version 1.0;  # the default is 1.1
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
gzip_types text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;  # MIME types to compress

gzip_buffers 16 8k;     # number and size of the buffers holding the compressed response; memory is requested in multiples of the block size

# Disable gzip for certain browsers.
gzip_disable "MSIE [1-6]\.(?!.*SV1)";  # IE6 does not support gzip, so it must be disabled there

Note: gzip_http_version defaults to 1.1, which means gzip compression is applied only to HTTP/1.1 requests.
If we use proxy_pass for a reverse proxy, Nginx and the backend upstream server communicate over HTTP/1.0, so a gzip-enabled Nginx sitting behind such a proxy would never compress those requests under the default; this is why the example above sets gzip_http_version 1.0.
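If a gzip-enabled Nginx sits behind such a front proxy, the version threshold can be lowered explicitly. A minimal sketch of the backend's configuration (the listen port and type list are placeholders, not from the original article):

```nginx
# Backend Nginx that receives HTTP/1.0 requests from a front proxy;
# lowering gzip_http_version lets those responses still be compressed.
server {
    listen 8080;                # placeholder port
    gzip on;
    gzip_http_version 1.0;      # accept HTTP/1.0 requests for compression
    gzip_types text/plain text/css application/json;
}
```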

gzip parameter descriptions:

gzip
Decides whether to enable the gzip module.
param: on | off
example: gzip on;

gzip_buffers
Set the number and size of the buffers used to hold the compressed response; memory is requested in multiples of the block size.
param1: int (number of buffers)
param2: int with a k suffix (size of each buffer)
example: gzip_buffers 4 8k;

gzip_comp_level
Set the gzip compression level: lower levels compress faster but yield a lower compression ratio; higher levels compress slower but yield a higher compression ratio.
param: 1-9
example: gzip_comp_level 1;
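The speed/ratio tradeoff can be observed with the gzip command-line tool, whose numeric levels mirror gzip_comp_level. A quick sketch on a repetitive sample payload (exact sizes will vary with the input):

```shell
# Compress the same 100 KB payload at the fastest (-1) and best (-9) levels
# and compare the resulting sizes.
payload=$(head -c 100000 /dev/zero | tr '\0' 'a')
fast=$(printf '%s' "$payload" | gzip -1 | wc -c)
best=$(printf '%s' "$payload" | gzip -9 | wc -c)
echo "level 1: $fast bytes, level 9: $best bytes"
```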

gzip_min_length
Responses larger than this value are compressed. The length is in bytes by default, and a k suffix can be used; when the value is 0, all pages are compressed.
param: int
example: gzip_min_length 1000;
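The threshold exists because gzip adds fixed header and trailer overhead, so very small responses can actually grow after compression. A quick shell sketch of both extremes:

```shell
# A tiny payload grows once gzip's fixed header/trailer bytes are added,
# while a larger compressible payload shrinks dramatically.
tiny=$(printf 'hi' | gzip | wc -c)
big=$(head -c 5000 /dev/zero | tr '\0' 'x' | gzip | wc -c)
echo "2-byte input -> $tiny bytes; 5000-byte input -> $big bytes"
```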

gzip_http_version
Identifies the HTTP protocol version. Early browsers did not support gzip compression and users would see garbled output, so this option was added for compatibility with those old versions; nowadays it can usually be left alone.
param: 1.0 | 1.1
example: gzip_http_version 1.0;

gzip_proxied
Enables or disables compression of responses to proxied requests when Nginx is used as a reverse proxy, based on the request and response headers.
param: off | expired | no-cache | no-store | private | no_last_modified | no_etag | auth | any
example: gzip_proxied no-cache;
off disables compression for all proxied responses
expired enables compression if the response contains an "Expires" header with a value that disables caching
no-cache enables compression if the response contains a "Cache-Control: no-cache" header
no-store enables compression if the response contains a "Cache-Control: no-store" header
private enables compression if the response contains a "Cache-Control: private" header
no_last_modified enables compression if the response does not contain a "Last-Modified" header
no_etag enables compression if the response does not contain an "ETag" header
auth enables compression if the request contains an "Authorization" header
any compresses all proxied responses unconditionally

gzip_types
Set the MIME types to compress; types not listed are not compressed.
param: text/html | application/x-javascript | text/css | application/xml
example: gzip_types text/html;

gzip_vary on;
Related to the HTTP response headers: adds a "Vary: Accept-Encoding" header so that proxies and caches can distinguish clients that support compression from those that do not.

Nginx and Gzip requests
Normally, Nginx's gzip module handles gzip for the Response only, but what if you need to handle gzip on the Request? Take a look:
Options

The first option is to use lua-zlib:


local zlib = require "zlib"

local encoding = ngx.req.get_headers()["Content-Encoding"]

if encoding == "gzip" then
  ngx.req.read_body()  -- make sure the request body has been read first
  local body = ngx.req.get_body_data()

  if body then
    local stream = zlib.inflate()
    ngx.req.set_body_data(stream(body))
  end
end

The second option is to wrap the zlib library with LuaJIT's FFI. The official tutorial has some examples available for reference, but they cover raw deflate, not gzip; wrapping gzip with FFI is a little more involved.


local ffi = require "ffi"
local zlib = require "zlib"

local function reader(s)
  local done
  return function()
    if done then return end
    done = true
    return s
  end
end

local function writer()
  local t = {}
  return function(data, sz)
    if not data then return table.concat(t) end
    t[#t + 1] = ffi.string(data, sz)
  end
end

local encoding = ngx.req.get_headers()["Content-Encoding"]

if encoding == "gzip" then
  ngx.req.read_body()  -- make sure the request body has been read first
  local body = ngx.req.get_body_data()

  if body then
    local write = writer()
    zlib.inflate(reader(body), write, nil, "gzip")
    ngx.req.set_body_data(write())
  end
end

The code above is adapted from zlib_test.lua. At first glance, the reader and writer in the code may seem confusing, but you can think of them as input/output interfaces, which could just as well be swapped for files, databases, and so on.

Don't get too excited: when you run it, you are likely to encounter the following error:


libzlib.so: cannot open shared object file.

This is actually caused by the following line in zlib.lua:


local C = ffi.load 'zlib'

At run time, ffi.load completes the file name automatically: on Windows it loads zlib.dll, and on Linux it loads libzlib.so. But on Linux the zlib shared library is actually named libz.so, not libzlib.so.

Knowing what the problem is, the fix is straightforward:


local C

if ffi.os == "Windows" then
  C = ffi.load "zlib"
else
  C = ffi.load "z"
end

Sometimes it is not advisable to modify third-party library code directly: every time the library is updated we would have to reapply the change, and forgetting it even once breaks things. In that case, consider creating a symlink alias instead.
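A sketch of the symlink approach, demonstrated here in a throwaway directory; on a real system you would point the link at the actual libz.so path for your distro, which varies:

```shell
# Create an alias named libzlib.so so that ffi.load "zlib" can resolve it.
tmp=$(mktemp -d)
touch "$tmp/libz.so"                    # stand-in for the real zlib library file
ln -s "$tmp/libz.so" "$tmp/libzlib.so"  # the alias ffi.load will find
readlink "$tmp/libzlib.so"
```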

Testing

As mentioned at the beginning, the interfaces are all written in PHP, but the gzipped request data is handled by Lua. How does PHP get at the data Lua has processed? Crossing languages looks like a problem, but Nginx processes requests in phases: PHP, behind FastCGI, runs in the content phase, while our Lua code runs in the access phase, so the two coexist nicely:


location ~ \.php$ {
  access_by_lua_file /path/to/lua/file;

  include fastcgi.conf;
  fastcgi_pass 127.0.0.1:9000;
}

So how do lua-zlib and the lua-files FFI binding perform? Here is the test script I wrote in PHP:


<?php

$url = 'http://url';

$header = implode("\r\n", array(
  'Content-Type: application/x-www-form-urlencoded',
  'Content-Encoding: gzip',
  'Connection: close',
));

$content = gzencode(http_build_query(array(
  'foo' => str_repeat('x', 100),
  'bar' => str_repeat('y', 100),
)));

$options = array(
  'http' => array(
    'protocol_version' => '1.1',
    'method' => 'POST',
    'header' => $header,
    'content' => $content,
  ),
);

$context = stream_context_create($options);

for ($i = 0; $i < 1000; $i++) {
  file_get_contents($url, false, $context);
}

?>
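The gzencode() call above is what produces the body the Lua inflate step later unpacks. The same kind of body can be produced and sanity-checked with the gzip CLI (the field values here are shortened stand-ins for the script's repeated strings):

```shell
# Build a gzip-compressed form body and confirm it round-trips through gunzip.
body='foo=xxx&bar=yyy'                  # stand-in for http_build_query() output
printf '%s' "$body" | gzip > body.gz
gunzip -c body.gz                       # recovers the original form body
```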

When writing test scripts, many people like to record a timestamp at the beginning and the end and subtract the two to get the code's running time. This is unnecessary: the running time can be obtained directly with Linux's built-in time command:


shell> time php /path/to/php/file

