Use Nginx to deploy static pages

  • 2020-05-14 06:08:52
  • OfStack

Nginx introduction

Nginx is a lightweight HTTP server written in Russia. Pronounced "engine X", it is a high-performance HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. Nginx was developed by the Russian engineer Igor Sysoev for Rambler.ru, the second most visited site in Russia, where it ran in production for more than two and a half years. Igor Sysoev releases the project under a BSD-style license.

English homepage: http://nginx.net.

As an HTTP server, Nginx has the following basic features:

  • Serves static files and index files, with automatic indexing, and keeps open file descriptors in a cache.
  • Accelerated reverse proxying without caching, with simple load balancing and fault tolerance.
  • FastCGI support, with simple load balancing and fault tolerance.
  • Modular structure, including gzip compression, byte ranges, chunked responses, and an SSI filter. If multiple SSI includes on a single page are handled by FastCGI or another proxy server, they can be processed in parallel without waiting for one another.
  • SSL and TLS SNI support.

The advantages of Nginx: lightweight, high performance, and strong concurrency. It is also quite handy for deploying static pages.

This high performance stems from Nginx's architecture. After Nginx starts, there is one master process and several worker processes. The master process mainly manages the worker processes: it receives signals from the outside world, forwards signals to each worker process, monitors their running state, and automatically starts a new worker process when one exits abnormally. The basic network events are handled in the worker processes. The worker processes are peers: they compete equally for requests from clients and are independent of one another. A request is processed entirely within a single worker process; a worker cannot handle another worker's requests. The number of worker processes is configurable and is usually set to the number of CPU cores on the machine, which is tied to Nginx's process model and event-handling model.
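As a sketch of how this maps to configuration: the directives below are from stock Nginx, but the values are illustrative, not taken from this article's server.

```nginx
# the single master process is implicit; only the workers are configured
worker_processes auto;   # "auto" starts one worker per CPU core

events {
    # upper bound on simultaneous connections per worker process
    worker_connections 768;
}
```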

Why Nginx

Speaking of Nginx, the first things that come to mind are probably reverse proxying and load balancing. So what is a reverse proxy, and what is load balancing?

Reverse proxy

Let's first look at what a forward proxy is. A proxy, also known as a network proxy, is a special network service. Generally speaking, it acts as a middleman between the client and the target server: it receives the client's request, issues a corresponding request to the target server, obtains the requested resources from the target server, and returns them to the client. The proxy server can also cache the target server's resources locally. If a resource the client wants is already in the proxy server's cache, the proxy does not make another request to the target server but returns the cached copy directly.

In fact, proxy servers are very common. For example, some of the proxies used to get around the GFW exist for exactly this reason: they use overseas servers as proxies that resolve domain names correctly, thereby bypassing censorship. Proxy servers can also hide the real IP; the famous Tor (The Onion Router), for instance, uses multiple layers of proxies and encryption to communicate anonymously.

A reverse proxy acts as a proxy for the server side, not for the client. In other words, a forward proxy acts on behalf of internal network users requesting connections to servers on the Internet, whereas a reverse proxy accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results to the requesting client on the Internet. To the outside, the proxy server appears to be an ordinary server.
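In Nginx this server-side proxying is expressed with the proxy_pass directive. A minimal sketch, where the backend address 127.0.0.1:8080 and the domain are assumptions for illustration:

```nginx
server {
    listen 80;
    server_name www.example.com;

    location / {
        # clients talk only to this Nginx server; the request is
        # forwarded to the internal backend, which stays hidden
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```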

Load balancing

Reverse proxy load balancing dynamically forwards connection requests from the Internet to multiple servers on the internal network, by way of a reverse proxy, thereby balancing the load among them.
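In Nginx this is done with an upstream block. A minimal sketch with made-up internal addresses; requests are distributed round-robin by default:

```nginx
# pool of internal servers behind the proxy
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080 weight=2;  # receives twice the share of requests
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;  # requests are spread across the pool
    }
}
```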

As it happens, Nginx does all of this

As a capable proxy server, Nginx naturally provides both reverse proxying and load balancing. For more on these features and how to use them, see the resources at the end of this article: the Nginx guide to getting started.

Nginx installation

I use a Tencent Cloud server running Ubuntu Server 14.04.1 LTS 32-bit.


$ sudo apt-get install nginx

The Mac OS system refers to this article: Installing Nginx in Mac OS X

Nginx configuration

Simply edit the Nginx configuration file so that these settings take effect when Nginx starts. This is the focus of this article.

The configuration system of Nginx consists of a main configuration file and some auxiliary configuration files, all of which are plain text files; usually we only need to edit the main configuration file. On my server, for example, it is at /etc/nginx/nginx.conf.

Directive contexts

The configuration information in nginx.conf is grouped by its logical meaning, i.e. divided into multiple scopes, or configuration directive contexts. Different scopes contain one or more configuration items.

Each configuration item consists of a configuration directive and its parameters, forming a key-value pair. Anything after a # is a comment, which is easy to understand.

The general structure of the configuration file, with typical settings, is as follows:


user www-data;  # group and owner the nginx worker processes run as
worker_processes 1;  # number of worker processes; usually one per CPU core
pid /run/nginx.pid;  # path of the pid file

events {
    worker_connections 768;  # each worker can handle up to 768 connections at once
    # multi_accept on;
}

# configuration parameters for the http service; the defaults are generally fine,
# and the main thing to configure is the server context inside the http context
http {
    ##
    # Basic Settings
    ##

    # ... default configuration omitted here

    ##
    # Logging Settings
    ##
    # ... default configuration omitted here

    ##
    # Gzip Settings
    ##

    # ... default configuration omitted here

    ##
    # nginx-naxsi config
    ##

    # ... default configuration omitted here

    ##
    # nginx-passenger config
    ##

    # ... default configuration omitted here

    ##
    # Virtual Host Configs
    ##

    # ... default configuration omitted here

    # add the server context here to configure a domain name;
    # each server block corresponds to one domain
    server {
        listen 80;          # listen on port 80 of every ip
        server_name _;      # domain name, e.g. www.example.com; "_" matches all
        root /home/filename/;  # site root directory

        location / {        # there can be multiple location blocks to configure routes
            try_files index.html =404;
        }
    }
}

# mail configuration; the mail context is commented out because it is not used
#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#    
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#   
#    server {
#        listen   localhost:110;
#        protocol  pop3;
#        proxy    on;
#    }
#
#    server {
#        listen   localhost:143;
#        protocol  imap;
#        proxy    on;
#    }
#}

The important thing to note here is the server context in the http context.


server {
    listen 80;          # listen on port 80 of every ip
    server_name _;      # domain name, e.g. www.example.com; "_" matches all
    root /home/filename/;  # site root directory

    location / {        # there can be multiple location blocks to configure routes
      try_files index.html =404;
    }
}

The root directive here is best written outside the location block, so that css and js files can still be loaded. Requests for css and js are separate requests: if root is defined only inside one location, Nginx cannot resolve them without extra configuration to return those resources. Keeping root at the server level is the most convenient setup for static page deployment.

For example, suppose the server has a /home/zhihu/ directory containing index.html plus css/ and img/ subdirectories. Then root /home/zhihu/; tells the server to look under /home/zhihu/ when loading resources.

Secondly, location supports several kinds of matching, each with a different priority. Here is an example of an exact match:


server {
    listen 80;        
    server_name _;      
    root /home/zhihu/;  

    location = /zhihu {
      rewrite ^/.* / break;
      try_files index.html =404;
    }
}

Now, visiting www.example.com/zhihu will load the zhihu page. Because the location is an exact match, only the route www.example.com/zhihu responds correctly; the rewrite rule uses a regular expression to replace /zhihu with /, so that try_files can resolve the page under the root. For more information on the location directive, see the resources at the end of the article.
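As a quick reference, the main location modifiers are sketched below with made-up paths; roughly, exact matches win first, then ^~ prefixes, then regular expressions in order of appearance, then plain prefixes:

```nginx
server {
    listen 80;

    location = /zhihu { }        # exact match, checked first
    location ^~ /static/ { }     # prefix match that suppresses regex checks
    location ~ \.(png|jpg)$ { }  # case-sensitive regular expression
    location ~* \.css$ { }       # case-insensitive regular expression
    location / { }               # plain prefix, used when nothing else matches
}
```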

The simplest and most convenient way to configure static pages with nginx

Having said a lot about configuration, here is the method I find most convenient. (Thanks to guyskk for the advice.)

First, create a directory, for example /home/ubuntu/website. Inside the website folder, place the static page files you want to deploy. For example, under website I have three folders: google, zhihu and fenghuang.
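The assumed layout, then, looks like this (each subfolder holds that page's own index.html):

```
/home/ubuntu/website/
├── index.html        <- home page
├── google/
│   └── index.html
├── zhihu/
│   └── index.html
└── fenghuang/
    └── index.html
```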


server {
    listen 80;
    server_name _;
    root /home/ubuntu/website;
    index index.html;
}

The static page file in each folder here is named index.html. I used to have a bad habit of naming the page after the folder, for example naming the zhihu page zhihu.html, but from a front-end point of view this does not follow convention.

With this configuration, when you visit www.showzeng.cn/google/, nginx goes to the google folder under the website directory, finds index.html, and returns the google page. By the same token, visiting www.showzeng.cn/zhihu/ finds index.html in the zhihu folder and returns the zhihu page.

If you also add your home page, index.html, directly under website, at the same level as the zhihu, google and fenghuang folders, it will be returned when you visit www.example.com.

The only fly in the ointment is that when you visit www.showzeng.cn/zhihu, a trailing / is automatically appended. Debugging with F12 in the browser, you will find that www.showzeng.cn/zhihu returns a 301 status code: because index.html lives in the zhihu/ folder, the lookup redirects to www.showzeng.cn/zhihu/. At first I could not accept this, since the trailing / looked ugly, but once I remembered that the alternative is matching every page with its own location block, I came around. I don't know about you, but I accept it.

Nginx is up and running


$ sudo nginx -s reload

The reload method does not restart the service; it reloads the configuration file directly, so clients never notice any interruption and the switchover is smooth. You can also restart the nginx service outright:


$ sudo service nginx restart
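Whichever way you apply changes, it is safer to validate the configuration file first; nginx's built-in syntax check does this without touching the running service:

```shell
$ sudo nginx -t    # checks the configuration syntax and exits; no restart needed
```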

Nginx is down


$ sudo nginx -s stop

The resources

The getting started guide to Nginx

Nginx for Developers: An Introduction

nginx configuration location summary and rewrite rule writing

