Analysis and solution of the session-stickiness problem when Nginx proxies Tomcat

  • 2020-05-14 05:35:47
  • OfStack

In an environment with multiple back-end servers, each client must keep communicating with the same server for its session to work. The common way to achieve this in nginx is the built-in ip_hash, but that is rarely a good choice: if a CDN sits in front of nginx, or many LAN clients access the server through one shared public IP, the load across back ends becomes unbalanced, and there is still no guarantee that every request sticks to the same server. A cookie-based approach is better: each client carries its own cookie, so its requests stay pinned to one server while the load stays balanced across clients.

Problem analysis:

1. When the first request arrives, it carries no session information, so jvm_route dispatches it to one Tomcat by round robin.

2. That Tomcat creates the session and returns the session information to the client.

3. On subsequent requests, jvm_route reads the back-end server's name from the session information and forwards the request to that server.

For the time being, the jvm_route module does not support nginx's fair mode; the two working modes conflict. For a given user, when the Tomcat that has been serving him goes down, nginx will by default retry max_fails times; if that still fails, it falls back to round robin, and the user's session is lost.

In general, jvm_route achieves session stickiness via the session cookie, pinning a particular session to a particular Tomcat. This solves the out-of-sync session problem, but not session migration after a crash.
Without jvm_route, when the user sends another request, nginx forwards it to a random back-end Tomcat because it uses no session information. Plain page access works fine, but a request that relies on login state keeps landing on servers that have never seen the login, so from the application's point of view the user is never logged in.

This module achieves session stickiness via the session cookie. If neither the cookie nor the URL carries a session, it degrades to simple round-robin load balancing.

Searching the web turns up several common ways to solve this class of problem:

1) ip_hash (not recommended)

nginx's ip_hash can direct requests from one IP to the same back end, so that a client behind that IP and one back end establish a stable session. ip_hash is defined in the upstream block:


 upstream backend {
     ip_hash;
     server 192.168.12.10:8080;
     server 192.168.12.11:9090;
 }
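Conceptually, ip_hash keys on the first three octets of the client's IPv4 address, so every client in the same /24 network lands on the same back end. The sketch below illustrates that idea only; the hash function is a stand-in, not nginx's actual algorithm.

```python
# Illustrative sketch of ip_hash's behavior (not nginx's real hash).
# nginx keys on the first three octets of the IPv4 address, so all
# clients in one /24 map to the same backend.

def pick_backend(client_ip, backends):
    """Map a client IP to a backend the way ip_hash does conceptually."""
    first_three = tuple(client_ip.split(".")[:3])  # e.g. ("192", "168", "12")
    key = hash(first_three)                        # stand-in for nginx's hash
    return backends[key % len(backends)]

backends = ["192.168.12.10:8080", "192.168.12.11:9090"]

# Two clients in the same /24 always hit the same backend:
a = pick_backend("10.0.5.7", backends)
b = pick_backend("10.0.5.200", backends)
assert a == b
```

This also makes the drawbacks visible: every client behind one NAT or proxy hashes to the same back end, so one busy /24 can overload a single server.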

It is not recommended for the following reasons:

1/ nginx is not the front-most server.

ip_hash requires nginx to be the front-most server; otherwise nginx cannot obtain the real client IP and cannot hash by it. For example, if squid sits in front, nginx only ever sees squid's IP, and splitting traffic by that address is clearly wrong.

2/ The nginx back end has further load balancing of its own.

If the nginx back end does its own load balancing and diverts requests by some other scheme, then requests from one client cannot be pinned to the same session application server.

3/ Multiple extranet exits.

Many companies have multiple Internet exits and therefore multiple public IPs, and a user's outbound IP can switch automatically while surfing; such users are numerous. For them ip_hash does not work, and you cannot bind a user to a fixed Tomcat.

2) nginx_upstream_jvm_route (nginx extension, recommended) -- I tried it against nginx 1.8 and found the new version is no longer supported! Version 1.4.2 is said to work, though.

nginx_upstream_jvm_route is an nginx extension module that implements cookie-based session stickiness (Session Sticky).

In simple terms, it decides which back-end server to send a request to based on the JSESSIONID in the cookie. On a user's first request, nginx_upstream_jvm_route binds the identity of the responding server into the JSESSIONID cookie, so that on every subsequent request nginx can pick the back-end server from the JSESSIONID alone.
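The routing decision can be sketched as follows. This is a conceptual model only, not the module's source: Tomcat appends ".jvmRoute" to the session ID, and the suffix is matched against the srun_id names from the upstream configuration (tomcat01/tomcat02 here mirror the example config later in the article); with no usable suffix, the module falls back to round robin.

```python
import itertools

# Conceptual sketch of nginx_upstream_jvm_route's decision.
# srun_id -> backend mapping mirrors the article's example upstream block.
BACKENDS = {
    "tomcat01": "192.168.33.10:8090",
    "tomcat02": "192.168.33.11:8090",
}
_round_robin = itertools.cycle(BACKENDS.values())

def route(jsessionid):
    """Sticky routing: use the srun_id suffix after the last dot in
    JSESSIONID if it names a known backend, else fall back to round robin."""
    if jsessionid and "." in jsessionid:
        srun_id = jsessionid.rsplit(".", 1)[1]
        if srun_id in BACKENDS:
            return BACKENDS[srun_id]
    return next(_round_robin)

print(route("ABCD123456OIUH897SDFSDF.tomcat01"))  # -> 192.168.33.10:8090
print(route(None))                                 # no cookie: round robin
```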

1/ nginx_upstream_jvm_route installation

Download address (svn): http://nginx-upstream-jvm-route.googlecode.com/svn/trunk/

Assume nginx_upstream_jvm_route is downloaded to /usr/local/nginx_upstream_jvm_route.

(1) Enter the nginx source directory and apply the patch:

 patch -p0 < /usr/local/nginx_upstream_jvm_route/jvm_route.patch

(2) Configure:

 ./configure --with-http_stub_status_module --with-http_ssl_module --prefix=/usr/local/nginx --with-pcre=/usr/local/pcre-8.33 --add-module=/usr/local/nginx_upstream_jvm_route

(3) Build and install:

 make && make install

2/ nginx configuration


 upstream tomcats_jvm_route {
     # ip_hash;
     server 192.168.33.10:8090 srun_id=tomcat01;
     server 192.168.33.11:8090 srun_id=tomcat02;
     jvm_route $cookie_JSESSIONID|sessionid reverse;
 }
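For completeness, the upstream must be referenced from a server block; a minimal sketch might look like the following (the listen port and proxy headers are assumptions, not from the original article):

```nginx
server {
    listen 80;

    location / {
        # Forward everything to the sticky upstream defined above.
        proxy_pass http://tomcats_jvm_route;
        # Preserve the original host and client address for the backends.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```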

3/ tomcat configuration

Edit server.xml on the 192.168.33.10:8090 Tomcat, changing

 <Engine name="Catalina" defaultHost="localhost" > 

to:

 <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat01"> 

Similarly, in server.xml on 192.168.33.11:8090, add jvmRoute="tomcat02".

4/ test

Start Tomcat and nginx, access the site through the nginx proxy, open the developer tools in Chrome (F12), and inspect the JSESSIONID cookie.
It should look like ABCD123456OIUH897SDFSDF.tomcat01, and the suffix will not change on refresh.

3) The cookie-based nginx sticky module

Conclusion

The above is this site's analysis of, and solution to, the session-stickiness problem when Nginx proxies Tomcat. I hope it helps; if you have any questions, feel free to leave me a message and this site will reply promptly!

