Node.js's Core Process for Handling TCP Connections

  • 2021-11-01 01:55:17
  • OfStack

A few days ago I was discussing epoll and request handling in Node.js with a friend, so today I'll briefly walk through the logic Node.js uses to process TCP connections. Let's start with the listen function.


int uv_tcp_listen(uv_tcp_t* tcp, int backlog, uv_connection_cb cb) {
  static int single_accept = -1;
  unsigned long flags = 0;
  int err;

  // Decide the accept policy; see the analysis of uv__server_io below
  if (single_accept == -1) {
    const char* val = getenv("UV_TCP_SINGLE_ACCEPT");
    single_accept = (val != NULL && atoi(val) != 0); /* Off by default. */
  }
  if (single_accept)
    tcp->flags |= UV_HANDLE_TCP_SINGLE_ACCEPT;
  // Execute the bind if needed, or just set a flag
  err = maybe_new_socket(tcp, AF_INET, flags);
  if (err)
    return err;
  // Start listening
  if (listen(tcp->io_watcher.fd, backlog))
    return UV__ERR(errno);
  // Save the connection callback
  tcp->connection_cb = cb;
  tcp->flags |= UV_HANDLE_BOUND;
  // Set the io watcher's callback; it runs when epoll reports a connection
  tcp->io_watcher.cb = uv__server_io;
  // Queue the watcher; it is not added to epoll yet. The poll io phase
  // traverses the watcher queue and registers each one (epoll_ctl)
  uv__io_start(tcp->loop, &tcp->io_watcher, POLLIN);

  return 0;
}

We can see that when we call createServer, the Libuv layer follows the logic of traditional network programming, and at this point our service is up. During the poll io phase, the listening file descriptor and its context (events of interest, callback, and so on) are registered with epoll, and the process then normally blocks in epoll. So what happens when a TCP connection arrives? epoll iterates over the fds whose events fired and executes the callback stored in each fd's context.
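
To make that concrete, here is a minimal sketch of what the poll io phase conceptually does, under the simplifying assumption that a watcher is just an fd plus a callback (the struct and function names are hypothetical; libuv's real logic lives in uv__io_t and uv__io_poll and is more involved):

#include <sys/epoll.h>

// A watcher pairs an fd with the callback to run when its event fires.
// (Hypothetical struct for this sketch; libuv's equivalent is uv__io_t.)
typedef struct watcher {
  int fd;
  void (*cb)(struct watcher* w, unsigned int events);  /* e.g. uv__server_io */
} watcher_t;

// Register interest in readable events (a pending connection makes the
// listening fd readable); the watcher pointer is the "context".
static void watcher_register(int epoll_fd, watcher_t* w) {
  struct epoll_event ev;
  ev.events = EPOLLIN;
  ev.data.ptr = w;
  epoll_ctl(epoll_fd, EPOLL_CTL_ADD, w->fd, &ev);
}

// One round of the poll io phase: block in epoll_wait, then dispatch each
// fired event to the callback saved in its watcher.
static void poll_io_once(int epoll_fd) {
  struct epoll_event events[64];
  int n = epoll_wait(epoll_fd, events, 64, -1);
  for (int i = 0; i < n; i++) {
    watcher_t* w = (watcher_t*)events[i].data.ptr;
    w->cb(w, events[i].events);
  }
}

When epoll reports an event on the listening fd, the callback stored in its watcher runs; for our server that is uv__server_io, so let's look at it next.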


void uv__server_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) {
  uv_stream_t* stream;
  int err;

  // Recover the stream from its embedded io watcher
  stream = container_of(w, uv_stream_t, io_watcher);
  // Loop while the server fd is valid; uv__stream_fd(stream) is the server's fd
  while (uv__stream_fd(stream) != -1) {
    // Use accept to get the fd for communicating with the client; note that
    // this fd is not the same as the server's fd
    err = uv__accept(uv__stream_fd(stream));
    if (err < 0) {
      // The server fd is non-blocking; this error means there is no
      // connection left to accept, so return directly
      if (err == UV_EAGAIN || err == UV__ERR(EWOULDBLOCK))
        return;
      // Other errors are reported to the callback (details elided here)
      stream->connection_cb(stream, err);
      continue;
    }
    // Record the accepted fd
    stream->accepted_fd = err;
    // Execute the connection callback
    stream->connection_cb(stream, 0);
    /*
      If stream->accepted_fd is -1, the callback connection_cb has already
      consumed accepted_fd. Otherwise, deregister the read event of the
      server's fd in epoll and wait until the fd is consumed before
      registering again, i.e. stop processing new requests for now.
    */
    if (stream->accepted_fd != -1) {
      uv__io_stop(loop, &stream->io_watcher, POLLIN);
      return;
    }
    /*
      OK, accepted_fd has been consumed; should we keep accepting new fds?
      If UV_HANDLE_TCP_SINGLE_ACCEPT is set, only one connection is handled
      per round, and we then sleep briefly to give other processes a chance
      to accept (in a multi-process architecture). In a single-process
      architecture, setting this flag only delays connection handling.
    */
    if (stream->type == UV_TCP &&
        (stream->flags & UV_HANDLE_TCP_SINGLE_ACCEPT)) {
      struct timespec timeout = { 0, 1 };
      nanosleep(&timeout, NULL);
    }
  }
}
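
Stripped of Libuv's bookkeeping, the essential pattern in that while loop is the classic non-blocking accept drain. Here is a standalone sketch of just that pattern (my simplification, not libuv code):

#include <sys/socket.h>
#include <errno.h>
#include <stddef.h>

// Keep accepting on a non-blocking listening fd until the kernel's queue of
// completed connections is empty, signalled by EAGAIN/EWOULDBLOCK.
static void drain_connections(int server_fd) {
  for (;;) {
    int client_fd = accept(server_fd, NULL, NULL);
    if (client_fd < 0) {
      if (errno == EAGAIN || errno == EWOULDBLOCK)
        return;  /* queue drained; go back to waiting in epoll */
      return;    /* other errors: just bail out in this simplified sketch */
    }
    /* hand client_fd to a per-connection handler here */
  }
}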

From uv__server_io we know that Libuv keeps accepting new fds in a loop and executes the callback each time; the callback normally consumes the fd, and the loop continues until there are no more connections to handle. Next, let's focus on how the callback consumes the fd, and on whether a long run of this loop could take too much time and block Libuv's event loop for a while. For TCP, the callback is OnConnection in the C++ layer.


// Callback triggered when a connection arrives
template <typename WrapType, typename UVType>
void ConnectionWrap<WrapType, UVType>::OnConnection(uv_stream_t* handle,
                                                    int status) {
  // Get the C++ layer object corresponding to the Libuv structure
  WrapType* wrap_data = static_cast<WrapType*>(handle->data);
  CHECK_EQ(&wrap_data->handle_, reinterpret_cast<UVType*>(handle));

  Environment* env = wrap_data->env();
  HandleScope handle_scope(env->isolate());
  Context::Scope context_scope(env->context());

  // Object for communicating with the client
  Local<Value> client_handle;

  if (status == 0) {
    // Instantiate the client javascript object and handle.
    // Create a new object for the js layer to use
    Local<Object> client_obj;
    if (!WrapType::Instantiate(env, wrap_data, WrapType::SOCKET)
             .ToLocal(&client_obj))
      return;

    // Unwrap the client javascript object.
    WrapType* wrap;
    // Store into wrap the C++ layer object corresponding to the js layer
    // object client_obj
    ASSIGN_OR_RETURN_UNWRAP(&wrap, client_obj);
    // Get the corresponding handle
    uv_stream_t* client = reinterpret_cast<uv_stream_t*>(&wrap->handle_);
    // Take the fd accepted on handle and save it into client; client can
    // then communicate with the client side
    if (uv_accept(handle, client))
      return;
    client_handle = client_obj;
  } else {
    client_handle = Undefined(env->isolate());
  }
  // Call back into js; client_handle is what the js layer gets from new TCP()
  Local<Value> argv[] = { Integer::New(env->isolate(), status), client_handle };
  wrap_data->MakeCallback(env->onconnection_string(), arraysize(argv), argv);
}

The code looks complicated, but we only need to pay attention to uv_accept. uv_accept takes two parameters: the first is the handle corresponding to the server, and the second is the object representing the communication with the client.


int uv_accept(uv_stream_t* server, uv_stream_t* client) {
  int err;

  switch (client->type) {
    case UV_NAMED_PIPE:
    case UV_TCP:
      // Store the accepted fd into client
      err = uv__stream_open(client,
                            server->accepted_fd,
                            UV_HANDLE_READABLE | UV_HANDLE_WRITABLE);
      break;
    // ...
  }

  client->flags |= UV_HANDLE_BOUND;
  // Mark the fd as consumed
  server->accepted_fd = -1;
  return err;
}

uv_accept performs two main pieces of logic: it stores the fd for communicating with the client into client, and it marks accepted_fd as consumed, which lets the while loop we saw earlier continue. For the upper layers, the result is an object representing the client: a structure at the Libuv layer, a C++ object at the C++ layer, and a js object at the js layer. The three are encapsulated and associated layer by layer, and the core is the fd in Libuv's client structure, which is the underlying ticket for communicating with the client. Finally, the js layer is called back, executing onconnection in net.js. onconnection wraps yet another Socket object to represent the communication with the client; the Socket holds the C++ layer object, the C++ object holds the Libuv structure, and the Libuv structure holds the fd.


const socket = new Socket({
  handle: clientHandle,
  allowHalfOpen: self.allowHalfOpen,
  pauseOnCreate: self.pauseOnConnect,
  readable: true,
  writable: true
});
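
To make the layer-by-layer association concrete, here is a rough structural sketch; the type and field names are illustrative only (the real ones in Node's source are uv_tcp_t, TCPWrap, and so on):

// The Libuv structure holds the fd and a back-pointer to the C++ wrap
// (this is the handle->data used in OnConnection above).
typedef struct libuv_stream_like {
  int fd;       /* the underlying socket: the real channel to the client */
  void* data;   /* back-pointer to the C++ wrap object */
} libuv_stream_like_t;

// The C++ wrap embeds the Libuv structure and is itself referenced by the
// js Socket's handle, completing the js -> C++ -> Libuv -> fd chain.
typedef struct cpp_wrap_like {
  libuv_stream_like_t handle;
  void* js_socket_handle;
} cpp_wrap_like_t;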
