Detailed explanation of the implementation of nginx process lock

  • 2021-11-01 05:42:05
  • OfStack

Contents

1. The role of nginx process lock
2. Entry-level lock use
3. Implementation of nginx Process Lock
   3.1. Data Structure of Lock
   3.2. Lock/Unlock Implementation Based on fd
   3.3. Initialization of nginx Lock Instance
   3.4. Lock/Unlock Implementation Based on Shared Memory
4. What does a lock mean after all

1. The role of nginx process lock

nginx uses a multi-process concurrency model: several worker processes listen for network requests, and whichever worker accepts a request handles everything that follows. Without a lock, every worker listening on the port could be woken up for the same incoming connection and try to handle it at the same time. The operating system avoids the worst of this, but the so-called thundering herd problem remains: many workers are woken up even though only one of them can actually win the connection (roughly speaking, that is the idea).

Therefore, to avoid having many processes listening at the same time, the listening socket must be handled in an orderly way among the workers. That ordering is exactly what the process lock discussed here provides: only the worker that grabs the lock gets to accept incoming network requests.

That is, the following process:


// Core event loop of a worker process
// ngx_event.c
void
ngx_process_events_and_timers(ngx_cycle_t *cycle)
{
    ngx_uint_t  flags;
    ngx_msec_t  timer, delta;

    if (ngx_timer_resolution) {
        timer = NGX_TIMER_INFINITE;
        flags = 0;

    } else {
        timer = ngx_event_find_timer();
        flags = NGX_UPDATE_TIME;

#if (NGX_WIN32)

        /* handle signals from master in case of network inactivity */

        if (timer == NGX_TIMER_INFINITE || timer > 500) {
            timer = 500;
        }

#endif
    }

    if (ngx_use_accept_mutex) {
        //  A bit of fairness: an overloaded worker (ngx_accept_disabled > 0) skips competing for the lock
        if (ngx_accept_disabled > 0) {
            ngx_accept_disabled--;

        } else {
            //  Only the worker that grabs the lock performs accept() on the listening sockets;
            //  the other workers keep handling read/write events of connections they already accepted
            if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
                return;
            }

            if (ngx_accept_mutex_held) {
                flags |= NGX_POST_EVENTS;

            } else {
                if (timer == NGX_TIMER_INFINITE
                    || timer > ngx_accept_mutex_delay)
                {
                    timer = ngx_accept_mutex_delay;
                }
            }
        }
    }
    //  The rest of the core event processing
    if (!ngx_queue_empty(&ngx_posted_next_events)) {
        ngx_event_move_posted_next(cycle);
        timer = 0;
    }

    delta = ngx_current_msec;

    (void) ngx_process_events(cycle, timer, flags);

    delta = ngx_current_msec - delta;

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "timer delta: %M", delta);

    ngx_event_process_posted(cycle, &ngx_posted_accept_events);

    if (ngx_accept_mutex_held) {
        ngx_shmtx_unlock(&ngx_accept_mutex);
    }

    if (delta) {
        ngx_event_expire_timers();
    }

    ngx_event_process_posted(cycle, &ngx_posted_events);
}
//  Trying to acquire the lock and registering the listening sockets for accept() looks like this:
ngx_int_t
ngx_trylock_accept_mutex(ngx_cycle_t *cycle)
{
    if (ngx_shmtx_trylock(&ngx_accept_mutex)) {

        ngx_log_debug0(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                       "accept mutex locked");

        if (ngx_accept_mutex_held && ngx_accept_events == 0) {
            return NGX_OK;
        }

        if (ngx_enable_accept_events(cycle) == NGX_ERROR) {
            //  Unlock operation 
            ngx_shmtx_unlock(&ngx_accept_mutex);
            return NGX_ERROR;
        }

        ngx_accept_events = 0;
        ngx_accept_mutex_held = 1;

        return NGX_OK;
    }

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "accept mutex lock failed: %ui", ngx_accept_mutex_held);

    if (ngx_accept_mutex_held) {
        if (ngx_disable_accept_events(cycle, 0) == NGX_ERROR) {
            return NGX_ERROR;
        }

        ngx_accept_mutex_held = 0;
    }

    return NGX_OK;
}

The core point needs little explanation: only the worker that grabs the lock may perform accept(); a worker that fails to grab it must actively give up the accept events it registered earlier (ngx_disable_accept_events). As a result, at any given moment only one worker is accepting new connections.
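
To see the same pattern outside nginx, here is a minimal toy model (a sketch with an assumed port and lock-file path, not nginx code): a parent process creates one listening socket and a lock file, forks two "workers", and each worker takes a blocking file lock before calling accept(), so only one worker accepts at a time.


// Toy model of "grab the lock, then accept()" with fork()ed workers (illustrative sketch).
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

static void set_lock(int fd, short type)         /* type: F_WRLCK or F_UNLCK */
{
    struct flock fl;
    memset(&fl, 0, sizeof(fl));
    fl.l_type = type;
    fl.l_whence = SEEK_SET;
    fcntl(fd, F_SETLKW, &fl);                    /* blocking, like ngx_lock_fd() */
}

int main(void)
{
    int lockfd = open("/tmp/toy_accept.lock", O_RDWR | O_CREAT, 0600);  /* assumed path */

    int ls = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(8090);                 /* assumed port */
    bind(ls, (struct sockaddr *) &addr, sizeof(addr));
    listen(ls, 128);

    for (int i = 0; i < 2; i++) {                /* two "workers" */
        if (fork() == 0) {
            for ( ;; ) {
                set_lock(lockfd, F_WRLCK);       /* only the lock holder may accept */
                int c = accept(ls, NULL, NULL);
                set_lock(lockfd, F_UNLCK);       /* release before handling the request */
                if (c >= 0) {
                    dprintf(c, "handled by worker %d\n", (int) getpid());
                    close(c);
                }
            }
        }
    }

    wait(NULL);                                  /* the parent just waits */
    return 0;
}

Unlike nginx, this toy holds the lock across the blocking accept() call itself; nginx instead registers and unregisters the listening sockets with its event loop and releases the lock right after the accept events have been processed.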

2. Entry-level lock use

For most developers, a lock is an interface provided by the programming language or its standard library, used according to a fixed pattern.

For example, in Java there are synchronized and the Lock interface, along with the java.util.concurrent classes such as CountDownLatch, CyclicBarrier, ReentrantLock, ReentrantReadWriteLock and Semaphore...

For example, in Python there are threading.Lock() and threading.RLock()...

For example, in PHP there is flock()...

These count as entry level because they are ready-made APIs: you only need to call them according to the documented usage, with no deeper knowledge required. Using them well in every detail, however, is far from simple.
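
The same entry-level usage exists in C, for example with a pthreads mutex (note this locks threads inside a single process, unlike the nginx process lock; a minimal sketch just to show the API pattern):


// Entry-level lock use in C with a pthreads mutex: init, lock, unlock.
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg)
{
    (void) arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&mtx);      /* blocks until the lock is acquired */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&mtx);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 thanks to the mutex */
    return 0;
}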

3. Implementation of nginx Process Lock

Because nginx is written in C, its implementation sits much closer to the operating system. Walking through it shows how a lock is actually built, which should give us a deeper understanding of what a lock really means.

Generally speaking, a lock involves a few broad aspects: the definition of the lock's data structure, the locking logic, the unlocking logic, plus notification and timeout mechanisms. Let's look at the nginx implementation from these angles:

3.1. Data Structure of Lock

First define what fields the lock contains, then create an instance and share it among the worker processes.


// event/ngx_event.c
//  Global accept lock variable
ngx_shmtx_t           ngx_accept_mutex;
//  Definition of the lock structure itself
//  the atomic type is a volatile-qualified machine word
typedef volatile ngx_atomic_uint_t  ngx_atomic_t;
typedef struct {
#if (NGX_HAVE_ATOMIC_OPS)
    //  One variant implements the lock with an atomic variable that lives in a shared memory area
    ngx_atomic_t  *lock;
#if (NGX_HAVE_POSIX_SEM)
    ngx_atomic_t  *wait;
    ngx_uint_t     semaphore;
    sem_t          sem;
#endif
#else
    //  The other variant implements the lock with an fd, backed by a file instance
    ngx_fd_t       fd;
    u_char        *name;
#endif
    ngx_uint_t     spin;
} ngx_shmtx_t;
//  Shared memory data structure definition 
typedef struct {
    u_char      *addr;
    size_t       size;
    ngx_str_t    name;
    ngx_log_t   *log;
    ngx_uint_t   exists;   /* unsigned  exists:1;  */
} ngx_shm_t;

3.2. Lock/Unlock Implementation Based on fd

With a lock instance in hand, we can lock and unlock it. nginx has two lock implementations, selected mainly by platform capabilities: one based on a file (fd) and one based on shared memory with atomic operations. The fd-based, i.e. file-based, implementation is the heavier of the two. It looks like this:


// ngx_shmtx.c
ngx_uint_t
ngx_shmtx_trylock(ngx_shmtx_t *mtx)
{
    ngx_err_t  err;

    err = ngx_trylock_fd(mtx->fd);

    if (err == 0) {
        return 1;
    }

    if (err == NGX_EAGAIN) {
        return 0;
    }

#if __osf__ /* Tru64 UNIX */

    if (err == NGX_EACCES) {
        return 0;
    }

#endif

    ngx_log_abort(err, ngx_trylock_fd_n " %s failed", mtx->name);

    return 0;
}
// os/unix/ngx_file.c
// 1.  Locking implementation: trylock (F_SETLK) and blocking lock (F_SETLKW)
ngx_err_t
ngx_trylock_fd(ngx_fd_t fd)
{
    struct flock  fl;

    ngx_memzero(&fl, sizeof(struct flock));
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;

    if (fcntl(fd, F_SETLK, &fl) == -1) {
        return ngx_errno;
    }

    return 0;
}
// os/unix/ngx_file.c
ngx_err_t
ngx_lock_fd(ngx_fd_t fd)
{
    struct flock  fl;

    ngx_memzero(&fl, sizeof(struct flock));
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;
    //  Call the blocking lock provided by the system (F_SETLKW waits until the lock is granted)
    if (fcntl(fd, F_SETLKW, &fl) == -1) {
        return ngx_errno;
    }

    return 0;
}

// 2.  Unlocking implementation 
// core/ngx_shmtx.c
void
ngx_shmtx_unlock(ngx_shmtx_t *mtx)
{
    ngx_err_t  err;

    err = ngx_unlock_fd(mtx->fd);

    if (err == 0) {
        return;
    }

    ngx_log_abort(err, ngx_unlock_fd_n " %s failed", mtx->name);
}
// os/unix/ngx_file.c
ngx_err_t
ngx_unlock_fd(ngx_fd_t fd)
{
    struct flock  fl;

    ngx_memzero(&fl, sizeof(struct flock));
    fl.l_type = F_UNLCK;
    fl.l_whence = SEEK_SET;

    if (fcntl(fd, F_SETLK, &fl) == -1) {
        return  ngx_errno;
    }

    return 0;
}

The key is the fcntl() call, a system API. Seen from outside, it works because a file lock taken by one process is visible to every other process that shares the file, which is exactly what a process lock needs. Note the semantic difference between trylock and lock: trylock (F_SETLK) returns immediately with an indication of success or failure, while lock (F_SETLKW) gives no such indication and simply blocks the caller until the lock is acquired.
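
A minimal demonstration of that difference between parent and child processes (a sketch with an assumed lock-file path, not nginx code): the parent takes the write lock; the child's F_SETLK attempt fails immediately with EAGAIN (or EACCES on some systems), while its F_SETLKW attempt blocks until the parent unlocks.


// F_SETLK (try, returns a result) versus F_SETLKW (blocks) on the same file.
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static int set_lock(int fd, short type, int cmd)   /* cmd: F_SETLK or F_SETLKW */
{
    struct flock fl;
    memset(&fl, 0, sizeof(fl));
    fl.l_type = type;                              /* F_WRLCK or F_UNLCK */
    fl.l_whence = SEEK_SET;                        /* lock the whole file */
    return fcntl(fd, cmd, &fl) == -1 ? errno : 0;
}

int main(void)
{
    int fd = open("/tmp/demo.lock", O_RDWR | O_CREAT, 0600);   /* assumed path */

    set_lock(fd, F_WRLCK, F_SETLK);                /* parent grabs the write lock */

    if (fork() == 0) {
        int err = set_lock(fd, F_WRLCK, F_SETLK);  /* trylock: fails at once */
        printf("child trylock: %s\n", strerror(err));

        set_lock(fd, F_WRLCK, F_SETLKW);           /* lock: blocks until the parent unlocks */
        printf("child got the lock after waiting\n");
        _exit(0);
    }

    sleep(2);                                      /* keep the lock for a while */
    set_lock(fd, F_UNLCK, F_SETLK);                /* release; the blocked child wakes up */
    wait(NULL);
    return 0;
}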

3.3. Initialization of nginx Lock Instance

In many systems, initializing a lock instance is just assigning a variable. In nginx there are a few extra concerns. First, every worker must see the same instance, or at least an equivalent one. Since each worker is a process fork()ed from the master, a lock instantiated in the master before the fork is naturally inherited by every worker. So, is that all there is to it?


//  Initialization of the shared lock: runs in the nginx master process and is inherited by the workers after fork()
// event/ngx_event.c
static ngx_int_t
ngx_event_module_init(ngx_cycle_t *cycle)
{
    void              ***cf;
    u_char              *shared;
    size_t               size, cl;
    //  A segment of shared memory
    ngx_shm_t            shm;
    ngx_time_t          *tp;
    ngx_core_conf_t     *ccf;
    ngx_event_conf_t    *ecf;

    cf = ngx_get_conf(cycle->conf_ctx, ngx_events_module);
    ecf = (*cf)[ngx_event_core_module.ctx_index];

    if (!ngx_test_config && ngx_process <= NGX_PROCESS_MASTER) {
        ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0,
                      "using the \"%s\" event method", ecf->name);
    }

    ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module);

    ngx_timer_resolution = ccf->timer_resolution;

#if !(NGX_WIN32)
    {
    ngx_int_t      limit;
    struct rlimit  rlmt;

    if (getrlimit(RLIMIT_NOFILE, &rlmt) == -1) {
        ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno,
                      "getrlimit(RLIMIT_NOFILE) failed, ignored");

    } else {
        if (ecf->connections > (ngx_uint_t) rlmt.rlim_cur
            && (ccf->rlimit_nofile == NGX_CONF_UNSET
                || ecf->connections > (ngx_uint_t) ccf->rlimit_nofile))
        {
            limit = (ccf->rlimit_nofile == NGX_CONF_UNSET) ?
                         (ngx_int_t) rlmt.rlim_cur : ccf->rlimit_nofile;

            ngx_log_error(NGX_LOG_WARN, cycle->log, 0,
                          "%ui worker_connections exceed "
                          "open file resource limit: %i",
                          ecf->connections, limit);
        }
    }
    }
#endif /* !(NGX_WIN32) */


    if (ccf->master == 0) {
        return NGX_OK;
    }

    if (ngx_accept_mutex_ptr) {
        return NGX_OK;
    }


    /* cl should be equal to or greater than cache line size */

    cl = 128;

    size = cl            /* ngx_accept_mutex */
           + cl          /* ngx_connection_counter */
           + cl;         /* ngx_temp_number */

#if (NGX_STAT_STUB)

    size += cl           /* ngx_stat_accepted */
           + cl          /* ngx_stat_handled */
           + cl          /* ngx_stat_requests */
           + cl          /* ngx_stat_active */
           + cl          /* ngx_stat_reading */
           + cl          /* ngx_stat_writing */
           + cl;         /* ngx_stat_waiting */

#endif

    shm.size = size;
    ngx_str_set(&shm.name, "nginx_shared_zone");
    shm.log = cycle->log;
    //  Allocate the shared memory segment, implemented with mmap()
    if (ngx_shm_alloc(&shm) != NGX_OK) {
        return NGX_ERROR;
    }

    shared = shm.addr;

    ngx_accept_mutex_ptr = (ngx_atomic_t *) shared;
    ngx_accept_mutex.spin = (ngx_uint_t) -1;
    //  Create the process lock on top of the shared file or shared memory, so it can coordinate multiple processes
    if (ngx_shmtx_create(&ngx_accept_mutex, (ngx_shmtx_sh_t *) shared,
                         cycle->lock_file.data)
        != NGX_OK)
    {
        return NGX_ERROR;
    }

    ngx_connection_counter = (ngx_atomic_t *) (shared + 1 * cl);

    (void) ngx_atomic_cmp_set(ngx_connection_counter, 0, 1);

    ngx_log_debug2(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "counter: %p, %uA",
                   ngx_connection_counter, *ngx_connection_counter);

    ngx_temp_number = (ngx_atomic_t *) (shared + 2 * cl);

    tp = ngx_timeofday();

    ngx_random_number = (tp->msec << 16) + ngx_pid;

#if (NGX_STAT_STUB)

    ngx_stat_accepted = (ngx_atomic_t *) (shared + 3 * cl);
    ngx_stat_handled = (ngx_atomic_t *) (shared + 4 * cl);
    ngx_stat_requests = (ngx_atomic_t *) (shared + 5 * cl);
    ngx_stat_active = (ngx_atomic_t *) (shared + 6 * cl);
    ngx_stat_reading = (ngx_atomic_t *) (shared + 7 * cl);
    ngx_stat_writing = (ngx_atomic_t *) (shared + 8 * cl);
    ngx_stat_waiting = (ngx_atomic_t *) (shared + 9 * cl);

#endif

    return NGX_OK;
}
// core/ngx_shmtx.c
// 1.  File-based lock creation, using an fd
ngx_int_t
ngx_shmtx_create(ngx_shmtx_t *mtx, ngx_shmtx_sh_t *addr, u_char *name)
{
    //  Created by the master process, so every worker can simply use the inherited instance
    if (mtx->name) {
        //  If the lock was already created, the fd is already set; reuse it instead of creating a new one
        //  (the fd is backed by a file instance)
        if (ngx_strcmp(name, mtx->name) == 0) {
            mtx->name = name;
            return NGX_OK;
        }

        ngx_shmtx_destroy(mtx);
    }
    //  Create the lock file whose fd will be shared
    mtx->fd = ngx_open_file(name, NGX_FILE_RDWR, NGX_FILE_CREATE_OR_OPEN,
                            NGX_FILE_DEFAULT_ACCESS);

    if (mtx->fd == NGX_INVALID_FILE) {
        ngx_log_error(NGX_LOG_EMERG, ngx_cycle->log, ngx_errno,
                      ngx_open_file_n " \"%s\" failed", name);
        return NGX_ERROR;
    }
    //  Once created, the file can be deleted; all later locking operates only on the open fd
    if (ngx_delete_file(name) == NGX_FILE_ERROR) {
        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, ngx_errno,
                      ngx_delete_file_n " \"%s\" failed", name);
    }

    mtx->name = name;

    return NGX_OK;
}

// 2.  Creation of Shared Lock Based on Shared Memory 
// ngx_shmtx.c
ngx_int_t
ngx_shmtx_create(ngx_shmtx_t *mtx, ngx_shmtx_sh_t *addr, u_char *name)
{
    mtx->lock = &addr->lock;

    if (mtx->spin == (ngx_uint_t) -1) {
        return NGX_OK;
    }

    mtx->spin = 2048;

#if (NGX_HAVE_POSIX_SEM)

    mtx->wait = &addr->wait;

    if (sem_init(&mtx->sem, 1, 0) == -1) {
        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, ngx_errno,
                      "sem_init() failed");
    } else {
        mtx->semaphore = 1;
    }

#endif

    return NGX_OK;
}
// os/unix/ngx_shmem.c
ngx_int_t
ngx_shm_alloc(ngx_shm_t *shm)
{
    shm->addr = (u_char *) mmap(NULL, shm->size,
                                PROT_READ|PROT_WRITE,
                                MAP_ANON|MAP_SHARED, -1, 0);

    if (shm->addr == MAP_FAILED) {
        ngx_log_error(NGX_LOG_ALERT, shm->log, ngx_errno,
                      "mmap(MAP_ANON|MAP_SHARED, %uz) failed", shm->size);
        return NGX_ERROR;
    }

    return NGX_OK;
}

The fd-based lock is essentially built on the file system behind it: because the underlying file is visible to all processes, operating on the shared fd amounts to operating on one common lock.

3.4. Lock/Unlock Implementation Based on Shared Memory

So-called shared memory is simply a common memory region that lives outside any single process's private address space and is managed by the operating system. That is exactly what the mmap() call we saw earlier creates: one block of memory mapped into every worker.


// ngx_shmtx.c
ngx_uint_t
ngx_shmtx_trylock(ngx_shmtx_t *mtx)
{
    //  Directly compare-and-swap the value in the shared memory area:
    //  if the CAS from 0 to our pid succeeds, the lock has been acquired
    return (*mtx->lock == 0 && ngx_atomic_cmp_set(mtx->lock, 0, ngx_pid));
}

// Shared-memory version of unlock: CAS the pid back to 0, then notify a waiter
void
ngx_shmtx_unlock(ngx_shmtx_t *mtx)
{
    if (mtx->spin != (ngx_uint_t) -1) {
        ngx_log_debug0(NGX_LOG_DEBUG_CORE, ngx_cycle->log, 0, "shmtx unlock");
    }

    if (ngx_atomic_cmp_set(mtx->lock, ngx_pid, 0)) {
        ngx_shmtx_wakeup(mtx);
    }
}
//  Notify the waiting process 
static void
ngx_shmtx_wakeup(ngx_shmtx_t *mtx)
{
#if (NGX_HAVE_POSIX_SEM)
    ngx_atomic_uint_t  wait;

    if (!mtx->semaphore) {
        return;
    }

    for ( ;; ) {

        wait = *mtx->wait;

        if ((ngx_atomic_int_t) wait <= 0) {
            return;
        }

        if (ngx_atomic_cmp_set(mtx->wait, wait, wait - 1)) {
            break;
        }
    }

    ngx_log_debug1(NGX_LOG_DEBUG_CORE, ngx_cycle->log, 0,
                   "shmtx wake %uA", wait);

    if (sem_post(&mtx->sem) == -1) {
        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, ngx_errno,
                      "sem_post() failed while wake shmtx");
    }

#endif
}

The shared-memory lock is basically a CAS on a memory variable; the only difference from an ordinary in-process lock is that the variable lives in the shared region.
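
To make this concrete outside nginx, here is a minimal sketch (it uses the GCC/Clang __sync builtin as a stand-in for ngx_atomic_cmp_set, plus an arbitrary iteration count): an anonymous MAP_SHARED mapping holds the lock word and a counter, and fork()ed workers spin on the same 0 -> pid CAS transition seen in ngx_shmtx_trylock().


// CAS lock in shared memory across fork()ed processes (illustrative sketch).
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

typedef struct {
    volatile long lock;      /* 0 = free, otherwise the holder's pid, like *mtx->lock */
    long          counter;   /* data protected by the lock */
} shared_t;

static void shm_lock(shared_t *s)
{
    /* spin until the CAS 0 -> pid succeeds, i.e. until we own the lock */
    while (!(s->lock == 0 &&
             __sync_bool_compare_and_swap(&s->lock, 0, (long) getpid()))) {
        sched_yield();
    }
}

static void shm_unlock(shared_t *s)
{
    /* only the owner resets pid -> 0, mirroring ngx_shmtx_unlock() */
    __sync_bool_compare_and_swap(&s->lock, (long) getpid(), 0);
}

int main(void)
{
    /* the same kind of mapping ngx_shm_alloc() creates */
    shared_t *s = mmap(NULL, sizeof(shared_t), PROT_READ|PROT_WRITE,
                       MAP_ANON|MAP_SHARED, -1, 0);

    for (int i = 0; i < 2; i++) {
        if (fork() == 0) {
            for (int n = 0; n < 100000; n++) {
                shm_lock(s);
                s->counter++;            /* critical section in shared memory */
                shm_unlock(s);
            }
            _exit(0);
        }
    }

    wait(NULL);
    wait(NULL);
    printf("counter = %ld\n", s->counter);   /* 200000 when the lock works */
    return 0;
}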

4. What does a lock mean after all

We have all seen many locks, and yet it is still hard to get past this question.

What exactly is a lock? At its core, a lock is just a flag: when a participant sees the flag it voluntarily stops, or proceeds, and that cooperative behavior is what makes the whole thing look like a lock. The flag can live in an object, in a global variable, or in some external medium such as a file, redis, or zk; that makes little real difference. The crux of the problem is not where the flag is stored, but how to set it safely.
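
"Safely" is the hard part. The following snippet (plain C, using a GCC/Clang atomic builtin) contrasts the unsafe check-then-set with an atomic compare-and-swap; it only illustrates the race, it is not a complete lock.


// Why the flag must be set atomically (illustrative snippet, not a complete lock).
volatile int flag = 0;

// Unsafe: two processes or threads can both observe flag == 0
// and both believe they acquired the "lock".
int bad_try_lock(void)
{
    if (flag == 0) {   /* check ...                                   */
        flag = 1;      /* ... then set: a race window sits in between */
        return 1;
    }
    return 0;
}

// Safe: the check and the set happen as one atomic compare-and-swap.
int good_try_lock(void)
{
    return __sync_bool_compare_and_swap(&flag, 0, 1);
}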

To implement a lock, some strong underlying guarantee is generally needed, for example a CPU-level CAS instruction, or serializing all operations through a single queue at the application level.

As for memory locks, file locks and higher-level (for example, distributed) locks, each has its own application scenarios, and choosing the right kind of lock for the job is what separates a good design from a poor one. At this point, you should be able to judge for yourself!

That concludes this detailed look at the implementation of the nginx process lock. For more on nginx process locks, please see the other related articles on this site!

