Implementing face recognition with OpenCV

  • 2020-05-30 20:26:00
  • OfStack

There are mainly the following steps:

1. Face detection

2. Face preprocessing

3. Train a machine learning algorithm on the collected faces

4. Face recognition

5. Finishing touches

Face detection algorithm:

The basic idea behind the Haar-based face detector is that for most frontal faces, the eye region is darker than the forehead and cheeks, and the mouth is darker than the cheeks. About 20 such brightness comparisons are usually enough to decide whether a region is a face, but in practice the full cascade performs thousands of them.

The basic idea of the LBP-based face detector is similar to that of the Haar-based detector, but it compares histograms of local binary patterns, which capture local texture such as edges, corners, and flat areas.

Both face detectors are trained on large sets of images; the trained cascades are stored by OpenCV as XML files for later use.

These cascade classifiers are typically trained on at least 1,000 unique face images and 10,000 non-face images. Training takes a few hours for an LBP detector and about a week for a Haar detector.

The key codes in the project are as follows:


// initDetectors
faceCascade.load(faceCascadeFilename);
eyeCascade1.load(eyeCascadeFilename1);
eyeCascade2.load(eyeCascadeFilename2);

// initWebcam
videoCapture.open(cameraNumber);

cvtColor(img, gray, CV_BGR2GRAY);
// If necessary, shrink the image so detection runs faster, then restore the original size
resize(gray, inputImg, Size(scaledWidth, scaledHeight));
equalizeHist(inputImg, equalizedImg);
cascade.detectMultiScale(equalizedImg, ...);

Face preprocessing:

In practice, the training images (collected beforehand) and the test images (captured from the camera) are usually very different, which leads to poor results, so the data set used for training matters a great deal.

The purpose of face preprocessing is to reduce such problems and improve the reliability of the whole face recognition system.

The simplest form of face preprocessing is histogram equalization with the equalizeHist() function, the same operation used during face detection.

In practice, facial feature detection (for example of the eyes, nose, mouth, and eyebrows) is used to make the algorithm more reliable; this project uses eye detection only.

Use the trained eye detectors that ship with OpenCV. After frontal face detection has produced a face, extract the left-eye and right-eye regions of the face with an eye detector, and histogram-equalize each eye region separately.

This step involves the following operations:

1. Geometric transformation and cropping

Face alignment is important: rotate the face so the eyes are horizontal, scale it so the distance between the eyes is always the same, translate it so the eyes are horizontally centered at the desired height, and crop away the periphery of the face (image background, hair, forehead, ears, and chin).

2. Separate histogram equalization of the left and right halves of the face

3. Smoothing

Use a bilateral filter to reduce image noise.

4. Elliptical mask

Remove the remaining hair and the background from the face image.

The key codes in the project are as follows:


detectBothEyes(const Mat &face, CascadeClassifier &eyeCascade1, CascadeClassifier &eyeCascade2,
               Point &leftEye, Point &rightEye, Rect *searchedLeftEye, Rect *searchedRightEye);
topLeftOfFace = face(Rect(leftX, topY, widthX, heightY));
// Look for the left eye in the left region of the face
detectLargestObject(topLeftOfFace, eyeCascade1, leftEyeRect, topLeftOfFace.cols);
// The right eye is found the same way; this gives the center of each eye
leftEye = Point(leftEyeRect.x + leftEyeRect.width/2, leftEyeRect.y + leftEyeRect.height/2);
// Get the midpoint between the eyes and compute the angle between them
Point2f eyesCenter = Point2f( (leftEye.x + rightEye.x) * 0.5f, (leftEye.y + rightEye.y) * 0.5f );
// Affine warping needs an affine matrix
rot_mat = getRotationMatrix2D(eyesCenter, angle, scale);
// Now transform the face so the detected eyes end up at the desired positions
warpAffine(gray, warped, rot_mat, warped.size());

// First, histogram-equalize the left and right halves of the face separately
equalizeHist(leftSide, leftSide);
equalizeHist(rightSide, rightSide);
// Then merge: the left 1/4 takes pixels from the left-equalized image, the right 1/4
// from the right-equalized image, and the middle 2/4 blends the two.

// Bilateral filtering
bilateralFilter(warped, filtered, 0, 20.0, 2.0);

// Use an elliptical mask to remove some remaining regions
filtered.copyTo(dstImg, mask);

Collect and train faces:

A good data set should contain the kinds of variation in faces that will occur at test time. If only frontal faces will be tested, the training images need only contain frontal faces. A good training set therefore covers many practical situations.

The images collected in this project are taken at least one second apart, and a relative error criterion based on the L2 norm is used to compare the similarity between the pixels of two images:


errorL2 = norm(A, B, CV_L2);
similarity = errorL2 / (double)(A.rows * A.cols);

The result is then compared against a threshold for collecting new faces, to decide whether or not to keep the image.

Many techniques can be used to obtain more training data, such as mirroring a face, adding random noise, changing a few pixels of the face image, rotation, and so on.


// Mirror the face horizontally
flip(preprocessedFace, mirroredFace, 1);

After enough face images have been collected for each person, a machine learning algorithm suited to face recognition must be chosen and trained on the collected data, producing a face recognition model.

Face recognition algorithm:

1. Eigenfaces, also known as PCA (principal component analysis)

2. Fisherfaces, also known as LDA (linear discriminant analysis)

3. Local Binary Pattern Histograms (LBPH)

Other face recognition algorithms: www.face-rec.org/algorithms/

OpenCV provides the cv::Algorithm class, which covers several different algorithms, any of which can be used for simple, generic face recognition.

The OpenCV contrib module has a FaceRecognizer class that implements these face recognition algorithms.


initModule_contrib();
model = Algorithm::create<FaceRecognizer>(facerecAlgorithm);

model->train(preprocessedFaces, faceLabels);

This one line of code executes the entire training algorithm for face recognition.

Face recognition:

1. Face recognition: recognize a person by his or her face

You can simply call the FaceRecognizer::predict() function to identify the person in a photo:

int identity = model->predict(preprocessedFace);

The problem is that predict() always returns one of the trained people, even when the input image does not belong to anyone in the training set. The solution is a confidence measure: if the confidence is too low, the face can be treated as belonging to an unknown person.

2. Face verification: verify whether there are people you want to find in the image

To verify reliability, that is, whether the system correctly rejects an unknown person, face verification is needed.

The method to calculate the confidence is as follows:

The face image is reconstructed from the eigenvectors and eigenvalues, and the input image is compared with the reconstruction. If a person has many faces in the training set, the reconstruction should be very good; if instead the difference is large, the face is probably unknown.

The subspaceProject() function maps the face image into the feature space, and the subspaceReconstruct() function then rebuilds the image from the feature space.

Finishing touches: an interactive GUI

OpenCV's drawing functions make it easy to draw GUI components and handle mouse clicks.

