Implementing image correction with OpenCV


1. Requirements analysis

The requirements are:

1. Use the affine transformation functions in OpenCV to perform basic transformations on an image, such as translation, rotation, and scaling.
2. Learn the principle of perspective transformation, apply a perspective transformation to a rectangle, and draw the result. First call the OpenCV function to perform the perspective transformation, then write your own code to implement it.
3. Photograph a piece of paper at an oblique angle, find its outline, and extract the position of the paper.
4. Assuming the position of the deformed paper has been found by the image processing algorithm, transform the tilted paper to obtain a top-down view of it and achieve document correction.

The analysis:

1. First call the OpenCV functions to translate, rotate, and scale the image, and then perform affine and perspective transformations.
2. Implement affine and perspective transformations by hand. Note that an affine transformation is a special case of a perspective transformation, so only the perspective transformation needs to be implemented.
3. Document correction (a minimal sketch of this pipeline follows this list):

(1) Filtering. Since the text in the document acts as noise, mean filtering and a morphological closing operation are used together.
(2) Edge extraction. Library functions are used to extract the edge information.
(3) Edge recognition. The classical Hough transform yields the boundary line equations, from which the coordinates of the four corners of the document are computed.
(4) Perspective transformation. A library function is called to rectify the document.

4. Since the source code for the first three requirements and the last one did not fit well in the same project, the code and comments for the first three requirements are in the project "homework 2_2". The development environment is Windows 10, VS2017, and OpenCV 3.4.3.
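
As a quick reference before the full code, here is a minimal sketch of that pipeline. It assumes an input file named task2.png; the blur kernel, Canny thresholds, Hough threshold, placeholder corner coordinates, and output size are illustrative values, not the tuned parameters used later in input_solve.

// Minimal pipeline sketch (assumed file name and tuning values; see input_solve below for the real implementation)
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

void document_correction_sketch()
{
 Mat gray = imread("task2.png", IMREAD_GRAYSCALE);    // read as grayscale
 // (1) Filtering: closing suppresses the text, mean filtering smooths the rest
 morphologyEx(gray, gray, MORPH_CLOSE, getStructuringElement(MORPH_RECT, Size(5, 5)));
 blur(gray, gray, Size(5, 5));
 // (2) Edge extraction
 Mat edges;
 Canny(gray, edges, 30, 90);
 // (3) Edge recognition: boundary lines in (rho, theta) form
 std::vector<Vec2f> lines;
 HoughLines(edges, lines, 1, CV_PI / 180, 150);
 // ... intersect the four boundary lines to obtain the corners (see getCrossPoint below) ...
 Point2f src[4] = { {120, 80}, {820, 95}, {100, 700}, {860, 690} };  // placeholder corners
 Point2f dst[4] = { {50, 50}, {500, 50}, {50, 600}, {500, 600} };
 // (4) Perspective transformation rectifies the document
 Mat H = getPerspectiveTransform(src, dst);
 Mat corrected;
 warpPerspective(gray, corrected, H, Size(550, 650));
 imwrite("sketch_corrected.png", corrected);
}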

2. Implementation

Note:

All of the following functions are written in the header file image_solve.h. The functions in that header must be called from main for anything to run.
The image input path needs to be changed for your environment.

1. Project: implementation of homework 2_2

(1) Call the OpenCV functions and write a main_transform function. After main calls it with an input image, the image is scaled down, translated, rotated, perspective-transformed and affine-transformed, and the results are displayed and saved (in fact, I later commented out the OpenCV affine and perspective calls and used my own functions instead of the built-in ones).
This is a direct call to library functions, so there is not much to discuss.

The following are the result images for rotation, perspective, translation, scaling and affine transformation:

(2) Manually implement the perspective and affine transformation functions, toushibianhuan and toushibianhuan_gai_fangshebianhuan, and call them in main_transform.

Implementation of the perspective transformation:

Note that an affine transformation is a special case of a perspective transformation, so once the perspective transformation is implemented, the affine transformation follows.
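
In homogeneous coordinates this relationship can be written as follows (a standard formulation, restated here; $u, v$ are the transformed pixel coordinates):

$$
\begin{pmatrix} x' \\ y' \\ w \end{pmatrix}
=
\begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad
u = \frac{x'}{w}, \quad v = \frac{y'}{w}.
$$

An affine transformation is the special case $h_{31} = h_{32} = 0$, $h_{33} = 1$, so $w = 1$ and the division changes nothing. This is why the code below lifts the 2x3 matrix returned by getAffineTransform to a 3x3 matrix by appending the row (0 0 1) (the comMatC helper) and then reuses the perspective routine.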

Implementation of the perspective function:

First use getPerspectiveTransform to obtain the transformation matrix, then look at the perspective function itself.

The toushibianhuan function takes three parameters:

Parameter 1: input image matrix for the perspective transform, Mat. Parameter 2: output image container matrix, Mat. Parameter 3: transformation matrix, Mat.

Inside the function, first define a position matrix position_maxtri holding the homogeneous coordinates of the four image corners before the transformation; multiplying it by the transformation matrix gives the position matrix of the four corners after the transformation.

Using the max and min functions, the extreme coordinates of the transformed corners are found, from which the height and width of the output image are computed.

The key step is then to define, update, and compute the two remapping matrices: for every output pixel, map1 stores the corresponding x coordinate in the original image and map2 the corresponding y coordinate, i.e. the inverse mapping consumed by remap.
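
Concretely, for each output pixel at column $j$ and row $i$, the loop below applies the inverse transformation matrix $T^{-1}$ to the shifted homogeneous coordinates and divides by the third component (this restates what the code computes; $x_{\min}$ and $y_{\min}$ denote the minimum transformed corner coordinates min_kuan and min_gao):

$$
\begin{pmatrix} x' \\ y' \\ w \end{pmatrix}
= T^{-1} \begin{pmatrix} j + x_{\min} \\ i + y_{\min} \\ 1 \end{pmatrix},
\qquad
\mathrm{map1}(i, j) = \frac{x'}{w}, \quad \mathrm{map2}(i, j) = \frac{y'}{w}.
$$

remap then samples the input image at these source coordinates with bilinear interpolation to produce the output.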


/*-----------------------------------------------------------------------------------------------------------------
Copyright (C),2018---, HUST Liu
 File name: image_solve.h
 Author: Liu Junyuan  Version: 1  Date: 2018.10.3
 ------------------------------------------------------------------------------------------------------------------
 Description:
  The main functions of the document correction project .cpp are kept here.

-------  -----------   ------------  ------------   -----------  ---------
 Function description:
 comMatC                             joins two matrices vertically
 toushibianhuan_gai_fangshebianhuan  performs an affine transformation
 toushibianhuan                      performs a perspective transformation
 main_transform                      calls functions to process the image: translation, scaling, rotation, affine and perspective transformation
 input_solve                         corrects a document: opens the image, filters, extracts edges, draws edges, and applies a perspective transformation
--------------------------------------------------------------------------------
 Others: NONE
 Function List: comMatC, toushibianhuan, toushibianhuan_gai_fangshebianhuan, input_solve
 --------------------------------------------------------------------------------
 History: NONE

-------------------------------------------------------------------------------------*/

/*-----------------------------------------------------------------
 Standard OpenCV includes:
 header files and namespaces
------------------------------------------------------------------*/
#include <opencv2/opencv.hpp>
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace std;
using namespace cv;

/*-------------------------------------------------------------------------------
Function: comMatC
 Description: Concatenates two matrices vertically (one above the other) and outputs the result
 --------------------------------------------------------------------------------
 Calls: create, copyTo
 Called By: main_transform
 Table Accessed: NONE
 Table Updated: NONE
 --------------------------------------------------------------------------------
 Input:
  Parameter 1: the upper matrix, Mat
  Parameter 2: the lower matrix, Mat
  Parameter 3: the output container after concatenation, Mat
 Output: the concatenated matrix
 Return: the output matrix
 Others: the assertion fails if the column counts differ!
---------------------------------------------------------------------------------*/
Mat comMatC(Mat Matrix1, Mat Matrix2, Mat &MatrixCom)
{
 CV_Assert(Matrix1.cols == Matrix2.cols);
 MatrixCom.create(Matrix1.rows + Matrix2.rows, Matrix1.cols, Matrix1.type());
 Mat temp = MatrixCom.rowRange(0, Matrix1.rows);
 Matrix1.copyTo(temp);
 Mat temp1 = MatrixCom.rowRange(Matrix1.rows, Matrix1.rows + Matrix2.rows);
 Matrix2.copyTo(temp1);
 return MatrixCom;
}

/*--------------------------------------------------------------------------------
Function: toushibianhuan
 Description: Performs a perspective transformation: transforms input_image according to the tp_Translater_maxtri matrix and writes the result into another image container
 -------------------------------------------------------------------------------
 Calls: max, min
 Called By: main_transform
 Table Accessed: NONE
 Table Updated: NONE
 ----------------------------------------------------------------------------------
 Input:
  Parameter 1: input image matrix for the perspective transform, Mat
  Parameter 2: output image container matrix, Mat
  Parameter 3: transformation matrix, Mat
 Output: no return value; prints the original corner position matrix, the transformed corner matrix, and the transformation matrix to the console
 Return: NONE
 Others: NONE
-----------------------------------------------------------------*/
void toushibianhuan(Mat input_image, Mat &output, Mat tp_Translater_maxtri)
{
 int qiu_max_flag;
 int j;
 int i;
 // Define the vertex position matrix 
 Mat position_maxtri(3, 4, CV_64FC1, Scalar(1));
 position_maxtri.at < double >(0, 0) = 0;
 position_maxtri.at < double >(1, 0) = 0;
 position_maxtri.at < double >(1, 1) = 0;
 position_maxtri.at < double >(0, 2) = 0;
 position_maxtri.at < double >(1, 2) = input_image.rows;
 position_maxtri.at < double >(0, 3) = input_image.cols;
 position_maxtri.at < double >(1, 3) = input_image.rows;
 position_maxtri.at < double >(0, 1) = input_image.cols;
 Mat new_corner = tp_Translater_maxtri * position_maxtri;
 // Print the three matrices for inspection
 cout << "coner_maxtri" << new_corner << ";" << endl << endl;
 cout << "pos_maxtri" << position_maxtri << ";" << endl << endl;
 cout << "T_maxtri" << tp_Translater_maxtri << ";" << endl << endl;
 // To compute the output size, initialize the extreme coordinates with the first transformed corner
 double max_kuan = new_corner.at < double >(0, 0) / new_corner.at < double >(2, 0);
 double min_kuan = new_corner.at < double >(0, 0) / new_corner.at < double >(2, 0);
 double max_gao = new_corner.at < double >(1, 0) / new_corner.at < double >(2, 0);
 double min_gao = new_corner.at < double >(1, 0) / new_corner.at < double >(2, 0);
 for (qiu_max_flag = 1; qiu_max_flag < 4; qiu_max_flag++)
 {
 max_kuan = max(max_kuan,
 new_corner.at < double >(0, qiu_max_flag) / new_corner.at < double >(2, qiu_max_flag));
 min_kuan = min(min_kuan,
 new_corner.at < double >(0, qiu_max_flag) / new_corner.at < double >(2, qiu_max_flag));
 max_gao = max(max_gao,
 new_corner.at < double >(1, qiu_max_flag) / new_corner.at < double >(2, qiu_max_flag));
 min_gao = min(min_gao,
 new_corner.at < double >(1, qiu_max_flag) / new_corner.at < double >(2, qiu_max_flag));
 }
 // Create the output image and the remapping matrices map1, map2 (output-to-input maps for remap)
 output.create(int(max_gao - min_gao), int(max_kuan - min_kuan), input_image.type());
 Mat map1(output.size(), CV_32FC1);
 Mat map2(output.size(), CV_32FC1);
 Mat tp_point(3, 1, CV_32FC1, 1);
 Mat point(3, 1, CV_32FC1, 1);
 tp_Translater_maxtri.convertTo(tp_Translater_maxtri, CV_32FC1);
 Mat Translater_inv = tp_Translater_maxtri.inv();
 // The core step is to update the mapping matrix with matrix multiplication 
 for (i = 0; i < output.rows; i++)
 {
 for (j = 0; j < output.cols; j++)
 {
 point.at<float>(0) = j + min_kuan;
 point.at<float>(1) = i + min_gao;
 tp_point = Translater_inv * point;
 map1.at<float>(i, j) = tp_point.at<float>(0) / tp_point.at<float>(2);
 map2.at<float>(i, j) = tp_point.at<float>(1) / tp_point.at<float>(2);
 }
 }

 remap(input_image, output, map1, map2, CV_INTER_LINEAR);
}

/*--------------------------------------------------------------------------------
Function: toushibianhuan_gai_fangshebianhuan
 Description: Performs an affine transformation: transforms input_image according to the Translater_maxtri matrix and writes the result into another image container
 ------------------------------------------------------------------------------------
 Calls: comMatC, max, min
 Called By: main_transform
 Table Accessed: NONE
 Table Updated: NONE
 ------------------------------------------------------------------------------------
 Input:
  Parameter 1: input image matrix for the transform, Mat
  Parameter 2: output image matrix, Mat
  Parameter 3: transformation matrix (2x3 affine matrix), Mat
 Output: no return value; prints the original corner position matrix, the transformed corner matrix, and the transformation matrix to the console
 Return: NONE
 Others: NONE
-------------------------------------------------------------------------------*/
void toushibianhuan_gai_fangshebianhuan(Mat input_image, Mat &output, Mat Translater_maxtri)
{
 int width = 0;
 int height = 0;
 Mat tp_Translater_maxtri;
 Mat position_maxtri(3, 4, CV_64FC1, Scalar(1));
 Mat one_vector(1, 3, CV_64FC1, Scalar(0));
 one_vector.at<double>(0, 2) = 1.;
 comMatC(Translater_maxtri, one_vector, tp_Translater_maxtri);
 position_maxtri.at < double >(1, 1) = 0;
 position_maxtri.at < double >(0, 2) = 0;
 position_maxtri.at < double >(0, 0) = 0;
 position_maxtri.at < double >(1, 0) = 0;
 position_maxtri.at < double >(0, 3) = input_image.cols;
 position_maxtri.at < double >(1, 3) = input_image.rows;
 position_maxtri.at < double >(0, 1) = input_image.cols;
 position_maxtri.at < double >(1, 2) = input_image.rows;
 Mat new_corner = tp_Translater_maxtri * position_maxtri;
 cout << "coner_maxtri" << new_corner << ";" << endl << endl;
 cout << "pos_maxtri" << position_maxtri << ";" << endl << endl;
 cout << "T_maxtri" << tp_Translater_maxtri << ";" << endl << endl;
 double max_kuan = new_corner.at < double >(0, 0) / new_corner.at < double >(2, 0);
 double min_kuan = new_corner.at < double >(0, 0) / new_corner.at < double >(2, 0);
 double max_gao = new_corner.at < double >(1, 0) / new_corner.at < double >(2, 0);
 double min_gao = new_corner.at < double >(1, 0) / new_corner.at < double >(2, 0);
 for (int flag = 1; flag < 4; flag++)
 {
 max_kuan = max(max_kuan, new_corner.at < double >(0, flag) / new_corner.at < double >(2, flag));
 min_kuan = min(min_kuan, new_corner.at < double >(0, flag) / new_corner.at < double >(2, flag));
 max_gao = max(max_gao, new_corner.at < double >(1, flag) / new_corner.at < double >(2, flag));
 min_gao = min(min_gao, new_corner.at < double >(1, flag) / new_corner.at < double >(2, flag));
 }
 output.create(int(max_gao - min_gao), int(max_kuan - min_kuan), input_image.type());
 Mat map1(output.size(), CV_32FC1);
 Mat map2(output.size(), CV_32FC1);
 Mat tp_point(3, 1, CV_32FC1, 1);
 Mat point(3, 1, CV_32FC1, 1);
 tp_Translater_maxtri.convertTo(tp_Translater_maxtri, CV_32FC1);
 Mat Translater_inv = tp_Translater_maxtri.inv();
 for (int i = 0; i < output.rows; i++)
 {
 for (int j = 0; j < output.cols; j++)
 {
 point.at<float>(1) = i + min_gao;
 point.at<float>(0) = j + min_kuan;
 tp_point = Translater_inv * point;
 map1.at<float>(i, j) = tp_point.at<float>(0) / tp_point.at<float>(2);
 map2.at<float>(i, j) = tp_point.at<float>(1) / tp_point.at<float>(2);
 }
 }
 remap(input_image, output, map1, map2, CV_INTER_LINEAR);
}

/*------------------------------------------------------------------------------
Function: main_transform
 Description: Applies scaling, translation, rotation, affine and perspective transformations to the image and saves the resulting pictures in the current project directory
 ---------------------------------------------------------------------------
 Calls: resize, warpAffine, Size, Scalar, getRotationMatrix2D, namedWindow,
 toushibianhuan_gai_fangshebianhuan, toushibianhuan, imshow, imwrite, waitKey, printf, warpPerspective
 Called By: main
 Table Accessed: NONE
 Table Updated: NONE
 --------------------------------------------------------------------------------
 Input:
  Parameter 1: rotation angle in degrees (not radians), float
  Parameter 2: pixels shifted to the right, int
  Parameter 3: pixels shifted downward, int
  Parameter 4: path of the image to read, const char*
  Parameter 5: scaling ratio in the x direction, float
  Parameter 6: scaling ratio in the y direction, float
 Output: the images after affine and perspective transformation are saved in the current project directory; all parameters have been set, and the correction effect is not ideal
 Return: no return value
 Others: NONE
---------------------------------------------------------------------------*/
void main_transform(float angle, int right_translate, int down_translate,
 const char* road_read_image, float x_tobe, float y_tobe)
{
 Point2f input_image1[3] = { Point2f(50,50),Point2f(200,50),Point2f(50,200) };
 Point2f dst1[3] = { Point2f(0,100),Point2f(200,50),Point2f(180,300) };
 Point2f input_image[4] = { Point2f(100,50),Point2f(100,550),Point2f(350,50),Point2f(350,550) };
 Point2f dst[4] = { Point2f(100,50),Point2f(340,550),Point2f(350,80),Point2f(495,550) };
 Mat kernel2 = getPerspectiveTransform(input_image, dst);
 Mat kernel = getAffineTransform(input_image1, dst1);
 Mat one_vector(1, 3, CV_64FC1, Scalar(0));
 Mat Temp_kernel;
 one_vector.at<double>(0, 2) = 1.;
 comMatC(kernel, one_vector, Temp_kernel);
 float all_tobe = x_tobe / 2 + y_tobe / 2;
 Mat old_image = imread(road_read_image);
 Mat new_min_image;
 Mat new_translation_image;
 Mat rotate_image;
 Mat translater(2, 3, CV_32F, Scalar(0));
 Mat rotater;
 Mat fangshe_image;
 Mat toushi_image;
 vector<int> compression_params;
 resize(old_image, new_min_image, Size(), x_tobe, y_tobe, INTER_CUBIC);
 translater.at<float>(0, 0) = 1;
 translater.at<float>(1, 1) = 1;
 translater.at<float>(0, 2) = right_translate;
 translater.at<float>(1, 2) = down_translate;
 warpAffine(new_min_image, new_translation_image, translater,
 Size(new_min_image.cols*1.5, new_min_image.rows*1.5));
 Point rotate_center = Point(new_translation_image.cols / 3, new_translation_image.rows / 2);
 rotater = getRotationMatrix2D(rotate_center, angle, all_tobe);
 warpAffine(new_translation_image, rotate_image, rotater, Size(),
 INTER_CUBIC | CV_WARP_FILL_OUTLIERS, BORDER_CONSTANT, Scalar(0));
 //warpAffine(new_translation_image, fangshe_image, kernel, Size(new_translation_image.cols*1.5, new_translation_image.rows*1.5));
 // This is OpenCV's built-in affine transformation
 compression_params.push_back(IMWRITE_PNG_COMPRESSION);
 toushibianhuan_gai_fangshebianhuan(new_translation_image, fangshe_image, kernel);
 toushibianhuan(fangshe_image, toushi_image, kernel2);

 //warpPerspective(fangshe_image, toushi_image, kernel2, Size(new_translation_image.cols, new_translation_image.rows));
 // This is OpenCV's built-in perspective transformation
 compression_params.push_back(9);
 namedWindow("new_min_image");
 imshow("new_min_image", new_min_image);
 imwrite("task2_1 Zoom in .png", old_image, compression_params);
 namedWindow("new_translation_image");
 imshow("new_translation_image", new_translation_image);
 bool flags = imwrite("task2_1_translation.png", new_translation_image, compression_params);
 namedWindow("rotate_image");
 imshow("rotate_image", rotate_image);
 imwrite("task2_1 rotating .png", rotate_image, compression_params);
 namedWindow("fangshe_image");
 imshow("fangshe_image", fangshe_image);
 imwrite("task2_1 affine .png", fangshe_image, compression_params);
 namedWindow("toushi_image");
 imshow("toushi_image", toushi_image);
 imwrite("task2_1 perspective .png", toushi_image, compression_params);
 printf("%d", flags);

}

/*----------------------------------------------------------------------------
Function: getCrossPoint
 Description: Finds the intersection point of two lines
 -----------------------------------------------------------------------------
 Calls: NONE
 Called By: input_solve
 Table Accessed: NONE
 Table Updated: NONE
 -----------------------------------------------------------------------------
 Input:
  Parameter 1: line A, represented by two points as a Vec4i
  Parameter 2: line B, represented by two points as a Vec4i
 Output: the intersection point, Point2f
 Return: the intersection point, Point2f
 Others: NONE
--------------------------------------------------------------------------------*/
Point2f getCrossPoint(Vec4i LineA, Vec4i LineB)
{
 double ka, kb;
 // slope of LineA
 ka = (double)(LineA[3] - LineA[1]) / (double)(LineA[2] - LineA[0]); 
 // slope of LineB
 kb = (double)(LineB[3] - LineB[1]) / (double)(LineB[2] - LineB[0]); 


 Point2f crossPoint;
 crossPoint.x = (ka*LineA[0] - LineA[1] - kb * LineB[0] + LineB[1]) / (ka - kb);
 crossPoint.y = (ka*kb*(LineA[0] - LineB[0]) + ka * LineB[1] - kb * LineA[1]) / (ka - kb);
 return crossPoint;
}
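/*--------------------------------------------------------------------------------
 Note on the formulas above: with slopes ka and kb, line A is y = ka*(x - xA) + yA
 and line B is y = kb*(x - xB) + yB, where (xA, yA) = (LineA[0], LineA[1]) and
 (xB, yB) = (LineB[0], LineB[1]). Setting the two equal gives
   x = (ka*xA - yA - kb*xB + yB) / (ka - kb)
   y = (ka*kb*(xA - xB) + ka*yB - kb*yA) / (ka - kb)
 which is exactly what crossPoint.x and crossPoint.y compute. This assumes neither
 line is vertical and ka != kb; otherwise the result is not valid.
--------------------------------------------------------------------------------*/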

/*----------------------------------------------------------------------
Function: input_solve
 Description: Opens the image, filters it, extracts and draws the edges, and applies a perspective transformation to correct the document.
  Note that the parameters of the processing steps in this function have already been tuned.
 ------------------------------------------------------------------------
 Calls: imread, resize, morphologyEx, blur, Canny, HoughLines, warpPerspective
 Called By: main
 Table Accessed: NONE
 Table Updated: NONE
 -------------------------------------------------------------------------
 Input:
  Parameter 1: the path of the input image
 Output: the corrected document image
 Return: NONE
 Others: the corrected image is saved in the current directory:
 "C:/Users/liujinyuan/source/repos/homework 2_2/homework 2_2/task2_2_corrected.png"
---------------------------------------------------------------*/
void input_solve(const char* image_road)
{
 // Define the save image parameter vector 
 vector<int> compression_params;
 compression_params.push_back(IMWRITE_PNG_COMPRESSION);
 compression_params.push_back(9);
 // Get the kernel of closed operational filtering 
 Mat element = getStructuringElement(MORPH_RECT, Size(5, 5));
 Mat new_min_image;
 Mat last_kernel;
 // Read the image as grayscale
 Mat old_image = imread(image_road,0);
 vector<Vec2f>lines;
 vector<Vec2f>coners;
 vector<Vec4i>lines_2pt(10);
 Point pt1, pt2,pt3,pt4,pt5,pt6;
 Mat last_image;
 Mat new_min_image2;
 resize(old_image, new_min_image, Size(), 0.5, 0.5, INTER_CUBIC);
 resize(old_image, new_min_image2, Size(), 0.5, 0.5, INTER_CUBIC);
 // Closed operational filtering 
 morphologyEx(new_min_image, new_min_image, MORPH_CLOSE, element);
 blur(new_min_image,new_min_image,Size(10,10));
 Canny(new_min_image, new_min_image,8.9,9,3 );
 HoughLines(new_min_image,lines,1,CV_PI/180,158,0,0);
 // This loop draws the lines obtained from the Hough transform; for brevity, the code that creates a window to display the drawing has been removed
 for (size_t i = 0; i < lines.size(); i++)
 {
 if (i!=lines.size()-2)
 {
 float zhongxinjuli = lines[i][0], theta = lines[i][1];
 double cos_theta = cos(theta), sin_theta = sin(theta);
 double x0 = zhongxinjuli * cos_theta, y0 = zhongxinjuli * sin_theta;
 pt1.x = cvRound(x0 - 1000 * sin_theta);
 pt1.y = cvRound(y0 + 1000 * cos_theta);
 pt2.x = cvRound(x0 + 1000 * sin_theta);
 pt2.y = cvRound(y0 - 1000 * cos_theta);
 line(new_min_image, pt1, pt2, Scalar(255, 255, 255), 1, LINE_AA);
 }
 }
 // Convert the Hough lines into two-point form so that their intersections can be computed
 for (size_t flag = 0, flag2 = 0; flag < lines.size(); flag++)
 {
 if (flag != lines.size() - 2)
 {
 float zx_juli = lines[flag][0], theta2 = lines[flag][1];
 double cos_theta2 = cos(theta2), sin_theta2 = sin(theta2);
 double x1 = zx_juli * cos_theta2, y1 = zx_juli * sin_theta2;
 lines_2pt[flag2][0]= cvRound(x1 - 1000 * sin_theta2);
 lines_2pt[flag2][1] = cvRound(y1 +1000 * cos_theta2);
 lines_2pt[flag2][2] = cvRound(x1 + 1000 * sin_theta2);
 lines_2pt[flag2][3] = cvRound(y1 - 1000 * cos_theta2);
 flag2++;
 }
 }
 for(int flag3=0;flag3<4;flag3++)
 {
 cout << "line_vector=" <<lines_2pt [flag3] << " ; " << endl;
 }
 pt3=getCrossPoint(lines_2pt[0],lines_2pt[1]);
 cout << "pt3=" << pt3 << " ; " << endl;
 pt4 = getCrossPoint(lines_2pt[1], lines_2pt[2]);
 cout << "pt4=" << pt4 << " ; " << endl;
 pt5 = getCrossPoint(lines_2pt[2], lines_2pt[3]);
 cout << "pt5=" << pt5<< " ; " << endl;
 pt6= getCrossPoint(lines_2pt[3], lines_2pt[0]);
 cout << "pt6=" << pt6 << " ; " << endl;
 // Perspective transformation 
 Point2f point_set[4] = { pt3,pt6,pt4,pt5 };
 Point2f point_set_transform[4] = { Point2f(50,50),Point2f(500,50) ,Point2f(50,600),Point2f(500,600) };
 last_kernel = getPerspectiveTransform(point_set,point_set_transform);
 warpPerspective(new_min_image2, last_image, last_kernel, Size(old_image.cols, old_image.rows));
 namedWindow("new_min_image");
 // Draw the final renderings 
 imshow("new_min_image", last_image);
 imwrite("task2_2 correct .png", last_image, compression_params);
 waitKey(0);
}

/*-------------------------------------------------------------------------------------------------------------------------------------
Copyright (C),2018---, HUST Liu
 File name: Document correction project .cpp
 Author: Liu Junyuan  Version: 1  Date: 2018.10.3
 Description:
   Part 1
  According to tasks (1) and (2) of assignment (2), the following work was done:
  (1) scaling, translation and rotation of the image via affine transformation
  (2) calling library functions for affine and perspective transformation
  (3) implementing the perspective and affine transformation functions by hand
------------    ----------    -----------   --------------------   ---- 
 Part 2
  According to tasks (3) and (4) of assignment (2), the following work was done:
  (1) the grayscale image is read in, and the edge lines, i.e. the paper position (its 4 vertices),
   are extracted by filtering, edge extraction and the Hough transform
  (2) a perspective transformation is used to correct the document
---------  *  ----------  *  --------------- * --------------- * ---------
  Detailed workflow:
 Part 1
  Call the OpenCV built-in functions and write the main_transform function for scaling, translation, rotation and affine transformation
  (actually I later commented out the OpenCV affine and perspective calls and do not use its built-in functions).
  Implement the perspective and affine transformation functions toushibianhuan and toushibianhuan_gai_fangshebianhuan and call them in
  main_transform.
  Note: main calls the main_transform function from the header file to perform scaling, translation, rotation, affine and perspective transformation!!
------------    ----------   ------------------    -----------------  ---------
  Workflow:
 Part 2
  In input_solve, imread reads the grayscale image; blur and morphologyEx filter it; Canny extracts the
  edges; HoughLines obtains the edge lines; getCrossPoint computes the line intersections;
  getPerspectiveTransform obtains the transformation matrix; and warpPerspective performs the perspective transformation.
  Note: the processing is implemented in the input_solve function; this cpp file's main calls input_solve from the header file!!
------------------------------------------------------------------------------------------
 Others:  Image input path: homework 2_2/homework 2_2/task2.png
  Output image save path: project folder homework 2_2/homework 2_2
  Note: be sure to change the read path when running in another environment!!
 Function List: main, main_transform, input_solve
-----------------------------------------------------------------------------------------------
 History:
 as follows
 ----- -------------   ----------------   ------------------  --------------  -----
 1. 2018.10.3
 2. by Liu Junyuan
 3. description: moved comMatC, toushibianhuan and toushibianhuan_gai_fangshebianhuan from project homework 2_1 into the header file
 image_solve.h
 ----- -------------   ----------------   ------------------  --------------  -----
 1. 2018.10.4
 2. by Liu Junyuan
 3. description: in project homework 2_2, main_transform is called from main, and the waitKey(0) was removed from main_transform

 ----- -------------   ----------------   ------------------  --------------  -----
 --------------------------------------------------------------------------------*/


 /*-----------------------------------------------------------------
  Standard OpenCV includes:
  header files and namespaces
 ------------------------------------------------------------------*/

#include <opencv2/opencv.hpp>
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include " The header .h"
using namespace std;
using namespace cv;
int main()
{
 main_transform(90, 0, 100, "task2.png", 0.5, 0.5);
 input_solve("task2.png");
 waitKey(0);
}
