Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an image perspective correction method and device based on camera view angle transformation, which can accurately and rapidly perform perspective correction on solar cell images, thereby meeting the positioning and detection requirements of the production process.
In order to achieve the above purpose, the invention is realized by adopting the following technical scheme:
In a first aspect, the present invention provides an image perspective correction method based on camera perspective transformation, including:
calibrating a camera to obtain camera extrinsic parameters, camera intrinsic parameters and distortion parameters;
acquiring an image with the calibrated camera, and performing distortion correction based on the distortion parameters;
constructing a perspective mapping matrix based on the camera extrinsic and intrinsic parameters;
and performing perspective correction on the distortion-corrected image through the perspective mapping matrix.
Optionally, calibrating the camera includes:
capturing with the camera at least 3 images of a checkerboard calibration plate placed at different preset positions, the plate positions together covering the camera's entire field of view;
extracting the sub-pixel corner points of each image and their corresponding image coordinates;
initializing the world coordinates of the sub-pixel corner points, and computing the camera extrinsic parameters, intrinsic parameters and distortion parameters with the Zhang Zhengyou calibration algorithm from the correspondence between the corner points' world coordinates and image coordinates.
Optionally, the distortion correction based on the distortion parameters uses the standard radial and tangential distortion model,
where (u, v) and (u′, v′) are the image point coordinates before and after distortion correction, [k1, k2, k3] are the radial distortion parameters with elements k1, k2, k3, and [p1, p2] are the tangential distortion parameters with elements p1, p2;
Optionally, constructing the perspective mapping matrix based on the camera extrinsic and intrinsic parameters includes:
solving the object distance of a spatial point relative to the camera from the camera extrinsic and intrinsic parameters, to obtain the Z-axis projection value of the spatial point in the camera coordinate system;
constructing the transformation relations from the image coordinate system to the camera coordinate system and from the camera coordinate system to the image coordinate system from the Z-axis projection value and the camera intrinsic parameters;
obtaining, from the camera extrinsic parameters, the axis rotation angles of the conversion from the world coordinate system to the camera coordinate system;
constructing the camera view angle transformation relation from the camera coordinate system to the target camera coordinate system from the axis rotation angles and the camera extrinsic parameters;
and constructing the perspective mapping matrix from the image-to-camera transformation relation, the camera view angle transformation relation, and the camera-to-image transformation relation.
Optionally, acquiring the Z-axis projection value of the spatial point relative to the camera coordinate system includes:
obtaining the conversion relation from the world coordinate system to the camera coordinate system from the camera extrinsic parameters:
[Xc, Yc, Zc]^T = R·[Xw, Yw, Zw]^T + T
where R is the extrinsic rotation matrix with elements r11, r12, r13, r21, r22, r23, r31, r32, r33, T = [t1, t2, t3]^T is the extrinsic translation vector with elements t1, t2, t3, and (Xw, Yw, Zw) and (Xc, Yc, Zc) are the coordinates of the spatial point in the world coordinate system and the camera coordinate system, respectively;
obtaining the conversion relation from the camera coordinate system to the image coordinate system from the camera intrinsic parameters:
Zc·[u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]·[Xc, Yc, Zc]^T
where (u, v) are the coordinates of the spatial point in the image coordinate system and [fx, fy, u0, v0] are the camera intrinsic parameters with elements fx, fy, u0, v0;
letting Zw = 0 (the calibration plane), the two relations above give:
Xc = r11·Xw + r12·Yw + t1, Yc = r21·Xw + r22·Yw + t2, Zc = r31·Xw + r32·Yw + t3;
and eliminating Xc, Yc, Xw and Yw yields the Z-axis projection value Zc as a function of the pixel coordinates (u, v).
Optionally, the transformation relation from the image coordinate system to the camera coordinate system is:
Xc = (u − u0)·Zc/fx, Yc = (v − v0)·Zc/fy
and the transformation relation from the camera coordinate system to the image coordinate system is:
u = fx·Xc/Zc + u0, v = fy·Yc/Zc + v0
where [fx, fy, u0, v0] are the camera intrinsic parameters, Zc is the Z-axis projection value of the spatial point relative to the camera coordinate system, and (Xc, Yc, Zc) and (u, v) are the coordinates of the point in the camera coordinate system and the image coordinate system, respectively.
Optionally, obtaining the axis rotation angles of the conversion from the world coordinate system to the camera coordinate system from the camera extrinsic parameters includes:
letting Rx, Ry, Rz be the rotation components about the three coordinate axes X, Y, Z, through the angles α, β, θ respectively, of the conversion from the world coordinate system to the camera coordinate system;
the rotation matrix R of the camera extrinsic parameters can then be expressed as:
R = Rx·Ry·Rz
expanding this product and equating corresponding elements with R,
then solving the resulting equations, yields the axis rotation angles α, β and θ.
Optionally, constructing the camera view angle transformation relation from the camera coordinate system to the target camera coordinate system from the axis rotation angles and the camera extrinsic parameters includes:
acquiring the conversion relation from the original view angle to the vertical view angle in the camera coordinate system based on the axis rotation angles,
where α and β are the axis rotation angles, and (Xc, Yc, Zc) and (X′c, Y′c, Z′c) are the coordinates of a point under the original view angle and the vertical view angle in the camera coordinate system, respectively;
deriving the transformation relation from the camera coordinate system to the target camera coordinate system based on the axis rotation angles and the translation vector of the camera extrinsic parameters,
where t3 is a parameter of the extrinsic translation vector T = [t1, t2, t3] and Z3 is the distance along the optical axis from the camera optical center to the object plane in the camera coordinate system;
and obtaining the camera view angle transformation relation from the above two transformation relations, written in homogeneous form,
where (X″c, Y″c, Z″c) are the coordinates of the point in the target camera coordinate system after the camera view angle transformation.
Optionally, in the perspective mapping matrix, [fx, fy, u0, v0] are the camera intrinsic parameters, Zc is the Z-axis projection value of the spatial point relative to the camera coordinate system, Z3 is the distance along the optical axis from the camera optical center to the object plane in the camera coordinate system, α and β are the axis rotation angles, and [T1, T2, T3] is the translation vector of the conversion from the camera coordinate system to the target camera coordinate system.
In a second aspect, the present invention provides an image perspective correction device based on camera view angle transformation, comprising a processor and a storage medium;
the storage medium is used for storing instructions;
and the processor operates according to the instructions to perform the steps of the method described above.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides an image perspective correction method and device based on camera view angle transformation, which derive the perspective mapping relation from the image under the original camera view angle to the image under a camera view angle whose optical axis is perpendicular to the object plane, and thereby realize perspective correction of the image by transforming the camera view angle. Because the analysis and calculation start from the imaging model of the camera lens, the perspective correction is more accurate, the amount of computation is reduced, and the correction speed is improved; the method can therefore be effectively applied to the positioning and detection of solar cells in photovoltaic module production equipment, meeting the requirements for high accuracy and high speed in solar cell image processing.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only intended to illustrate the technical solutions of the present invention more clearly, and are not intended to limit its scope.
Embodiment one:
As shown in fig. 1, an embodiment of the present invention provides an image perspective correction method based on camera perspective transformation, including the following steps:
1. Calibrating the camera to obtain the camera extrinsic parameters, camera intrinsic parameters and distortion parameters.
The camera is calibrated with the mature "Zhang Zhengyou calibration method"; the main procedure is as follows:
1.1, capturing with the camera at least 3 images of a checkerboard calibration plate placed at different preset positions, the plate positions together covering the camera's entire field of view;
1.2, extracting the sub-pixel corner points of each image and their corresponding image coordinates;
1.3, initializing the world coordinates of the sub-pixel corner points (with the Z-axis coordinates set to 0), and computing the camera extrinsic parameters, intrinsic parameters and distortion parameters with the Zhang Zhengyou calibration algorithm from the correspondence between the corner points' world coordinates and image coordinates. Since the Zhang Zhengyou calibration algorithm is well established, its computation is not detailed further here.
The camera extrinsic parameters comprise a rotation matrix R and a translation vector T,
where r11, r12, r13, r21, r22, r23, r31, r32, r33 are the elements of the extrinsic rotation matrix R and t1, t2, t3 are the elements of the extrinsic translation vector T;
the camera intrinsic parameters are [fx, fy, u0, v0], with elements fx, fy, u0, v0;
the distortion parameters comprise the radial distortion parameters [k1, k2, k3], with elements k1, k2, k3, and the tangential distortion parameters [p1, p2], with elements p1, p2.
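To make the layout of these calibration outputs concrete, the sketch below assembles the intrinsic matrix from [fx, fy, u0, v0] and projects a world point through the extrinsic and intrinsic parameters; all numeric values are hypothetical placeholders, not real calibration results.

```python
import numpy as np

# Hypothetical calibration results, for illustration only (the real values
# come from the Zhang Zhengyou calibration of step 1).
fx, fy, u0, v0 = 1200.0, 1200.0, 640.0, 480.0
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])          # intrinsic matrix built from [fx, fy, u0, v0]
R = np.eye(3)                             # extrinsic rotation matrix
T = np.array([0.0, 0.0, 500.0])           # extrinsic translation vector [t1, t2, t3]
dist = {"radial": (0.0, 0.0, 0.0),        # [k1, k2, k3]
        "tangential": (0.0, 0.0)}         # [p1, p2]

def project(Pw):
    """World point (Xw, Yw, Zw) -> pixel (u, v) via the pinhole model."""
    Pc = R @ np.asarray(Pw, float) + T    # world -> camera coordinates
    uvw = K @ Pc                          # camera -> homogeneous image coordinates
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```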
2. Acquiring an image with the calibrated camera and performing distortion correction based on the distortion parameters.
The correction uses the standard radial and tangential distortion model,
where (u, v) and (u′, v′) are the image point coordinates before and after the distortion correction.
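The model referenced here is the standard radial/tangential (Brown-Conrady) distortion model; a minimal sketch follows, written in normalized image coordinates (an assumption, since the pixel coordinates (u, v) must first have the intrinsics removed), with the inverse mapping implemented by the usual fixed-point iteration.

```python
import numpy as np

def distort_normalized(x, y, k1, k2, k3, p1, p2):
    """Apply the radial ([k1, k2, k3]) and tangential ([p1, p2]) distortion
    model to a point (x, y) in normalized image coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort_normalized(xd, yd, k1, k2, k3, p1, p2, iters=20):
    """Invert the distortion by fixed-point iteration; this is the direction
    used by the correction, mapping distorted to undistorted coordinates."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y
```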
3. Constructing the perspective mapping matrix based on the camera extrinsic and intrinsic parameters, as shown in fig. 2, specifically including:
3.1, solving the object distance of a spatial point relative to the camera from the camera extrinsic and intrinsic parameters to obtain the Z-axis projection value of the spatial point relative to the camera coordinate system, specifically:
3.1.1, obtaining the conversion relation from the world coordinate system to the camera coordinate system from the camera extrinsic parameters:
[Xc, Yc, Zc]^T = R·[Xw, Yw, Zw]^T + T
where (Xw, Yw, Zw) and (Xc, Yc, Zc) are the coordinates of the spatial point in the world coordinate system and the camera coordinate system, respectively;
3.1.2, obtaining the conversion relation from the camera coordinate system to the image coordinate system from the camera intrinsic parameters:
Zc·[u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]·[Xc, Yc, Zc]^T
where (u, v) are the coordinates of the spatial point in the image coordinate system;
3.1.3, letting Zw = 0 (the calibration plane), the above relations give:
Xc = r11·Xw + r12·Yw + t1, Yc = r21·Xw + r22·Yw + t2, Zc = r31·Xw + r32·Yw + t3;
3.1.4, eliminating Xc, Yc, Xw and Yw then yields the Z-axis projection value Zc as a function of the pixel coordinates (u, v).
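Steps 3.1.1 to 3.1.4 amount to solving a small linear system; the sketch below (function name hypothetical, and using the Zw = 0 assumption stated above) recovers Zc for a given pixel.

```python
import numpy as np

def z_projection(u, v, K, R, T):
    """Solve for the Z-axis projection value Zc of the spatial point imaged at
    pixel (u, v), assuming the point lies on the Zw = 0 calibration plane.

    Unknowns are (Xw, Yw, Zc) in the linear system obtained from
      Xc = (u - u0) * Zc / fx,  Yc = (v - v0) * Zc / fy
      [Xc, Yc, Zc]^T = R @ [Xw, Yw, 0]^T + T
    """
    fx, fy, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    a = (u - u0) / fx
    b = (v - v0) / fy
    # Each row reads: r_i1*Xw + r_i2*Yw - c_i*Zc = -t_i, with c = (a, b, 1)
    A = np.array([[R[0, 0], R[0, 1], -a],
                  [R[1, 0], R[1, 1], -b],
                  [R[2, 0], R[2, 1], -1.0]])
    Xw, Yw, Zc = np.linalg.solve(A, -np.asarray(T, float))
    return Zc
```

With the optical axis perpendicular to the plane (R = identity), Zc simply equals the plane distance t3 for every pixel, which is a quick sanity check on the elimination.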
3.2, constructing the transformation relations between the image coordinate system and the camera coordinate system from the Z-axis projection value and the camera intrinsic parameters.
The transformation relation from the image coordinate system to the camera coordinate system is:
Xc = (u − u0)·Zc/fx, Yc = (v − v0)·Zc/fy
and the transformation relation from the camera coordinate system to the image coordinate system is:
u = fx·Xc/Zc + u0, v = fy·Yc/Zc + v0.
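These two relations are mutual inverses for a fixed Zc; a brief sketch with hypothetical intrinsic values, showing the round trip:

```python
import numpy as np

fx, fy, u0, v0 = 1000.0, 1000.0, 512.0, 512.0   # hypothetical intrinsics

def image_to_camera(u, v, Zc):
    """Pixel (u, v) plus the Z-axis projection value -> camera coordinates."""
    return (u - u0) * Zc / fx, (v - v0) * Zc / fy, Zc

def camera_to_image(Xc, Yc, Zc):
    """Camera coordinates -> pixel coordinates (u, v)."""
    return fx * Xc / Zc + u0, fy * Yc / Zc + v0
```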
3.3, obtaining the axis rotation angles of the conversion from the world coordinate system to the camera coordinate system from the camera extrinsic parameters, specifically:
3.3.1, letting Rx, Ry, Rz be the rotation components about the three coordinate axes X, Y, Z, through the angles α, β, θ respectively, of the conversion from the world coordinate system to the camera coordinate system;
3.3.2, the rotation matrix R of the camera extrinsic parameters can then be expressed as:
R = Rx·Ry·Rz
3.3.3, expanding this product and equating corresponding elements with R;
3.3.4, solving the resulting equations to obtain the axis rotation angles α, β and θ.
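A sketch of this decomposition, assuming the conventional right-handed forms of Rx, Ry, Rz and |β| < 90° (the patent's exact sign convention for these matrices is not reproduced here, so the closed forms below follow from the stated R = Rx·Ry·Rz order under that assumption):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def axis_rotation_angles(R):
    """Recover (alpha, beta, theta) from R = Rx(alpha) @ Ry(beta) @ Rz(theta).

    Expanding the product gives r13 = sin(beta), r23 = -sin(alpha)cos(beta),
    r33 = cos(alpha)cos(beta), r12 = -cos(beta)sin(theta), r11 = cos(beta)cos(theta).
    """
    beta = np.arcsin(R[0, 2])
    alpha = np.arctan2(-R[1, 2], R[2, 2])
    theta = np.arctan2(-R[0, 1], R[0, 0])
    return alpha, beta, theta
```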
3.4, constructing the camera view angle transformation relation from the camera coordinate system to the target camera coordinate system from the axis rotation angles and the camera extrinsic parameters. This is the key step: to realize perspective correction, the original image must be converted into a front view. As shown in fig. 3, Oc-XcYcZc is the (original) camera coordinate system; it is rotated into O′c-X′cY′cZ′c so that the camera optical axis (Z axis) is perpendicular to the object plane, and the resulting image is the front view. Meanwhile, to keep the intersection point of the optical axis and the object plane unchanged, the camera coordinate system must also be translated into O″c-X″cY″cZ″c.
The Z axis of the world coordinate system was set perpendicular to the object plane during calibration, so rotating the original camera coordinate system by α in the opposite direction about the X axis and then by β in the opposite direction about the Y axis makes the camera optical axis parallel to the world Z axis and hence perpendicular to the object plane. Since the camera coordinate system and the world coordinate system were set up with opposite handedness, the Y and Z axes of the rotated camera coordinate system take the opposite directions. This yields the rotation transformation of the camera coordinate system, specifically:
3.4.1, acquiring the conversion relation from the original view angle to the vertical view angle in the camera coordinate system based on the axis rotation angles,
where α and β are the axis rotation angles, and (Xc, Yc, Zc) and (X′c, Y′c, Z′c) are the coordinates of a point under the original view angle and the vertical view angle in the camera coordinate system, respectively;
3.4.2, deriving the transformation relation from the camera coordinate system to the target camera coordinate system based on the axis rotation angles and the translation vector of the camera extrinsic parameters,
where Z3 is the distance along the optical axis from the camera optical center to the object plane in the camera coordinate system;
3.4.3, obtaining the camera view angle transformation relation from the above two transformation relations;
3.4.4, writing it in homogeneous form,
where (X″c, Y″c, Z″c) are the coordinates of the point in the target camera coordinate system after the camera view angle transformation.
3.5, constructing the perspective mapping matrix from the image-to-camera transformation relation, the camera view angle transformation relation, and the camera-to-image transformation relation: the image coordinates are first converted into coordinates in the original camera coordinate system, then into coordinates in the view-angle-transformed camera coordinate system, and finally back into two-dimensional pixel coordinates, which completes the perspective correction.
The perspective mapping matrix is obtained by composing these three transformation relations and rearranging.
4. Performing perspective correction on the distortion-corrected image through the perspective mapping matrix,
where (u′, v′) are the distortion-corrected image point coordinates and (u″, v″) are the perspective-corrected image point coordinates.
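For a planar object, the end-to-end pixel-to-pixel mapping can be collapsed into a 3×3 homography applied with a homogeneous division; the sketch below composes it as K·M·K⁻¹, where M is a hypothetical placeholder (identity here) for the camera view angle transformation of step 3.4, and the intrinsic values are likewise placeholders.

```python
import numpy as np

fx, fy, u0, v0 = 1000.0, 1000.0, 512.0, 512.0
K = np.array([[fx, 0.0, u0], [0.0, fy, v0], [0.0, 0.0, 1.0]])

# Placeholder view angle transform in camera coordinates; in the method this
# is built from the axis rotation angles and the translation of step 3.4.
M = np.eye(3)

# perspective mapping matrix: image -> camera -> rotated camera -> image
H = K @ M @ np.linalg.inv(K)

def warp_point(u, v, H):
    """Map a distortion-corrected pixel (u', v') to the perspective-corrected
    pixel (u'', v'') by homogeneous multiplication and division."""
    w = H @ np.array([u, v, 1.0])
    return w[0] / w[2], w[1] / w[2]
```

With M equal to the identity, the composed matrix reduces to the identity mapping, which is the expected degenerate case of an already-frontal view.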
To verify the method, an industrial camera, lens and light source were selected and fixedly mounted to build an imaging platform, and 300 solar cell images were captured. One cell image shot by the camera is shown in fig. 4, and the image obtained after perspective correction by the present method is shown in fig. 5. For comparison, the 300 distortion-corrected cell images were also perspective-corrected with the traditional control point transformation method; the same cell image corrected by the traditional method is shown in fig. 6. The traditional control point method first applies Hough line detection to the cell image to extract the four edges of the cell contour and computes the four corner intersection coordinates, then computes the pixel coordinates of the four corners of a standard rectangle according to the camera precision, computes a homography matrix from the four pairs of control points, and finally applies a perspective transformation to the original image. Comparing figs. 4, 5 and 6 shows that the correction method of the invention gives the better result.
Because a solar cell is a standard rectangle, the four edges of the corrected cell contour are detected and the deviation of the average angle between adjacent edges from 90 degrees is computed; this deviation quantifies and compares the correction effect of the present method and the traditional method, with results shown in fig. 7. The angle test shows that the present method corrects perspective better: the average corner angle error of the cell contour is 1.6692 degrees for the traditional method versus 0.4973 degrees for the present perspective correction method, which also fluctuates less and runs more stably.
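The angle metric used in this comparison can be sketched as follows, assuming the four contour corners have already been extracted (a hypothetical helper, not the exact implementation used in the test):

```python
import numpy as np

def mean_corner_angle_error(corners):
    """Given the 4 contour corners of a corrected cell, in order, return the
    mean absolute deviation (in degrees) of the corner angles from 90."""
    corners = np.asarray(corners, float)
    errors = []
    for i in range(4):
        prev_pt, pt, next_pt = corners[i - 1], corners[i], corners[(i + 1) % 4]
        v1 = prev_pt - pt                      # edge toward the previous corner
        v2 = next_pt - pt                      # edge toward the next corner
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        errors.append(abs(ang - 90.0))
    return float(np.mean(errors))
```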
In addition, the processing run times of the two perspective correction methods were compared, with results shown in fig. 8. The traditional approach runs in about 145 milliseconds, while the perspective correction approach herein takes about 25 milliseconds. The traditional method is slower because it must first detect image edges and then perform Hough line detection; the present method is about 120 milliseconds faster, which can greatly shorten the detection time of a solar cell.
The invention derives the perspective mapping relation from the image under the original camera view angle to the image under a camera view angle whose optical axis is perpendicular to the object plane, and thereby realizes perspective correction of the solar cell image by transforming the camera view angle. Compared with the traditional method, the analysis and calculation based on the camera lens imaging model make the perspective correction more accurate, reduce the amount of computation and improve the correction speed, so the method can be effectively applied to the positioning and detection of solar cells in photovoltaic module production equipment, meeting the requirements for high accuracy and high speed in solar cell image processing.
In a second aspect, the present invention provides an image perspective correction device based on camera view angle transformation, comprising a processor and a storage medium;
The storage medium is used for storing instructions;
The processor is operative according to the instructions to perform steps according to the method described above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.