Disclosure of Invention
The invention discloses a wide-angle lens calibration and image correction method, which is characterized by comprising the following steps of:
s1: selecting an n multiplied by n checkerboard image as a calibration template, wherein n is an even number greater than 2;
s2: adjusting the positions of the wide-angle lens and the calibration template, and shooting to obtain a spherical image;
s3: applying formula (1), performing an RGB-to-YUV conversion on the spherical image to obtain the gray image Pg,
in the formula (1), R, G, B respectively represent the red, green and blue color values of the pixel points in the spherical image, Y represents the brightness value of the pixel points in the gray image Pg, and U, V represent the color difference values;
s4: processing the Pg to obtain an image HEp0 containing (n-1) x (n-1) boundary line corner points;
s5: if the center corner point is located at the center of the HEp0 and the remaining corner points are symmetrical up-down and left-right, proceed to the next step; otherwise, return to step S2;
s6: the correction coefficient R is calculated by the ellipse formula (2),
in the formula (2), the middle corner point of the top row of corner points of the HEp0 is selected as the upper central corner point, the common corner point of the n/2 row of corner points and the n/2 column of corner points of the HEp0 is selected as the central corner point, the central corner point is set as the coordinate origin, b is the distance between the upper central corner point and the central corner point, and x1, y1 are the coordinate values of the top-left corner point of the HEp0;
s7: setting the geometric center point of the image shot by the wide-angle lens as a coordinate origin, and using the correction coefficient R to perform correction operation on the image shot by the wide-angle lens according to the formula (3) to obtain a corrected image;
in the formula (3), u and v are plane coordinate values of each pixel point in the image shot by the wide-angle lens, x and y are plane coordinate values of the corrected image, z is obtained according to the formula (4),
in equation (4), H and L represent the number of horizontal pixels and the number of vertical pixels of the image captured by the wide-angle lens.
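The exact form of formula (1) is not reproduced in the text; the standard BT.601 luma weights are the usual choice for an RGB-to-YUV conversion of this kind, so a minimal sketch of step S3 under that assumption is:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB image (uint8) to the gray image Pg (the Y plane).

    The 0.299/0.587/0.114 weights are the BT.601 luma coefficients -- an
    assumption, since formula (1) itself is not reproduced in the text."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness value Y
    return np.clip(np.round(y), 0, 255).astype(np.uint8)
```

The U and V planes are not needed by the later steps, so only Y is kept.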
Further, step S4 includes the following specific steps:
s41: take one pixel point in the Pg as Pgk; take the Y values of Pgk and its 8 surrounding pixel points (9 points in total) to form a 3x3 matrix A, and carry out convolution operations according to equation (5) and equation (6) to obtain the approximate values Gx and Gy of the transverse and longitudinal brightness differences at Pgk,
Compare Gx and Gy with the threshold Vth1: if Gx<Vth1 and Gy>Vth1, take "1" as the output value corresponding to the pixel Pgk, marking it as a boundary point; otherwise take "0" as the output value, marking it as a non-boundary point; traverse all pixels in this way to obtain an image E1 containing the boundaries from white squares to black squares;
Likewise compare Gx and Gy with the threshold Vth1: if Gx>Vth1 and Gy<Vth1, take "1" as the output value corresponding to the pixel Pgk, marking it as a boundary point; otherwise take "0" as the output value, marking it as a non-boundary point; traverse all pixels in this way to obtain an image E2 containing the boundaries from black squares to white squares;
s42: traversing AND operation is carried out on the E1 and the E2 through a 3x3 full '1' matrix window, if the value of any one pixel point in the AND operation window is '1', the value of the currently traversed pixel point (namely the central pixel point of the window) is '1', otherwise, the value of the currently traversed pixel point is '0', and boundary expansion images Ep1 and Ep2 are obtained after traversal;
s43: the Ep1 and the Ep2 are subjected to AND operation to obtain an image HEp containing (n-1) x (n-1) corner point areas;
s44: and carrying out coordinate averaging operation on each corner point region of the HEp to obtain an image HEp0 containing (n-1) x (n-1) corner points.
Further, the calibration template selects a 4 × 4 checkerboard image.
Further, Vth1 is an average value of the full white gradation value and the full black gradation value.
Further, in S43, the specific determination process of the corner region is as follows: if the coordinate distance of the pixel point with the pixel value of '1' is smaller than the threshold value Vth2, the pixel points belong to the same corner point area, otherwise, the pixel points belong to different corner point areas.
Further, in the traversal operations of S41 and S42, when the two outermost rows or columns of pixel points at the image edge are processed, the values of the adjacent edge rows or columns are extended outward to fill the missing entries of the 3 × 3 matrix.
The wide-angle lens calibration and image correction method provided by the invention is computationally simple: the calibration parameter can be calculated directly on embedded equipment, the spherical radius serves as the correction coefficient, and the correction workload is low enough that the method can be implemented in hardware in an FPGA image acquisition system while meeting real-time processing requirements. Because no dedicated calibration rig needs to be built and no host-computer calculation is required, the calibration method is low in cost and simple to operate.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the present invention discloses a wide-angle lens calibration and image correction method, which is characterized by comprising the following steps:
s1: selecting an n multiplied by n checkerboard image as a calibration template, wherein n is an even number greater than 2;
s2: adjusting the positions of the wide-angle lens and the calibration template, and shooting to obtain a spherical image;
s3: applying formula (1), performing an RGB-to-YUV conversion on the spherical image to obtain the gray image Pg,
in the formula (1), R, G, B respectively represent the red, green and blue color values of the pixel points in the spherical image, Y represents the brightness value of the pixel points in the gray image Pg, and U, V represent the color difference values;
s4: processing the Pg to obtain an image HEp0 containing (n-1) x (n-1) boundary line corner points;
s5: if the center corner point is located at the center of the HEp0 and the remaining corner points are symmetrical up-down and left-right, proceed to the next step; otherwise, return to step S2;
S6: the correction coefficient R is calculated by the ellipse formula (2),
in the formula (2), the middle corner point of the top row of corner points of the HEp0 is selected as the upper central corner point, the common corner point of the n/2 row of corner points and the n/2 column of corner points of the HEp0 is selected as the central corner point, the central corner point is set as the coordinate origin, b is the distance between the upper central corner point and the central corner point, and x1, y1 are the coordinate values of the top-left corner point of the HEp0;
s7: setting the geometric center point of the image shot by the wide-angle lens as a coordinate origin, and using the correction coefficient R to perform correction operation on the image shot by the wide-angle lens according to the formula (4) to obtain a corrected image;
when n takes a value of 4, the calibration template, the spherical image and the corrected image are respectively shown in fig. 2, fig. 3 and fig. 4.
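Formula (2) itself is not reproduced in the text. One plausible reading, consistent with the quantities it names, is that the distorted top row of corners is modelled as an arc of an ellipse centred on the central corner, with semi-minor axis b and with the top-left corner (x1, y1) of HEp0 lying on it, so that solving the ellipse equation for the semi-major axis yields the correction coefficient R. A sketch under that assumption:

```python
import math

def correction_coefficient(b, x1, y1):
    """Hypothetical reading of ellipse formula (2): assume the top-left
    corner (x1, y1) lies on the ellipse x^2/R^2 + y^2/b^2 = 1, where b is
    the distance from the central corner to the upper central corner, and
    solve for the semi-major axis R.  This is a sketch of one consistent
    interpretation, not the patent's printed formula."""
    if abs(y1) >= b:
        raise ValueError("top-left corner must satisfy |y1| < b")
    return abs(x1) / math.sqrt(1.0 - (y1 / b) ** 2)
```

For example, if the corners happened to lie on an ellipse with R = 100 and b = 50, a top-left corner at (80, 30) would recover R = 100.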
The specific derivation calculation process is as follows:
As shown in fig. 5, the spherical projection formula used for the correction coordinates is x² + y² + z² = R². The ray from a point in space A(x, y, z) to the origin O intersects the sphere surface at A′, and A′ is projected onto the xoy plane at A″(u, v, 0). Then A″(u, v, 0) is the spherical projection coordinate of A(x, y, z). From this spherical projection model, the correspondence between the space coordinate system (x, y, z) and the imaging coordinate system (u, v, o) can be obtained:
equation (4) for converting from spherical imaging coordinates (u, v) to corrected image coordinates (x, y) is derived from the above equation.
In the formula (4), u and v are the plane coordinate values of each pixel point in the image captured by the wide-angle lens, and x and y are the plane coordinate values of the corrected image; to make the size of the spherical projection consistent with the size of the corrected image, the z value in formula (4) is obtained according to formula (5),
in the formula (5), H is the number of image horizontal pixels, and L is the number of image vertical pixels.
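The printed forms of formulas (4) and (5) are not reproduced in the text, so the following sketch uses the projection model above and one choice of z that maps the image corner (H/2, L/2) to itself, matching the stated goal of keeping the two image sizes equal; both the z expression and the coordinate convention (origin at the image centre) are assumptions:

```python
import math

def correct_point(u, v, R, H, L):
    """Map a pixel (u, v) of the wide-angle (spherical) image, with the
    geometric centre as origin, to corrected coordinates (x, y).

    z below is an assumed concrete form of formula (5): it is chosen so
    that the corner pixel (H/2, L/2) is a fixed point of the mapping,
    keeping the corrected image the same size as the captured one."""
    z = math.sqrt(R**2 - (H**2 + L**2) / 4.0)   # assumed formula (5)
    d = math.sqrt(R**2 - u**2 - v**2)           # denominator of formula (4)
    return u * z / d, v * z / d
```

With this choice the centre stays at the origin and points between centre and corner are pushed outward, which is the usual barrel-distortion correction behaviour.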
The wide-angle lens calibration and image correction method provided by the invention is computationally simple: the calibration parameter can be calculated directly on embedded equipment, the spherical radius serves as the correction coefficient, and the correction workload is low enough that the method can be implemented in hardware in an FPGA image acquisition system while meeting real-time processing requirements. Because no dedicated calibration rig needs to be built and no host-computer calculation is required, the calibration method is low in cost and simple to operate.
Further, as shown in fig. 6, step S4 includes the following specific steps:
s41: take one pixel point in the Pg as Pgk; take the Y values of Pgk and its 8 surrounding pixel points (9 points in total) to form a 3x3 matrix A, and carry out convolution operations according to equation (6) and equation (7) to obtain the approximate values Gx and Gy of the transverse and longitudinal brightness differences at Pgk,
Compare Gx and Gy with the threshold Vth1: if Gx<Vth1 and Gy>Vth1, take "1" as the output value corresponding to the pixel Pgk, marking it as a boundary point; otherwise take "0" as the output value, marking it as a non-boundary point; traverse all pixels in this way to obtain an image E1 (as shown in fig. 7) containing the boundaries from white squares to black squares;
Likewise compare Gx and Gy with the threshold Vth1: if Gx>Vth1 and Gy<Vth1, take "1" as the output value corresponding to the pixel Pgk, marking it as a boundary point; otherwise take "0" as the output value, marking it as a non-boundary point; traverse all pixels in this way to obtain an image E2 (as shown in fig. 8) containing the boundaries from black squares to white squares;
s42: traversing AND operation is carried out on the E1 and the E2 through a 3x3 full '1' matrix window, if the value of any one pixel point in the AND operation window is '1', the value of the currently traversed pixel point (namely the central pixel point of the window) is '1', otherwise, the value of the currently traversed pixel point is '0', and boundary expansion images Ep1 and Ep2 are obtained after traversal;
s43: the Ep1 and the Ep2 are subjected to AND operation to obtain an image HEp (shown in figure 9) containing (n-1) x (n-1) corner point areas;
s44: and carrying out coordinate averaging operation on each corner point region of the HEp to obtain an image HEp0 (shown in figure 10) containing (n-1) x (n-1) corner points.
The image E1 containing the white-to-black boundaries and the image E2 containing the black-to-white boundaries are obtained first; expanding them and taking their intersection yields the corner regions, and averaging the coordinates within each region finally yields the corner points. This keeps the calculation result stable, reduces the calculation error as far as possible, and lays the foundation for obtaining an accurate correction coefficient.
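Steps S41 through S44 can be sketched as follows. The Sobel kernels stand in for equations (6) and (7), which are not reproduced in the text, and the Vth2 grouping rule from the later paragraph is used for S44; the threshold comparisons follow the text literally:

```python
import numpy as np

# Sobel kernels -- an assumed concrete form of equations (6) and (7).
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int64)
KY = KX.T

def _conv3(img, k):
    """3x3 convolution with edge-replication padding (the direct-extension
    filling used for the outermost rows and columns)."""
    p = np.pad(img.astype(np.int64), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int64)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def dilate3(binary):
    """S42: 3x3 all-'1' window traversal -- the centre pixel becomes 1 if
    any pixel in the window is 1 (a morphological dilation)."""
    p = np.pad(binary, 1, mode="constant")
    h, w = binary.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def corner_regions(pg, vth1):
    """S41-S43: threshold the gradient maps into E1/E2, dilate each into
    Ep1/Ep2, and intersect the dilations to get the corner-region image HEp."""
    gx, gy = _conv3(pg, KX), _conv3(pg, KY)
    e1 = ((gx < vth1) & (gy > vth1)).astype(np.uint8)   # white-to-black boundaries
    e2 = ((gx > vth1) & (gy < vth1)).astype(np.uint8)   # black-to-white boundaries
    return dilate3(e1) & dilate3(e2)

def corner_points(hep, vth2=20):
    """S44: group '1' pixels whose coordinate distance is below Vth2 into
    one corner region, then average the coordinates of each region."""
    groups = []
    for p in map(tuple, np.argwhere(hep == 1)):
        for g in groups:
            if any((p[0] - q[0])**2 + (p[1] - q[1])**2 < vth2**2 for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return [tuple(np.mean(g, axis=0)) for g in groups]
```

This is a software sketch of the same dataflow the text targets for FPGA implementation; the greedy grouping in `corner_points` is one simple realisation of the Vth2 rule.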
Further, the calibration template selects a 4 × 4 checkerboard image. The 4 × 4 checkerboard image requires little computation, and the correction coefficient obtained from the experimental shooting and calculation meets the precision requirement.
Further, Vth1 is an average value of the full white gradation value and the full black gradation value. The average value can make the calculation result more stable.
Further, in S43, the specific determination process of the corner region is as follows: if the coordinate distance between pixel points with the pixel value "1" is smaller than the threshold Vth2, the pixel points belong to the same corner region; otherwise, they belong to different corner regions. Vth2 usually takes the value 20 and can be adjusted according to the actual effect.
Further, in the traversal operations of S41 and S42, when the two outermost rows or columns of pixel points at the image edge are processed, the values of the adjacent edge rows or columns are extended outward to fill the missing entries of the 3 × 3 matrix. Because the corner points, which are the key information required by the calculation, are not located at the image edge, this direct-extension filling does not affect the calculation accuracy and helps reduce the amount of calculation.
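The direct-extension filling described above is the same operation numpy exposes as "edge" padding, which a small example makes concrete:

```python
import numpy as np

# Direct-extension filling at the image border: the outermost row/column
# values are replicated outward by one pixel, so every 3x3 window is full.
a = np.array([[1, 2],
              [3, 4]])
padded = np.pad(a, 1, mode="edge")
# padded is:
# [[1, 1, 2, 2],
#  [1, 1, 2, 2],
#  [3, 3, 4, 4],
#  [3, 3, 4, 4]]
```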
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.