Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation
Technical Field
The invention relates to the technical field of close-range photogrammetry, in particular to a construction method and a calibration method of a linear array rotary scanning camera imaging model based on projection transformation.
Background
The linear array rotary scanning camera is a non-traditional one-dimensional imaging device and is widely applied to the fields of industrial detection and satellite imaging. Line cameras generally have higher sampling rates and spatial resolutions than frame cameras. The line camera has better performance in many close-range photography applications, such as three-dimensional scene reconstruction and attitude measurement of high-speed targets. In these applications, camera geometric calibration is an essential step in order to obtain accurate metrology information for linear array images.
Dynamic geometric calibration of cameras mounted on rotary motion platforms has so far received comparatively little research attention, because line-scan cameras on such platforms have mostly been used for pure imaging purposes; nevertheless, they hold great potential for high-precision measurement, three-dimensional reconstruction, and similar tasks. Researchers have established line-scan camera imaging models suited to rotary platforms based on the imaging characteristics of rotary scanning line cameras. These models account for more error terms than the ideal imaging model, which makes them more complex, increases the difficulty of solving them, and leaves their actual precision to be verified.
Because the imaging model of the traditional rotary scanning camera is complex, it contains many parameters, its solution is computationally involved, and the result depends heavily on the choice of initial values. If a linear transformation method cannot be used to obtain reasonably accurate initial values, the iteration of the nonlinear optimization may fail to converge, or the result may deviate substantially, seriously affecting calibration accuracy. In addition, traditional calibration methods require a large amount of calibration data; for a hyperspectral line-scan camera, the sensor array has relatively few pixels, so many calibration points are difficult to acquire at one time. Traditional calibration methods are therefore unsuitable for the geometric calibration of hyperspectral rotary scanning line cameras, and the low-precision camera parameters they yield seriously degrade three-dimensional reconstruction.
To address these problems, a line-array rotary scanning camera imaging model based on projection transformation is derived, and a calibration method is provided. The method is simple and flexible, requires only a small amount of calibration data, and yields results of high precision. The calibration result obtained by the method can be used for subsequent three-dimensional reconstruction.
Disclosure of Invention
The invention mainly aims to provide a method for constructing and calibrating an imaging model of a rotary scanning line camera suitable for three-dimensional reconstruction. It addresses the problems of numerous parameters, complex solving, and low calibration precision in existing rotary scanning line camera imaging models, thereby making rotary scanning line images suitable for three-dimensional reconstruction and widening their field of application.
In order to achieve the above object, a method for constructing a rotary scanning line camera imaging model based on projection transformation is provided. From the parameters of the camera rotary platform and the positional relationship between the cylindrical imaging surface of the line-scan rotary camera and its tangent plane, the imaging model determines the geometric relationship between the coordinates of image points on the original imaging surface and the coordinates of the corresponding image points on a virtual frame-type image whose imaging plane is the tangent plane, so that the rotary scan image can be projected into a frame-type image. The method specifically comprises the following steps:
step 1, selecting a plane tangent to a cylindrical projection plane of an original rotary scanning line array camera as a virtual frame type imaging plane of projection transformation;
step 2, establishing a pixel coordinate system, a camera coordinate system and a world coordinate system;
step 3, solving the size of the virtual frame-type image after projection transformation according to the geometric relation between the cylindrical surface and the tangent plane thereof and the size of the original linear array image;
step 4, calculating the image point coordinates of the corresponding points on the virtual frame-type image after projection transformation by using the imaging relation and the imaging positions of the space points on the two imaging planes and the known image point coordinates of the original linear array image;
and 5, deducing a back projection formula according to the forward projection relation in the step 3, namely, carrying out back projection on the image point coordinates on the virtual frame type image to obtain the corresponding image point coordinates on the rotary scanning line array image.
Further, the virtual frame-type imaging plane in step 1 is the plane tangent to the cylindrical projection surface along the central scan line of the original rotary scanning line-array image.
Furthermore, the size of the virtual frame-type image after the projection transformation in step 3 is calculated as follows:

n_f = (tan β₁ + tan β₂) / tan α,  m_f = m_r / cos(max(β₁, β₂))

with β₁ = p_x·α and β₂ = β − β₁, where p_x is the index of the reference scan line. In the above formulas, (m_r, n_r) represents the size of the original rotary scan image, i.e. the number of pixels on a scan line and the number of scan lines, respectively; α represents the angle between adjacent scan lines (adjacent pixels in the scan direction) of the rotary scan image; β represents the total scan angle of the rotary scan image; and (m_f, n_f) represents the size of the projectively transformed image.
Furthermore, the image point coordinates on the virtual frame-type image after the projection transformation in step 4 are calculated as follows:

x_f = p_x^f + tan((x_r − p_x)·α) / tan α
y_f = p_y^f + (y_r − p_y) / cos((x_r − p_x)·α)

In the above formulas, α represents the angle between adjacent scan lines of the rotary scan image; (x_r, y_r) represents point coordinates on the original rotary scan image; (x_f, y_f) represents the coordinates of the corresponding point on the frame-type image after projection transformation; p_x denotes the reference scan line of the projective transformation, and (p_x, p_y) represents the image principal point coordinates of the original rotary scanning camera (the reference scan line is generally taken through the principal point); (p_x^f, p_y^f) represents the image principal point coordinates of the frame-type image after projection transformation.
Further, in step 5, the image point coordinates on the line-array rotary scan image after the inverse projection transformation are calculated as follows:

x_r = p_x + arctan((x_f − p_x^f)·tan α) / α
y_r = p_y + (y_f − p_y^f)·cos((x_r − p_x)·α)

In the above formulas, α represents the angle between adjacent scan lines of the rotary scan image; (x_r, y_r) represents point coordinates on the original rotary scan image; (x_f, y_f) represents the coordinates of the corresponding point on the frame-type image after projection transformation; p_x denotes the reference scan line of the projective transformation, and (p_x, p_y) represents the image principal point coordinates of the original rotary scanning camera (the reference scan line is generally taken through the principal point); (p_x^f, p_y^f) represents the image principal point coordinates of the frame-type image after projection transformation.
In addition, the invention also provides a linear array rotary scanning camera calibration method based on projection transformation, which is based on the imaging model in the technical scheme and adopts a direct linear transformation method and a nonlinear optimization method to calibrate the linear array rotary scanning camera, and specifically comprises the following steps:
the method comprises the following steps that firstly, a rotary scanning line array camera is used for collecting close-range photogrammetry three-dimensional control field images;
acquiring pixel coordinates of the calibration point in the three-dimensional control field image by adopting an automatic extraction method or a manual extraction method;
thirdly, carrying out projection transformation on the coordinates of the obtained original image calibration points by using an imaging model to obtain the coordinates of the calibration points on the virtual frame type image;
step four, solving camera parameters by using the calibration data after projection transformation and adopting a direct linear transformation method;
And step five, taking the camera parameters obtained in step four and the adjacent scan-line angle α given by the camera as initial values, and taking the minimized spatial point reprojection error as the optimization target, the scan-line angle and the camera exterior parameters are jointly adjusted by iterative nonlinear optimization, yielding the final camera calibration result.
Furthermore, in step three, the world coordinates (X, Y, Z) of the three-dimensional space point corresponding to a calibration point and the pixel coordinates (x_f, y_f) of the point projected onto the frame image satisfy the following formula:

λ·[x_f, y_f, 1]^T = M·[X, Y, Z, 1]^T,  M = [a₁ a₂ a₃ a₄; b₁ b₂ b₃ b₄; c₁ c₂ c₃ c₄]

where λ represents a scale factor, (x_f, y_f) represents the pixel coordinates of the image point, M represents the camera matrix, (X, Y, Z) represents the world coordinates of the three-dimensional space point, and a₁, a₂, a₃, a₄, b₁, b₂, b₃, b₄, c₁, c₂, c₃, c₄ represent the camera matrix elements.
Furthermore, the direct linear transformation formulas in step four are:

x_f = (a₁X + a₂Y + a₃Z + a₄) / (c₁X + c₂Y + c₃Z + c₄)
y_f = (b₁X + b₂Y + b₃Z + b₄) / (c₁X + c₂Y + c₃Z + c₄)

where (x_f, y_f) represents the pixel coordinates of an image point, a₁, a₂, a₃, a₄, b₁, b₂, b₃, b₄, c₁, c₂, c₃, c₄ represent the camera matrix elements, and (X, Y, Z) represents the world coordinates of the three-dimensional space point.
Further, the calculation formula for minimizing the spatial point reprojection error is as follows:

min Σ_{i=1}^{N} [(x̂_i^r − x_i^r)² + (ŷ_i^r − y_i^r)²]

where x̂_i^r and ŷ_i^r are obtained by applying the inverse projection transformation (the image point coordinate formula of the line-array rotary scan image) to the virtual frame-image point coordinates estimated from the camera matrix and the world coordinates of the points, N is the number of calibration points, and x_i^r and y_i^r are the measured coordinates of the points on the rotary scan image.
The foregoing is a brief summary of the invention, covering the basic principles and implementation steps of its methods. This summary is not intended to be a complete description of the invention, to identify its key or critical elements, or to delineate its scope, but rather to present its concepts in a simplified form.
Compared with the prior art, the invention has the following advantages and beneficial effects: the method is simple and flexible, can calibrate the camera with only a small amount of calibration data, and obtains results of high precision. In addition, subsequent processing can make full use of existing results in multi-view geometry, and epipolar images are easier to construct, making the method better suited to three-dimensional reconstruction.
Drawings
FIG. 1 is a pixel coordinate system;
FIG. 2 is a projective transformation geometry diagram;
FIG. 3 is a flow chart of a camera calibration method;
fig. 4 is a close-up photogrammetry control field image.
Detailed Description
In order to explain technical solutions and technical advantages of the present invention in more detail, the present invention will be described more fully by way of specific embodiments with reference to the accompanying drawings.
Firstly, the method for constructing the imaging model of the rotary scanning line camera based on projection transformation comprises the following specific steps:
step 1, selecting a plane tangent to a cylindrical projection plane of an original rotary scanning line array camera as a virtual frame type imaging plane for projection transformation, and generally selecting a plane tangent to the cylindrical projection plane and the central line of an image of the original rotary scanning line array;
step 2, establishing a pixel coordinate system, a camera coordinate system and a world coordinate system;
step 3, solving the size of the virtual frame-type image after projection transformation according to the geometric relation between the cylindrical surface and the tangent plane thereof and the size of the original linear array image;
step 4, calculating the image point coordinates of the corresponding points on the virtual frame-type image after projection transformation by using the imaging relation and the imaging positions of the space points on the two imaging planes and the known image point coordinates of the original linear array image;
and 5, deducing a back projection formula according to the forward projection relation in the step 3, namely, carrying out back projection on the image point coordinates on the virtual frame type image to obtain the corresponding image point coordinates on the rotary scanning line array image.
The specific implementation steps are as follows. The image pixel coordinate system is shown in fig. 1: it takes the upper left corner of the image as the origin, the horizontal direction as the x-axis, and the vertical direction as the y-axis.
As shown in fig. 2, a camera coordinate system is constructed with point C as the origin, CG as the x-axis, and the rotation axis as the z-axis, forming a right-handed coordinate system. Point E is an arbitrary point on the virtual frame image, and point B is its corresponding point on the rotary scan image; the conversion relationship between pixel coordinates on the rotary scan image and on the frame image is derived from the positional relationship of these two points in the camera coordinate system. The symbols are defined as follows:
(x_r, y_r) represents pixel coordinates on the original rotary scan image;
(x_f, y_f) represents pixel coordinates on the frame image after projection transformation;
(m_r, n_r) represents the pixel size of the original rotary scan image, i.e. the number of pixels on one scan line and the number of scan lines, respectively;
p_x denotes the reference scan line of the projective transformation;
(p_x, p_y) represents the image principal point coordinates of the original rotary scanning camera; the line connecting this point with the photographing centre forms the principal axis of the frame-type image after projection transformation, and the plane through this point perpendicular to the principal axis forms the imaging plane of the frame-type image after projection transformation;
(p_x^f, p_y^f) represents the image principal point coordinates of the frame-type image after projection transformation; for convenience of calculation, this point is generally placed at the centre of the projected image, that is, p_x^f = n_f/2 and p_y^f = m_f/2;
(m_f, n_f) represents the size of the projectively transformed image;
α denotes the angle between adjacent scan lines (adjacent pixels in the scan direction) of the rotary scan image;
γ denotes the field angle in the x direction of the central pixel of the frame image after projection transformation;
β denotes the total scan angle of the rotary scan image.
according to the imaging characteristics of the rotary scanning line camera, the calculation formula of α is as follows:
let the size of each pixel be d, and its calculation formula be:
as shown in FIG. 2, point G represents the principal point of the frame-in-frame image, which corresponds to p of the original scanned imagexColumn scan line, corner β1And β2The calculation formulas of (A) and (B) are respectively as follows:
FG and KG represent the widths of the right and left sides of the projected image, with lengths denoted l_FG and l_KG, and the line DG represents the focal length of the camera, denoted f. Then l_FG and l_KG are calculated as:
l_FG = f·tan β₁,  l_KG = f·tan β₂
therefore, the width calculation formula of the projective transformed image is as follows:
it can be seen that because the focal length of the image after projection transformation is equal to that of the original linear array camera during projection, the width of the image after projection is irrelevant to the focal length, only to the included angle between the scanning lines of the linear array image, the number of the scanning lines and the selection of the central scanning line during projection transformation, and irrelevant to other factors.
In addition, by the similar-triangle relation, with DH equal to the focal length f and the pixel size unchanged before and after projection, each column of the original image is stretched on the tangent plane by the factor 1/cos of its scan angle; the pixel lengths of EF and DF are scaled accordingly. The height of the projectively transformed image is therefore:
m_f = m_r / cos(max(β₁, β₂))
when the central straight line of the original scanned image is taken as the reference straight line,while the main point of the image of the rotary scanning camera is at the center of the scanning line array, i.e. when
And
the calculation of the size of the projective transformed image can be simplified as follows:
for any point E on the virtual frame image, a similar derivation method can be adopted to obtain the transformation relation with the corresponding point on the rotation scanning image.
The coordinates after projection transformation are calculated as:
x_f = p_x^f + tan((x_r − p_x)·α) / tan α
y_f = p_y^f + (y_r − p_y) / cos((x_r − p_x)·α)
the inverse projective transformation formula is then:
if the visible angle of the image principal point pixel of the frame image after projection in the x direction is specified to be
The size of the projectively transformed image pixels in the x-direction is then
The x-coordinates before and after projective transformation satisfy the following relation:
the final projection calculation formula is therefore:
a calibration method of a rotary scanning line camera based on projection transformation comprises the following steps:
step one, a rotary scanning line array camera is adopted to obtain a calibration field image.
And step two, processing the image obtained in step one: a Gaussian filter is applied, image mark points are then extracted by ellipse fitting, and the corresponding world point coordinates are obtained from the numbers and positions of the extracted calibration points.
And step three, utilizing the calibration point data obtained in the step two to obtain virtual frame type camera calibration data through projection transformation.
Step four, solving camera parameters by adopting the data obtained in the step three through a direct linear transformation method;
and step five, taking the camera parameters obtained in the step four and the included angle α between the adjacent scanning lines given by the camera as initial values, taking the minimized space point reprojection error as an optimization target, performing combined adjustment on the included angle of the scanning lines and the camera external parameters, and performing iterative optimization by adopting a nonlinear optimization method, thereby obtaining a final camera calibration result.
Wherein, the imaging geometry of the frame image obtained after the projection transformation in step three conforms to the pinhole (perspective) imaging constraint, i.e. the world coordinates (X, Y, Z) of the three-dimensional space point corresponding to a calibration point and the image point (x_f, y_f) projected onto the frame image satisfy:

λ·[x_f, y_f, 1]^T = M·[X, Y, Z, 1]^T,  M = [a₁ a₂ a₃ a₄; b₁ b₂ b₃ b₄; c₁ c₂ c₃ c₄]

where λ represents a scale factor, (x_f, y_f) the pixel coordinates of the image point, M the camera matrix, (X, Y, Z) the world coordinates of the three-dimensional space point, and a₁, a₂, a₃, a₄, b₁, b₂, b₃, b₄, c₁, c₂, c₃, c₄ the camera matrix elements.
The direct linear transformation formulas in step four are:

x_f = (a₁X + a₂Y + a₃Z + a₄) / (c₁X + c₂Y + c₃Z + c₄)
y_f = (b₁X + b₂Y + b₃Z + b₄) / (c₁X + c₂Y + c₃Z + c₄)

where (x_f, y_f) represents the pixel coordinates of an image point, a₁, a₂, a₃, a₄, b₁, b₂, b₃, b₄, c₁, c₂, c₃, c₄ the camera matrix elements, and (X, Y, Z) the world coordinates of the three-dimensional space point.
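The direct linear transformation can be solved by stacking two homogeneous equations per calibration point and taking the null-space vector of the resulting system via SVD. The following Python sketch illustrates the computation; the function names are this example's, not the patent's.

```python
import numpy as np

def dlt_solve(world_pts, image_pts):
    """Estimate the 3x4 camera matrix M (up to scale) from N >= 6
    correspondences between world points (X, Y, Z) and frame-image pixels
    (x_f, y_f): each correspondence contributes two rows of the homogeneous
    system A m = 0, whose least-squares solution is the right singular
    vector of A with the smallest singular value."""
    A = []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(M, pt):
    """Apply the camera matrix: returns pixel coordinates (x_f, y_f)."""
    h = M @ np.append(np.asarray(pt, dtype=float), 1.0)
    return h[:2] / h[2]
```

On noise-free correspondences in general (non-coplanar) position, the estimated matrix reproduces the projections of the true one exactly, up to the overall scale ambiguity.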
Therefore, the calculation formula for minimizing the reprojection error of the spatial points on the image is:

min Σ_{i=1}^{N} [(x̂_i^r − x_i^r)² + (ŷ_i^r − y_i^r)²]

where x̂_i^r and ŷ_i^r are obtained by applying the inverse projection transformation (the image point coordinate formula of the line-array rotary scan image) to the virtual frame-image point coordinates estimated from the camera matrix and the world coordinates of the points, N is the number of calibration points, and x_i^r and y_i^r are the measured coordinates of the points on the rotary scan image.
If the parameters of the rotary scan imaging were exact, i.e. if the scan-line angle α were known accurately, the above equations would degenerate to a standard frame camera imaging model, which can be solved with a linear transformation method.
In practical applications, however, the adjacent scan-line angle α given by the rotary scanning platform deviates from its actual value, and using the system-supplied value directly would make the calibration result inaccurate. Calibration is therefore divided into two steps: first, the α angle given by the system is taken as an initial value and initial camera parameters are solved by the direct linear transformation method; then the camera parameters and the α angle are optimized with the reprojection error of the image point coordinates as the objective function, finally yielding more accurate camera parameters. The specific calibration flow is shown in fig. 3.
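The two-step procedure can be prototyped end-to-end. The sketch below is a hedged stand-in for the Levenberg-Marquardt joint adjustment described above: it substitutes a simple golden-section search over α, re-solving the DLT at every candidate and scoring each candidate by the reprojection error on the rotary image. All function names and the search strategy are illustrative assumptions, not the patent's implementation.

```python
import math
import numpy as np

def forward(x_r, y_r, a, px_r, py_r, px_f, py_f):
    """Rotary pixel -> virtual frame pixel (gamma taken equal to a)."""
    t = (x_r - px_r) * a
    return px_f + math.tan(t) / math.tan(a), py_f + (y_r - py_r) / math.cos(t)

def backward(x_f, y_f, a, px_r, py_r, px_f, py_f):
    """Virtual frame pixel -> rotary pixel (inverse of forward)."""
    t = math.atan((x_f - px_f) * math.tan(a))
    return px_r + t / a, py_r + (y_f - py_f) * math.cos(t)

def dlt(world, img):
    """Step 1: camera matrix from correspondences via SVD."""
    A = []
    for (X, Y, Z), (x, y) in zip(world, img):
        A += [[X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x],
              [0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y]]
    return np.linalg.svd(np.asarray(A, dtype=float))[2][-1].reshape(3, 4)

def reproj_error(a, world, rot_pts, geom):
    """Mean squared reprojection error measured on the rotary image:
    project the rotary points to the frame image with angle a, fit the
    DLT, reproject, and back-project onto the rotary image."""
    img = [forward(x, y, a, *geom) for x, y in rot_pts]
    M = dlt(world, img)
    err = 0.0
    for (X, Y, Z), (xr, yr) in zip(world, rot_pts):
        h = M @ np.array([X, Y, Z, 1.0])
        xe, ye = backward(h[0] / h[2], h[1] / h[2], a, *geom)
        err += (xe - xr) ** 2 + (ye - yr) ** 2
    return err / len(world)

def refine_alpha(a0, world, rot_pts, geom, span=0.1, iters=60):
    """Step 2: golden-section search for the scan-line angle minimising
    the reprojection error (stand-in for the LM joint adjustment)."""
    lo, hi = a0 * (1 - span), a0 * (1 + span)
    g = (math.sqrt(5.0) - 1) / 2
    a, b = hi - g * (hi - lo), lo + g * (hi - lo)
    fa = reproj_error(a, world, rot_pts, geom)
    fb = reproj_error(b, world, rot_pts, geom)
    for _ in range(iters):
        if fa < fb:
            hi, b, fb = b, a, fa
            a = hi - g * (hi - lo)
            fa = reproj_error(a, world, rot_pts, geom)
        else:
            lo, a, fa = a, b, fb
            b = lo + g * (hi - lo)
            fb = reproj_error(b, world, rot_pts, geom)
    return (lo + hi) / 2
```

On synthetic data generated with a known angle, the error is near zero at the true α and the search recovers it from a perturbed initial value, mirroring the behaviour expected of the joint adjustment.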
Example 1
The method comprises the steps of collecting images in a close-range photogrammetry three-dimensional control field by combining a linear array imaging camera with a rotary scanning platform, and calibrating the linear array imaging camera by using a control point.
TABLE 1 Parameter settings for camera shots

Scanning angle/deg | Integration time/ms | Rotational speed/(deg/s) | Scan lines per second
53 | 200 | 0.177 | 5.04
Fig. 4 shows an acquired close-range photogrammetry three-dimensional control field image. The control point coordinates in the image can be extracted automatically by computer vision methods or by manually marking the points. A total of 53 marker points were extracted.
Taking the rotation angle α set during camera shooting as the true value, the projected point coordinates are calculated with the image point coordinate formula of the virtual frame image after projection transformation, and the camera is then calibrated by the direct linear transformation method from the projected point coordinates and the corresponding space point coordinates. Table 2 lists the solved camera parameters and table 3 the reprojection errors.
TABLE 2 Camera parameters

Parameter | x0 (pixel) | y0 (pixel) | fx (pixel) | fy (pixel) | X0 (mm) | Y0 (mm) | Z0 (mm)
DLT | 353.58 | 804.85 | 1724.5 | 1689.08 | 2217.04 | 1296.18 | 138.942
TABLE 3 reprojection error
Considering that the inaccurate adjacent scan-line angle α given by the system affects the camera calibration result, nonlinear optimization is performed with the camera parameters obtained by the direct linear transformation method and the system-supplied α angle as initial values. The objective function of the optimization is:

min Σ_{i=1}^{N} [(x̂_i^r − x_i^r)² + (ŷ_i^r − y_i^r)²]
the Levenberg-Marquardt algorithm is adopted to carry out iteration to solve the optimal solution, and the optimization result is shown in the table 4:
TABLE 4 Optimization results

Parameter | Original α/deg | Optimized α/deg
Result | 0.0353 | 0.0337
The reprojection error is calculated with the optimized parameters: the solved camera parameters and the three-dimensional coordinates of the control points are substituted into the imaging model formula to obtain the reprojected image point coordinates on the virtual frame image; the reprojected coordinates on the original line-array rotary scan image are then obtained with the inverse projection formula; and the Euclidean distances between the reprojected and the extracted image point coordinates give the reprojection errors. The results are shown in table 5:
TABLE 5 optimized post-reprojection error
According to the calibration results, the error obtained by calibrating the projectively transformed line-array image with an area-array (frame) camera calibration method is less than 1 pixel, showing that the transformed line-array image closely matches the characteristics of an area-array image. The experimental results confirm the accuracy of the projective transformation relation and demonstrate the feasibility of calibrating a rotary scanning line camera by the projection transformation method.