[go: up one dir, main page]

CN109272574B - Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation - Google Patents

Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation Download PDF

Info

Publication number
CN109272574B
CN109272574B CN201811052380.8A CN201811052380A CN109272574B CN 109272574 B CN109272574 B CN 109272574B CN 201811052380 A CN201811052380 A CN 201811052380A CN 109272574 B CN109272574 B CN 109272574B
Authority
CN
China
Prior art keywords
image
camera
coordinates
representing
rotary scanning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811052380.8A
Other languages
Chinese (zh)
Other versions
CN109272574A (en
Inventor
巫兆聪
闫钊
苏琳
王鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201811052380.8A priority Critical patent/CN109272574B/en
Publication of CN109272574A publication Critical patent/CN109272574A/en
Application granted granted Critical
Publication of CN109272574B publication Critical patent/CN109272574B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

本发明提供了一种基于投影变换的线阵旋转扫描相机成像模型构建方法和标定方法,其中成像模型构建方法包括:利用线阵旋转扫描相机和框幅式相机的成像原理、相机旋转平台参数、线阵旋转扫描相机成像平面与其切平面的位置关系确定线阵旋转扫描相机原始成像平面上的像点坐标与以切平面为成像平面的虚拟框幅式图像上对应像点坐标的几何关系,从而确定一种新的线阵旋转扫描相机成像模型;标定方法包括:利用上述成像模型,根据获取的控制点和对应像点坐标,采用直接线性变换方法和非线性优化方法进行线阵旋转扫描相机标定。与传统方法相比,本发明方法简单快速,并具有较高精度;更易于构建核线影像,因此更加适用于三维重建。

Figure 201811052380

The present invention provides a method for constructing an imaging model and a calibration method for a linear scan camera based on projection transformation, wherein the method for constructing the imaging model includes: using the imaging principle of the linear scan camera and the frame camera, camera rotating platform parameters, The positional relationship between the imaging plane of the linear scan camera and its tangent plane determines the geometric relationship between the image point coordinates on the original imaging plane of the linear scan camera and the corresponding image point coordinates on the virtual frame image with the tangent plane as the imaging plane, so that A new imaging model of a line array rotary scanning camera is determined; the calibration method includes: using the above imaging model, according to the obtained control point and corresponding image point coordinates, using a direct linear transformation method and a nonlinear optimization method to calibrate the line array rotary scanning camera . Compared with the traditional method, the method of the present invention is simple, fast, and has higher precision; it is easier to construct epipolar images, and thus is more suitable for three-dimensional reconstruction.

Figure 201811052380

Description

Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation
Technical Field
The invention relates to the technical field of close-range photogrammetry, in particular to a construction method and a calibration method of a linear array rotary scanning camera imaging model based on projection transformation.
Background
The linear array rotary scanning camera is a non-traditional one-dimensional imaging device and is widely applied to the fields of industrial detection and satellite imaging. Line cameras generally have higher sampling rates and spatial resolutions than frame cameras. The line camera has better performance in many close-range photography applications, such as three-dimensional scene reconstruction and attitude measurement of high-speed targets. In these applications, camera geometric calibration is an essential step in order to obtain accurate metrology information for linear array images.
The dynamic camera geometric calibration method based on the rotary motion platform has less research results on the problems at present. Because the linear array camera based on the rotary motion platform is usually only used for pure imaging purposes, the linear array camera has great application potential in the aspects of high-precision measurement, three-dimensional reconstruction and the like. At present, researchers establish a linear array camera imaging model suitable for a rotary platform according to the imaging characteristics of a rotary scanning linear array camera, and the model considers more error items on the basis of an ideal imaging model, so that the model becomes more complex, the difficulty of model calculation is increased, and the specific precision needs to be verified.
Because the imaging model of the traditional rotary scanning camera is complex, the parameters in the model are many, the calculation is complex, and the result is seriously dependent on the selection of the initial value. If a linear transformation method cannot be used to obtain a more accurate initial value, iteration of the nonlinear optimization process may not be converged or the deviation of the obtained result is large, which seriously affects the accuracy of camera calibration. In addition, the traditional calibration method needs more calibration data, and for the hyperspectral linear array camera, because the number of pixels on the sensor array is less, a lot of calibration points are difficult to acquire at one time. Therefore, the traditional calibration method is not suitable for geometric calibration of the hyperspectral rotary scanning line camera, and the camera parameters with low precision obtained by the traditional calibration method can seriously influence the effect of three-dimensional reconstruction.
Aiming at the problems, a linear array rotary scanning camera imaging model based on projection transformation is deduced, and a calibration method is provided, the method is simple and flexible, camera calibration can be carried out only by less calibration data, and a result with higher precision can be obtained. And the calibration result obtained by the method can be used for subsequent three-dimensional reconstruction.
Disclosure of Invention
The invention mainly aims to provide a method for constructing and calibrating an imaging model of a rotary scanning line camera suitable for three-dimensional reconstruction, which solves the problems of more parameters, complex resolving and low calibration result precision of the existing imaging model of the rotary scanning line camera, thereby solving the problem that a rotary scanning line image is not suitable for three-dimensional reconstruction, and further widening the application field of the rotary scanning line image.
In order to achieve the above object, a method for constructing a rotary scanning line camera imaging model based on projection transformation is provided, the imaging model determines a geometric relationship between coordinates of image points on an original imaging plane of a line rotary scanning camera and coordinates of corresponding image points on a virtual frame-type image taking a tangent plane as an imaging plane according to parameters of a camera rotary platform and a positional relationship between the imaging plane of the line rotary scanning camera and the tangent plane of the line rotary scanning camera, so as to project the rotary scanning image into the frame-type image, and the method specifically comprises the following steps:
step 1, selecting a plane tangent to a cylindrical projection plane of an original rotary scanning line array camera as a virtual frame type imaging plane of projection transformation;
step 2, establishing a pixel coordinate system, a camera coordinate system and a world coordinate system;
step 3, solving the size of the virtual frame-type image after projection transformation according to the geometric relation between the cylindrical surface and the tangent plane thereof and the size of the original linear array image;
step 4, calculating the image point coordinates of the corresponding points on the virtual frame-type image after projection transformation by using the imaging relation and the imaging positions of the space points on the two imaging planes and the known image point coordinates of the original linear array image;
and 5, deducing a back projection formula according to the forward projection relation in the step 3, namely, carrying out back projection on the image point coordinates on the virtual frame type image to obtain the corresponding image point coordinates on the rotary scanning line array image.
Further, the virtual frame type imaging plane in step 1 is a plane tangent to the cylindrical projection plane and to the centerline of the original rotating scanning line array image.
Furthermore, the size calculation formula of the virtual frame image after the projection transformation in the step 3 is as follows,
Figure BDA0001794902330000021
in the above formula, (m)r,nr) Representing the size of the original rotated scan image, representing the number of dots on a scan line and the number of scan lines, respectively, α tableIndicating the angle between adjacent pixels of the rotated scanned image, β indicating the total scan angle of the rotated scanned image, mf,nf) Representing the size of the projectively transformed image.
Furthermore, the calculation formula of the image point coordinates of the virtual frame image after projection transformation in the step 4 is as follows,
Figure BDA0001794902330000022
in the above formula, α represents the angle between adjacent pixels in the rotated scanned image, (x)r,yr) Representing point coordinates on the original rotational scan image; (x)f,yf) Representing the coordinates of points on the frame-type image after projection transformation;
Figure BDA0001794902330000031
representing the reference scan line at the time of the projective transformation,
Figure BDA0001794902330000032
representing the image principal point coordinates of the original rotary scanning camera;
Figure BDA0001794902330000033
and the image principal point coordinates of the frame image after the projection transformation are shown.
Further, in step 5, the calculation formula of the image point coordinates of the linear array rotation scanning image after the inverse projection transformation is as follows:
Figure BDA0001794902330000034
in the above formula, α represents the angle between adjacent pixels in the rotated scanned image, (x)r,yr) Representing point coordinates on the original rotational scan image; (x)f,yf) Representing the coordinates of points on the frame-type image after projection transformation;
Figure BDA0001794902330000035
representing the reference scan line at the time of the projective transformation,
Figure BDA0001794902330000036
representing the image principal point coordinates of the original rotary scanning camera;
Figure BDA0001794902330000037
and the image principal point coordinates of the frame image after the projection transformation are shown.
In addition, the invention also provides a linear array rotary scanning camera calibration method based on projection transformation, which is based on the imaging model in the technical scheme and adopts a direct linear transformation method and a nonlinear optimization method to calibrate the linear array rotary scanning camera, and specifically comprises the following steps:
the method comprises the following steps that firstly, a rotary scanning line array camera is used for collecting close-range photogrammetry three-dimensional control field images;
acquiring pixel coordinates of the calibration point in the three-dimensional control field image by adopting an automatic extraction method or a manual extraction method;
thirdly, carrying out projection transformation on the coordinates of the obtained original image calibration points by using an imaging model to obtain the coordinates of the calibration points on the virtual frame type image;
step four, solving camera parameters by using the calibration data after projection transformation and adopting a direct linear transformation method;
and step five, taking the camera parameters obtained in the step four and the included angle α between the adjacent scanning lines given by the camera as initial values, taking the minimized space point reprojection error as an optimization target, performing combined adjustment on the included angle of the scanning lines and the camera external parameters, and performing iterative optimization by adopting a nonlinear optimization method, thereby obtaining a final camera calibration result.
Furthermore, in step three, the world coordinates (X, Y, Z) of the three-dimensional space point corresponding to the calibration point and the pixel coordinates (X) of the point projected on the frame imagef,yf) The following formula is satisfied,
Figure BDA0001794902330000038
where λ represents a scale factor, (x)f,yf) Pixel coordinates representing image points, M represents a camera matrix, (X, Y, Z) world coordinate system coordinates representing three-dimensional space points, a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4Representing camera matrix elements.
Furthermore, the direct linear transformation formula in step four is,
Figure BDA0001794902330000041
wherein (x)f,yf) Pixel coordinates representing an image point, a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4Representing camera matrix elements, (X, Y, Z) representing three-dimensional spatial point world coordinate system coordinates.
Further, the calculation formula for minimizing the spatial point reprojection error is as follows:
Figure BDA0001794902330000042
in the formula (I), the compound is shown in the specification,
Figure BDA0001794902330000043
and
Figure BDA0001794902330000044
a formula for calculating the coordinates of the image points of the linear array rotary scanning image after the inverse projection transformation, wherein N is the number of the calibration points,
Figure BDA0001794902330000045
representing virtual framed image point coordinates, x, estimated using camera matrix and world coordinates of the pointsi rAnd yi rThe measured coordinate values of the points on the rotated scanned image are indicated.
The foregoing is a brief summary of the invention, including the basic principles and implementation steps of the methods of the invention. The foregoing summary, however, is not intended to be a complete description of the invention, nor is it intended to be used to identify key or critical elements of the invention or to delineate the scope of the invention, but rather to present some sort of shorthand description of the spirit of the invention.
Compared with the prior art, the invention has the advantages and beneficial effects that: the method is simple and flexible, can calibrate the camera by only needing less calibration data, and can obtain a result with higher precision. In addition, the method can fully utilize the existing research results of multi-view geometry during subsequent processing; the invention is easier to construct the epipolar line image, thus being more suitable for three-dimensional reconstruction.
Drawings
FIG. 1 is a pixel coordinate system;
FIG. 2 is a projective transformation geometry diagram;
FIG. 3 is a flow chart of a camera calibration method;
fig. 4 is a close-up photogrammetry control field image.
Detailed Description
In order to explain technical solutions and technical advantages of the present invention in more detail, the present invention will be described more fully by way of specific embodiments with reference to the accompanying drawings.
Firstly, the method for constructing the imaging model of the rotary scanning line camera based on projection transformation comprises the following specific steps:
step 1, selecting a plane tangent to a cylindrical projection plane of an original rotary scanning line array camera as a virtual frame type imaging plane for projection transformation, and generally selecting a plane tangent to the cylindrical projection plane and the central line of an image of the original rotary scanning line array;
step 2, establishing a pixel coordinate system, a camera coordinate system and a world coordinate system;
step 3, solving the size of the virtual frame-type image after projection transformation according to the geometric relation between the cylindrical surface and the tangent plane thereof and the size of the original linear array image;
step 4, calculating the image point coordinates of the corresponding points on the virtual frame-type image after projection transformation by using the imaging relation and the imaging positions of the space points on the two imaging planes and the known image point coordinates of the original linear array image;
and 5, deducing a back projection formula according to the forward projection relation in the step 3, namely, carrying out back projection on the image point coordinates on the virtual frame type image to obtain the corresponding image point coordinates on the rotary scanning line array image.
The specific implementation steps are as follows, the image pixel coordinate system is as shown in fig. 1, the pixel coordinate system takes the upper left corner of the image as the origin, the horizontal direction is the x-axis, and the vertical direction is the y-axis.
As shown in fig. 2, a camera coordinate system is constructed with point C as the origin, CG as the x-axis, and a rotation axis as the z-axis, which forms the right-hand coordinate system. And E point is any point on the virtual frame image, B point is the corresponding point of the virtual frame image, and the conversion relation of the pixel coordinates on the rotating scanning image and the frame image is deduced through the position relation of the two points in the camera coordinate system. Specific symbols are defined as follows:
(xr,yr) Representing pixel coordinates on the original rotated scanned image;
(xf,yf) Representing pixel coordinates on the frame image after projection transformation;
(mr,nr) The pixel size of the original rotation scanning image is represented, and the pixel size represents the number of points on one scanning line and the number of the scanning lines respectively;
Figure BDA0001794902330000051
in
Figure BDA0001794902330000052
Representing the reference scan line at the time of the projective transformation,
Figure BDA0001794902330000053
representing the image principal point coordinates of the original rotational scanning camera. The connecting line of the coordinate and the photographing center forms a main axis of the frame-type image after the projection transformation, and a plane which passes through the point and is vertical to the main axis forms an imaging plane of the frame-type image after the projection transformation;
Figure BDA0001794902330000054
the image principal point coordinates of the frame-type image after projective transformation are expressed, and for the convenience of calculation, in the projection process, the point is generally positioned at the central point of the image after projective transformation, that is:
Figure BDA0001794902330000055
(mf,nf) Representing the size of the projectively transformed image;
α denotes the angle between adjacent pixels of the rotated scanned image;
Figure BDA0001794902330000056
the field angle of the central pixel of the frame image after projection transformation in the x direction is represented;
β denotes the total scan angle of the rotated scan image;
according to the imaging characteristics of the rotary scanning line camera, the calculation formula of α is as follows:
Figure BDA0001794902330000061
let the size of each pixel be d, and its calculation formula be:
Figure BDA0001794902330000062
as shown in FIG. 2, point G represents the principal point of the frame-in-frame image, which corresponds to p of the original scanned imagexColumn scan line, corner β1And β2The calculation formulas of (A) and (B) are respectively as follows:
Figure BDA0001794902330000063
FG and KG represent the widths of the right and left sides of the projected image, respectively, and the length is denoted as lFGAnd lKGAnd the line DG represents the focal length of the camera, and is denoted as f, then lFGAnd lKGThe calculation formula of (2) is as follows:
Figure BDA0001794902330000064
therefore, the width calculation formula of the projective transformed image is as follows:
Figure BDA0001794902330000065
it can be seen that because the focal length of the image after projection transformation is equal to that of the original linear array camera during projection, the width of the image after projection is irrelevant to the focal length, only to the included angle between the scanning lines of the linear array image, the number of the scanning lines and the selection of the central scanning line during projection transformation, and irrelevant to other factors.
In addition, according to the relation of similar triangles have
Figure BDA0001794902330000066
And DH is equal to f, or f,
Figure BDA0001794902330000067
since the pixel size before and after projection is unchanged, the pixel length calculation formula of EF is:
Figure BDA0001794902330000068
the formula for calculating the pixel length of DF in the same way is as follows:
Figure BDA0001794902330000071
therefore, the calculation formula of the height of the image after projective transformation is as follows:
Figure BDA0001794902330000072
when the central straight line of the original scanned image is taken as the reference straight line,while the main point of the image of the rotary scanning camera is at the center of the scanning line array, i.e. when
Figure BDA0001794902330000073
And
Figure BDA0001794902330000074
the calculation of the size of the projective transformed image can be simplified as follows:
Figure BDA0001794902330000075
for any point E on the virtual frame image, a similar derivation method can be adopted to obtain the transformation relation with the corresponding point on the rotation scanning image.
The calculation formula for obtaining the coordinates after projection transformation is as follows:
Figure BDA0001794902330000076
the inverse projective transformation formula is then:
Figure BDA0001794902330000077
if the visible angle of the image principal point pixel of the frame image after projection in the x direction is specified to be
Figure BDA0001794902330000078
The size of the projectively transformed image pixels in the x-direction is then
Figure BDA0001794902330000079
The x-coordinates before and after projective transformation satisfy the following relation:
Figure BDA00017949023300000710
the final projection calculation formula is therefore:
Figure BDA00017949023300000711
a calibration method of a rotary scanning line camera based on projection transformation comprises the following steps:
step one, a rotary scanning line array camera is adopted to obtain a calibration field image.
And step two, processing the image obtained in the step one, mainly adopting a Gaussian filtering method, then extracting image mark points by using an ellipse fitting method, and obtaining corresponding world point coordinates according to the extracted calibration point numbers and positions.
And step three, utilizing the calibration point data obtained in the step two to obtain virtual frame type camera calibration data through projection transformation.
Step four, solving camera parameters by adopting the data obtained in the step three through a direct linear transformation method;
and step five, taking the camera parameters obtained in the step four and the included angle α between the adjacent scanning lines given by the camera as initial values, taking the minimized space point reprojection error as an optimization target, performing combined adjustment on the included angle of the scanning lines and the camera external parameters, and performing iterative optimization by adopting a nonlinear optimization method, thereby obtaining a final camera calibration result.
Wherein, the imaging geometry of the frame image obtained after the projection transformation in the third step accords with the shooting geometric constraint, namely the world coordinates (X, Y, Z) of the three-dimensional space point corresponding to the calibration point and the image point (X) projected to the frame imagef,yf) The following formula is satisfied:
Figure BDA0001794902330000081
in the formula, λ represents a scale conversion factor, (x)f,yf) Pixel coordinates representing image points, M represents a camera matrix, (X, Y, Z) world coordinate system coordinates representing three-dimensional space points, a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4Representing camera matrix elements.
The direct linear transformation formula in step four is,
Figure BDA0001794902330000082
in the formula (x)f,yf) Pixel coordinates representing an image point, a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4Representing camera matrix elements, (X, Y, Z) representing three-dimensional spatial point world coordinate system coordinates.
The calculation formula for minimizing the reprojection error for projecting a spatial point onto an image is therefore:
Figure BDA0001794902330000083
in the formula (I), the compound is shown in the specification,
Figure BDA0001794902330000084
and
Figure BDA0001794902330000085
a formula for calculating the coordinates of the image points of the linear array rotary scanning image after the inverse projection transformation, wherein N is the number of the calibration points,
Figure BDA0001794902330000086
representing virtual framed image point coordinates, x, estimated using camera matrix and world coordinates of the pointsi rAnd yi rThe measured coordinate values of the points on the rotated scanned image are indicated.
If the parameters obtained when considering rotational scan imaging are accurate values, i.e., the scan line clip angle α is known, then the above equation degenerates to a standard frame-and-frame camera imaging model, which can be solved using a linear transformation method.
In practical application, however, the adjacent scanning line included angle α given by the rotary scanning platform has a deviation from the actual value, and if the value given by the system is used for camera calibration, the calibration result will be inaccurate, so the calibration should be divided into two steps, firstly, the α angle given by the system is used as an initial value, a camera parameter initial value is solved by adopting a direct linear transformation method, then, the reprojection error of the image point coordinate is used as a target function to optimize the camera parameters and the α angle, and finally, more accurate camera parameters are obtained, and a specific calibration flow is shown in fig. 3.
Example 1
The method comprises the steps of collecting images in a close-range photogrammetry three-dimensional control field by combining a linear array imaging camera with a rotary scanning platform, and calibrating the linear array imaging camera by using a control point.
TABLE 1 parameter settings for camera shots
Scanning angle/deg Integration time/ms Rotational speed/deg/s Number of lines scanned per second
53 200 0.177 5.04
Fig. 4 is an acquired close-range photogrammetry three-dimensional control field image, and the control point coordinates in the extracted image can be automatically extracted by using a computer vision method or extracted by using a method of manually extracting a mark point. A total of 53 marker points were extracted.
The method comprises the steps of taking a rotation included angle α set during camera shooting as a true value, calculating projected point coordinate values by using an image point coordinate calculation formula of a virtual frame-type image after projection transformation, and then calibrating a camera by adopting a direct linear transformation method according to the projected point coordinates and corresponding space point coordinates, wherein the table 2 is solved camera parameters, and the table 3 is a reprojection error.
TABLE 2 Camera parameters
Parameter(s) x0(pixel) y0(pixel) fx(pixel) fy(pixel) X0(mm) Y0(mm) Z0(mm)
DLT 353.58 804.85 1724.5 1689.08 2217.04 1296.18 138.942
TABLE 3 reprojection error
Figure BDA0001794902330000091
Considering that the inaccurate included angle α between adjacent pixels given by the system affects the result of camera calibration, the camera parameters obtained by the direct linear transformation method and the α angle given by the system are used as initial values to perform nonlinear optimization, and the optimized objective function is:
Figure BDA0001794902330000101
the Levenberg-Marquardt algorithm is adopted to carry out iteration to solve the optimal solution, and the optimization result is shown in the table 4:
TABLE 4 optimization results
Parameter(s) Original α/deg Optimized α/deg
Results 0.0353 0.0337
Carrying out reprojection error calculation by using the optimized parameters, substituting the solved camera parameters and the three-dimensional coordinates of the control points into an imaging model formula, calculating to obtain the coordinate values of the reprojected image points of the virtual frame-type image, then solving the reprojected image point coordinate values of the original linear array rotary scanning image by using an inverse projection formula, solving the Euclidean distance between the reprojected image point coordinate values and the extracted image point coordinate values to obtain reprojection errors, wherein the calculation result is shown in a table 5:
TABLE 5 optimized post-reprojection error
Figure BDA0001794902330000102
According to the calibration result, the error of the calibration result obtained by using the linear array image after projection transformation and the calibration method of the area array camera is less than 1 pixel, and the linear array image after projection transformation is more consistent with the characteristics of the area array image. The experimental result proves the accuracy of the projection transformation relation and also illustrates the feasibility of utilizing the projection transformation method to calibrate the rotary scanning line camera.

Claims (9)

1. A method for constructing a linear array rotary scanning camera imaging model based on projection transformation is disclosed, the imaging model determines the geometric relationship between the coordinates of image points on the original imaging plane of a linear array rotary scanning camera and the coordinates of corresponding image points on a virtual frame type image taking a tangent plane as an imaging plane according to the parameters of a camera rotary platform and the position relationship between the imaging plane of the linear array rotary scanning camera and the tangent plane of the linear array rotary scanning camera, so as to project the rotary scanning image into a frame type image, and the method specifically comprises the following steps:
step 1, selecting a plane tangent to a cylindrical projection plane of an original rotary scanning line array camera as a virtual frame type imaging plane of projection transformation;
step 2, establishing a pixel coordinate system, a camera coordinate system and a world coordinate system;
step 3, solving the size of the virtual frame-type image after projection transformation according to the geometric relation between the cylindrical surface and the tangent plane thereof and the size of the original linear array image;
step 4, calculating the image point coordinates of the corresponding points on the virtual frame-type image after projection transformation by using the imaging relation and the imaging positions of the space points on the two imaging planes and the known image point coordinates of the original linear array image;
and 5, deducing a back projection formula according to the forward projection relation in the step 3, namely, carrying out back projection on the image point coordinates on the virtual frame type image to obtain the corresponding image point coordinates on the rotary scanning line array image.
2. The projection transformation-based line array rotary scanning camera imaging model construction method of claim 1, characterized in that: in the step 1, the virtual frame type imaging plane is a plane tangent to the central line of the original rotary scanning line array image with the cylindrical projection plane.
3. The projection transformation-based line array rotary scanning camera imaging model construction method of claim 1, characterized in that: the size calculation formula of the virtual frame-type image after the projection transformation in the step 3 is as follows,
Figure FDA0002360714800000011
in the above formula, mrRepresenting the number of points in a scan line of the rotated scan image, α representing the angle between adjacent pixels of the rotated scan image, β representing the total scan angle of the rotated scan image, and mf,nf) Representing the size of the projectively transformed image.
4. The projection transformation-based line array rotary scanning camera imaging model construction method of claim 2, characterized in that: the image point coordinate calculation formula of the virtual frame-type image after projection transformation in the step 4 is as follows,
Figure FDA0002360714800000021
in the above formula, α represents the angle between adjacent pixels in the rotated scanned image, (x)r,yr) Representing point coordinates on the original rotational scan image; (x)f,yf) Representing the coordinates of points on the frame-type image after projection transformation;
Figure FDA0002360714800000022
representing the coordinates of principal points of the image on the original scanned image, wherein
Figure FDA0002360714800000023
Representing the reference scan line at the time of the projective transformation,
Figure FDA0002360714800000024
representing the image principal point coordinates of the original rotary scanning camera;
Figure FDA0002360714800000025
and the image principal point coordinates of the frame image after the projection transformation are shown.
5. The projection transformation-based line array rotary scanning camera imaging model construction method of claim 3, characterized in that: in step 5, the calculation formula of the image point coordinates of the linear array rotation scanning image after the inverse projection transformation is as follows:
Figure FDA0002360714800000026
in the above formula, α represents the angle between adjacent pixels in the rotated scanned image, (x)r,yr) Representing point coordinates on the original rotational scan image; (x)f,yf) Representing the coordinates of points on the frame-type image after projection transformation;
Figure FDA0002360714800000027
representing the reference scan line at the time of the projective transformation,
Figure FDA0002360714800000028
representing the image principal point coordinates of the original rotary scanning camera;
Figure FDA0002360714800000029
and the image principal point coordinates of the frame image after the projection transformation are shown.
6. A calibration method of a linear array rotary scanning camera based on projection transformation is characterized in that based on an imaging model in any claim of claims 1 to 5, a direct linear transformation method and a nonlinear optimization method are adopted to calibrate the linear array rotary scanning camera, and the calibration method specifically comprises the following steps:
the method comprises the following steps that firstly, a rotary scanning line array camera is used for collecting close-range photogrammetry three-dimensional control field images;
acquiring pixel coordinates of the calibration point in the three-dimensional control field image by adopting an automatic extraction method or a manual extraction method;
thirdly, carrying out projection transformation on the coordinates of the obtained original image calibration points by using an imaging model to obtain the coordinates of the calibration points on the virtual frame type image;
step four, solving camera parameters by using the calibration data after projection transformation and adopting a direct linear transformation method;
and step five, taking the camera parameters obtained in the step four and the included angle α between the adjacent scanning lines given by the camera as initial values, taking the minimized space point reprojection error as an optimization target, performing combined adjustment on the included angle of the scanning lines and the camera external parameters, and performing iterative optimization by adopting a nonlinear optimization method, thereby obtaining a final camera calibration result.
7. The method for calibrating a linear array rotary scanning camera based on projection transformation as claimed in claim 6, wherein: in step three, the world coordinates (X, Y, Z) of the three-dimensional space point corresponding to the calibration point and the pixel coordinates (X) of the point projected on the frame imagef,yf) The following formula is satisfied,
Figure FDA0002360714800000031
where λ represents a scale factor, (x)f,yf) Pixel coordinates representing image points, M represents a camera matrix, (X, Y, Z) world coordinate system coordinates representing three-dimensional space points, a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4Representing camera matrix elements.
8. The calibration method of the linear array rotary scanning camera based on the projection transformation as claimed in claim 6 or 7, characterized in that: the direct linear transformation formula in step four is,
Figure FDA0002360714800000032
wherein (x)f,yf) Pixel coordinates representing an image point, a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4Representing camera matrix elements, (X, Y, Z) representing three-dimensional spatial point world coordinate system coordinates.
9. The method for calibrating a linear array rotary scanning camera based on projection transformation as claimed in claim 6, wherein: the calculation formula for minimizing the spatial point reprojection error is as follows:
Figure FDA0002360714800000033
in the formula (I), the compound is shown in the specification,
Figure FDA0002360714800000034
and
Figure FDA0002360714800000035
a formula for calculating the coordinates of the image points of the linear array rotary scanning image after the inverse projection transformation, wherein N is the number of the calibration points,
Figure FDA0002360714800000036
representing virtual framed image point coordinates, x, estimated using camera matrix and world coordinates of the pointsi rAnd yi rThen it is indicated as rotatingThe measured coordinate values of the points on the scanned image are transferred.
CN201811052380.8A 2018-09-10 2018-09-10 Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation Active CN109272574B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811052380.8A CN109272574B (en) 2018-09-10 2018-09-10 Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811052380.8A CN109272574B (en) 2018-09-10 2018-09-10 Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation

Publications (2)

Publication Number Publication Date
CN109272574A CN109272574A (en) 2019-01-25
CN109272574B true CN109272574B (en) 2020-04-10

Family

ID=65188875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811052380.8A Active CN109272574B (en) 2018-09-10 2018-09-10 Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation

Country Status (1)

Country Link
CN (1) CN109272574B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919835B (en) * 2019-03-20 2022-07-26 湖北省电力勘测设计院有限公司 Oversea power line selection method based on multi-source satellite remote sensing image joint adjustment
CN110548289B (en) * 2019-09-18 2023-03-17 网易(杭州)网络有限公司 Method and device for displaying three-dimensional control
CN111612692B (en) * 2020-04-24 2023-10-24 西安理工大学 Cell image reconstruction method based on double-linear-array scanning imaging system
CN111561936A (en) * 2020-05-19 2020-08-21 中国科学院微小卫星创新研究院 Precise processing method and system for rotating large-breadth optical satellite
CN111798476B (en) * 2020-06-08 2023-10-20 国网江西省电力有限公司电力科学研究院 Extraction method for conductive arm axis of high-voltage isolating switch
CN112489135B (en) * 2020-11-27 2024-04-19 深圳市深图医学影像设备有限公司 Calibration method of virtual three-dimensional face reconstruction system
CN112989506A (en) * 2021-01-26 2021-06-18 中国科学院上海技术物理研究所 Parameter design method of area array rotary scanning space camera imaging system
CN113962853B (en) * 2021-12-15 2022-03-15 武汉大学 An automatic and precise calculation method for rotating line scan image pose
CN117956093B (en) * 2024-03-27 2024-06-18 深圳市云希谷科技有限公司 Scanning pen text scanning method, scanning pen text scanning device, medium and computer equipment
CN117994359B (en) * 2024-04-07 2024-06-11 广东工业大学 A linear array camera calibration method based on auxiliary camera and related device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5355234A (en) * 1993-07-31 1994-10-11 Samsung Electronics Co., Ltd. Image scanning apparatus
CN103018738A (en) * 2011-09-20 2013-04-03 中国科学院电子学研究所 Microwave three-dimensional imaging method based on rotary antenna array
CN104182969A (en) * 2014-08-08 2014-12-03 河南科技大学 Internal and external parameter calibration method of single-scanline camera
CN105046715A (en) * 2015-09-16 2015-11-11 北京理工大学 Space analytic geometry-based line-scan camera calibration method
CN106982370A (en) * 2017-05-03 2017-07-25 武汉科技大学 A kind of camera high-precision calibration scaling board of many line-scan digital camera detecting systems and the method for realizing calibration
CN107314742A (en) * 2017-05-31 2017-11-03 合肥工业大学 A kind of rotary optical chromatographic imaging system and imaging method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4086766B2 (en) * 2003-11-28 2008-05-14 キヤノン株式会社 Process cartridge and process cartridge assembling method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5355234A (en) * 1993-07-31 1994-10-11 Samsung Electronics Co., Ltd. Image scanning apparatus
CN103018738A (en) * 2011-09-20 2013-04-03 中国科学院电子学研究所 Microwave three-dimensional imaging method based on rotary antenna array
CN104182969A (en) * 2014-08-08 2014-12-03 河南科技大学 Internal and external parameter calibration method of single-scanline camera
CN105046715A (en) * 2015-09-16 2015-11-11 北京理工大学 Space analytic geometry-based line-scan camera calibration method
CN106982370A (en) * 2017-05-03 2017-07-25 武汉科技大学 A kind of camera high-precision calibration scaling board of many line-scan digital camera detecting systems and the method for realizing calibration
CN107314742A (en) * 2017-05-31 2017-11-03 合肥工业大学 A kind of rotary optical chromatographic imaging system and imaging method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
一种车载线阵激光扫描测量系统几何标定模型;张成;《测绘科学技术学报》;20131215;第61-613页 *

Also Published As

Publication number Publication date
CN109272574A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN109272574B (en) Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN111369630A (en) A method of multi-line lidar and camera calibration
CN105931222B (en) The method for realizing high-precision camera calibration with low precision two dimensional surface target
CN108288294A (en) A kind of outer ginseng scaling method of a 3D phases group of planes
CN105184857B (en) Monocular vision based on structure light ranging rebuilds mesoscale factor determination method
CN113920205B (en) Calibration method of non-coaxial camera
WO2021004416A1 (en) Method and apparatus for establishing beacon map on basis of visual beacons
Larsson et al. Revisiting radial distortion absolute pose
CN110874854B (en) Camera binocular photogrammetry method based on small baseline condition
CN102314674B (en) Registering method for data texture image of ground laser radar
CN103150724B (en) Segmented model-based camera calibration method
CN112200203B (en) Matching method of weak correlation speckle images in oblique field of view
CN101586943B (en) Method for calibrating structure light vision transducer based on one-dimensional target drone
CN109272555B (en) A method of obtaining and calibrating external parameters of RGB-D camera
CN113962853B (en) An automatic and precise calculation method for rotating line scan image pose
CN113706635B (en) Long-focus camera calibration method based on point feature and line feature fusion
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
CN113205603A (en) Three-dimensional point cloud splicing reconstruction method based on rotating platform
CN115861445B (en) Hand-eye calibration method based on three-dimensional point cloud of calibration plate
CN111968182B (en) Calibration method for nonlinear model parameters of binocular camera
CN113947638A (en) Image orthorectification method for fisheye camera
CN112465849A (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
CN106530342B (en) Full-view image generation method is measured using what laser point cloud was aided in
CN113963067B (en) A calibration method using a small target to calibrate a vision sensor with a large field of view

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant