Disclosure of Invention
The invention provides a camera joint external parameter calibration method using a laser radar, a corresponding calibration system and a storage medium, which are used for solving the technical problems in the prior art: the reflection intensity of the laser radar is used to accurately extract laser point cloud feature points, so that the feature points of the calibration plate are extracted more accurately.
An embodiment of the present application provides a camera combined external parameter calibration method using a laser radar, including the following steps:
S1: acquiring image information of a checkerboard, and calculating the 2D positions of the checkerboard feature points based on the image information;
S2: acquiring point cloud data, obtaining candidate point cloud clusters from the point cloud data, and generating checkerboard template feature points from a checkerboard template;
S3: obtaining the checkerboard calibration plate point cloud from the candidate point cloud clusters and the checkerboard template by using the point cloud reflection intensity information, calculating a transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template, and obtaining the 3D positions of the checkerboard feature points according to the transformation matrix;
S4: acquiring pose information of the camera according to the 2D positions and the 3D positions of the checkerboard feature points, so as to realize external parameter calibration of the camera.
According to this scheme, when the 3D positions of the checkerboard feature points are extracted, the difference in reflectivity of the point cloud over the checkerboard is exploited and the whole point cloud is matched against the checkerboard template. This improves the matching precision with the template and therefore the accuracy of the feature points; at the same time, template matching yields the boundary points of all the black and white cells, so more feature points are obtained, which further improves the precision of the joint external parameter calibration of the laser radar and the camera.
Preferably, calculating the 2D positions of the checkerboard feature points based on the image information includes:
converting the image information into a grayscale image, and extracting all the corner points on the checkerboard calibration plate as the 2D positions of the checkerboard feature points;
the corner points on the checkerboard calibration plate may be extracted with at least one of: a sub-pixel corner detection method, the Harris corner detection algorithm, and/or the Shi-Tomasi corner detection algorithm.
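As a minimal sketch (assuming OpenCV's Python bindings and a hypothetical image file "board.png", neither of which is specified in the original), corner candidates can be found with the Shi-Tomasi detector and refined to sub-pixel accuracy:

```python
import cv2

# Hypothetical input path; the original does not name a file.
gray = cv2.cvtColor(cv2.imread("board.png"), cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corner detection (Harris can be selected with useHarrisDetector=True).
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=10, useHarrisDetector=False)

# Sub-pixel refinement around each detected corner.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
print(corners.reshape(-1, 2))  # 2D positions of the candidate feature points
```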
Preferably, obtaining the candidate point cloud clusters from the point cloud data includes:
constructing a KD tree structure from the point cloud data, calculating the point cloud density around each point, filtering by density, and dividing the filtered point cloud into candidate point cloud clusters;
wherein a first point cloud density threshold is adjusted according to the actual conditions; points whose point cloud density is greater than the first density threshold are kept, and points whose density is less than the first density threshold are filtered out;
the point cloud in sparse regions may contain noise or invalid data; by filtering out the points whose density is smaller than the threshold, the influence of noise is reduced and the quality and accuracy of the point cloud are improved.
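A minimal density-filtering sketch, assuming SciPy's cKDTree (version 1.3 or later for return_length) and an illustrative search radius and neighbour count that are not taken from the original:

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_by_density(points, radius=0.05, min_neighbors=10):
    """Keep only points whose local density (neighbour count within `radius`)
    reaches `min_neighbors`; sparse points are treated as noise and removed."""
    tree = cKDTree(points)  # KD tree over the raw point cloud
    counts = tree.query_ball_point(points, r=radius, return_length=True)
    return points[np.asarray(counts) >= min_neighbors]

# Usage: `cloud` would be an (N, 3) array of lidar points.
# dense_cloud = filter_by_density(cloud, radius=0.05, min_neighbors=10)
```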
Preferably, generating the checkerboard template feature points from the checkerboard template includes:
obtaining the real size of the checkerboard calibration plate and the size of each cell, and generating a checkerboard template;
obtaining a checkerboard template image from the checkerboard template, extracting feature points from it, and obtaining the checkerboard template feature points by matching and optimizing those feature points.
Preferably, obtaining the checkerboard calibration plate point cloud from the candidate point cloud clusters and the checkerboard template by using the point cloud reflection intensity information specifically includes:
traversing all the candidate point cloud clusters and, for each cluster, calculating the optimal transformation matrix T between the checkerboard template and the cluster together with the corresponding matching error L; the cluster with the smallest matching error L is taken as the checkerboard calibration plate point cloud;
in some embodiments, for each point cloud cluster, an optimal transformation matrix T between the point cloud cluster and the checkerboard template is calculated using an ICP algorithm or other point cloud registration algorithm;
the optimal transformation matrix T is obtained by minimizing a matching error model, and the matching error L is the sum of the alignment degree L1 between the point cloud cluster and the template cells and the alignment degree L2 between the point cloud cluster and the overall shape of the template;
wherein, because the laser point cloud has different reflectivities on the checkerboard, the white cells have high reflectivity and must align with the white cells of the checkerboard template, while the black cells have low reflectivity and must align with the black cells of the template; the better this alignment, the smaller L1 is; L2 reflects the alignment between the point cloud cluster and the overall shape of the template, and the better that alignment, the smaller L2 is;
the alignment degree L1 with the template and the alignment degree L2 between the point cloud cluster and the overall shape of the template are calculated from the Euclidean distance d between each point Pi, obtained by applying the optimal transformation matrix T to the original point cloud P, and its nearest point Ci among all the checkerboard template feature points.
Preferably, the obtaining the checkerboard calibration plate point cloud by using the point cloud reflection intensity information further includes:
the alignment degree L1 with the template and the alignment degree L2 between the point cloud cluster and the overall shape of the template are also calculated using a binary function δ_in, where δ_in is 1 when the point Pi lies within the template range S and 0 when it does not.
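The closed-form expressions for L1 and L2 are not reproduced in the text; one plausible form, consistent with the definitions above (transformed point Pi, nearest template feature point Ci, distance d, indicator δ_in) and offered purely as an assumption, is:

```latex
P_i = T\,p_i, \qquad d(P_i, C_i) = \lVert P_i - C_i \rVert_2,
\qquad
L_1 = \sum_i \delta_{in}(P_i)\, d(P_i, C_i),
\qquad
L_2 = \sum_i \bigl(1 - \delta_{in}(P_i)\bigr)\, d(P_i, C_i),
\qquad
L = L_1 + L_2
```

where, for L1, the nearest point Ci would be searched only among template feature points whose cell colour matches the point's reflection intensity (white cells for high-intensity points, black cells for low-intensity points).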
Preferably, calculating the transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template and obtaining the 3D positions of the checkerboard feature points according to the transformation matrix specifically includes:
acquiring an inverse matrix of the transformation matrix;
loading the 2D coordinates of the checkerboard template feature points and the checkerboard point cloud data;
performing a projective transformation on the checkerboard template feature points with the inverse matrix; during the projective transformation, each feature point coordinate is expressed as a homogeneous coordinate, multiplied by the inverse of the transformation matrix to obtain the projected homogeneous coordinate, and the first three elements of the projected homogeneous coordinate are extracted to give the three-dimensional position of the feature point.
Preferably, obtaining pose information of the camera according to the 2D positions and the 3D positions of the checkerboard feature points so as to realize external parameter calibration of the camera specifically includes:
the pose information of the camera comprises a rotation matrix and a translation vector of the camera;
the visual field of the camera can be aligned with the world coordinate system through the external parameter calibration of the camera, so that the conversion from the camera coordinate system to the world coordinate system is realized;
acquiring pose information of the camera with a PnP algorithm according to the 2D positions and the 3D positions of the checkerboard feature points, so as to realize external parameter calibration of the camera;
the basic principle of the PnP algorithm is to estimate the pose information of the camera, i.e. the external parameters, from known three-dimensional space points and their corresponding two-dimensional image points;
the PnP algorithm at least comprises: EPnP algorithm, DLS algorithm, AP3P algorithm, and UPnP algorithm.
A second aspect of embodiments of the present application provides a camera-combined external parameter calibration system using a lidar, the system comprising:
the device comprises an acquisition module, a calculation module, a judgment module and a processing module;
the acquisition module is used for acquiring image information, point cloud data, the real size of the checkerboard calibration plate and the size of each grid;
the computing module at least comprises a first computing module, a second computing module, a third computing module and a fourth computing module; wherein,
the first calculation module is used for converting the image information into a grayscale image and extracting all the corner points on the checkerboard calibration plate;
the second calculation module is used for constructing a KD tree structure by using the point cloud data and calculating the point cloud density around each point;
the third calculation module is used for obtaining the point cloud of the checkerboard calibration plate by utilizing the point cloud reflection intensity information according to the candidate point cloud clusters and the checkerboard templates, and calculating a transformation matrix between the point cloud of the checkerboard calibration plate and the checkerboard templates;
the fourth calculation module is used for obtaining an inverse matrix of the transformation matrix to obtain 3D positions of characteristic points of the checkerboard;
the judging module is used for screening out the points whose point cloud density is greater than the first point cloud density threshold;
the processing module is used for realizing external parameter calibration of the camera according to pose information of the camera.
A third aspect of the embodiments of the present application provides a storage medium, which is a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements a camera joint external parameter calibration method using a laser radar as described above.
In summary, the present application provides a method, a system and a storage medium for camera joint external parameter calibration using a laser radar: the image information of a checkerboard is acquired and the 2D positions of the checkerboard feature points are calculated from it; point cloud data are acquired, candidate point cloud clusters are obtained from the point cloud data, and checkerboard template feature points are generated from the checkerboard template; the checkerboard calibration plate point cloud is obtained from the candidate point cloud clusters and the checkerboard template by using the point cloud reflection intensity information, a transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template is calculated, and the 3D positions of the checkerboard feature points are obtained according to the transformation matrix; pose information of the camera is then acquired according to the 2D positions and the 3D positions of the checkerboard feature points, so as to realize external parameter calibration of the camera.
Compared with the prior art, the application has the following technical effects:
By utilizing the reflection intensity of the laser radar, i.e. the difference in point cloud reflectivity over the checkerboard, the whole point cloud is matched against the checkerboard template to obtain more accurate feature points. The external parameter calibration method can also be used with laser radars of various scanning modes, so it has wider applicability, improves the matching precision and enhances robustness, making the point cloud data more intuitive and easier to understand.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Embodiment 1:
The camera joint external parameter calibration method using a laser radar, as shown in Fig. 1, comprises the following steps:
S1: acquiring the image information of the checkerboard, and calculating the 2D positions of the checkerboard feature points based on the image information.
Preferably, the image information is converted into a grayscale image, and all the corner points on the checkerboard calibration plate are extracted as the 2D positions of the checkerboard feature points, wherein the gray value = 0.2989 × red channel + 0.5870 × green channel + 0.1140 × blue channel;
the corner points on the checkerboard calibration plate may be extracted with at least one of: a sub-pixel corner detection method, the Harris corner detection algorithm, and/or the Shi-Tomasi corner detection algorithm.
As shown in Fig. 2, in an embodiment, the image information is acquired and converted into a grayscale image using OpenCV (an image processing library); a code sketch follows the steps below;
S100: converting the read color image into a grayscale image using the cv2.cvtColor() function;
S101: finding the corner points of the checkerboard calibration plate using the cv2.findChessboardCorners() function and storing the result in the corners variable;
S102: extracting sub-pixel-level corner coordinates using the cv2.cornerSubPix() function to improve the accuracy of the corners;
S103: drawing the detected corner points on the image using the cv2.drawChessboardCorners() function and printing out the 2D positions of the corner points.
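A condensed sketch of steps S100 to S103, assuming a board with a 9 × 6 inner-corner layout and an image file "checkerboard.png" (both assumptions not stated in the original):

```python
import cv2

img = cv2.imread("checkerboard.png")   # hypothetical input image
pattern = (9, 6)                        # assumed inner-corner layout

# S100: convert to grayscale.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# S101: locate the checkerboard corners.
found, corners = cv2.findChessboardCorners(gray, pattern)

if found:
    # S102: refine the corners to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

    # S103: draw the detected corners and print their 2D positions.
    cv2.drawChessboardCorners(img, pattern, corners, found)
    print(corners.reshape(-1, 2))
```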
S2: acquiring point cloud data, obtaining candidate point cloud clusters from the point cloud data, and generating checkerboard template feature points from the checkerboard template.
Preferably, obtaining the candidate point cloud clusters from the point cloud data specifically includes:
constructing a KD tree structure from the point cloud data, calculating the point cloud density around each point, filtering by density, and dividing the filtered point cloud into candidate point cloud clusters.
In an embodiment, the point cloud is segmented into a plurality of candidate point cloud clusters by a region growing method: the filtered point cloud data are loaded with the pcl::io::loadPCDFile() function of PCL, the region growing parameters are set (including a smoothness threshold, a curvature threshold, a minimum cluster size and a maximum cluster size), a region growing object is created and the filtered point cloud data are set as its input; the point cloud normals are then estimated with a KD tree and set as the input normals of the region growing object; after the region growing parameters are set, the reg.extract() function is called to perform the region growing segmentation, the result is stored in clusters, and finally the number of clusters is output and the clusters are traversed.
Further, the first point cloud density threshold is adjusted according to the actual conditions; points whose point cloud density is greater than the first density threshold are kept, and points whose density is less than the first density threshold are filtered out;
wherein the point cloud in sparse regions may contain noise or invalid data; by filtering out the points whose density is smaller than the threshold, the influence of noise is reduced and the quality and accuracy of the point cloud are improved.
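The embodiment above clusters with PCL's region growing in C++; an equivalent clustering step can be sketched in Python with DBSCAN, used here only as a stand-in for the region growing algorithm, with illustrative parameter values:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_clusters(points, eps=0.05, min_points=50):
    """Group the density-filtered cloud into candidate point cloud clusters.
    DBSCAN is a substitute for PCL region growing; eps and min_points are
    illustrative and would need tuning for a real lidar."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    return [points[labels == k] for k in range(labels.max() + 1)]

# clusters = candidate_clusters(dense_cloud)
# print(len(clusters), "candidate point cloud clusters")
```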
Preferably, generating the checkerboard template feature points from the checkerboard template includes:
obtaining the real size of the checkerboard calibration plate and the size of each cell, and generating a checkerboard template;
obtaining a checkerboard template image from the checkerboard template, extracting feature points from it, and obtaining the checkerboard template feature points by matching and optimizing those feature points.
In one embodiment, the actual dimensions of the checkerboard calibration plate are measured, including the size of the whole calibration plate and the size of each cell. A checkerboard template is generated from the measured plate size and cell size; the template can be a two-dimensional array that records the position and coordinates of each cell in the checkerboard. The feature point positions are then determined: for each cell, one or more corner points are selected as feature points. Typically the four corner points of the cell are chosen, but other corner points or feature points inside the cell may also be used. Each feature point is given a unique identifier in the checkerboard template, such as its row and column index. For each feature point, its position (row and column index) in the checkerboard template is converted into two-dimensional image coordinates (x, y), which can be obtained by multiplying by the cell size and adding an offset for each cell on the checkerboard. The result is a set of checkerboard template feature points, where each feature point has two-dimensional image coordinates (x, y) and a corresponding row and column index, and may be expressed in the form (row, col, x, y).
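A minimal sketch of this template generation, assuming a board of rows × cols cells with side length `cell` in metres (illustrative values, not taken from the original):

```python
import numpy as np

def checkerboard_template(rows=6, cols=9, cell=0.08):
    """Return template feature points as (row, col, x, y) rows: the cell
    corner positions in the calibration plate's own plane."""
    pts = []
    for r in range(rows + 1):
        for c in range(cols + 1):
            pts.append((r, c, c * cell, r * cell))  # x = col * cell, y = row * cell
    return np.array(pts)

# template = checkerboard_template()
# print(template.shape)  # ((rows + 1) * (cols + 1), 4)
```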
S3: obtaining the checkerboard calibration plate point cloud from the candidate point cloud clusters and the checkerboard template by using the point cloud reflection intensity information, calculating a transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template, and obtaining the 3D positions of the checkerboard feature points according to the transformation matrix.
The step S3 specifically comprises the following steps:
traversing all the candidate point cloud clusters and, for each cluster, calculating the optimal transformation matrix T between the checkerboard template and the cluster together with the corresponding matching error L; the cluster with the smallest matching error L is taken as the checkerboard calibration plate point cloud; the optimal transformation matrix T is obtained by minimizing a matching error model, and the matching error L is the sum of the alignment degree L1 between the point cloud cluster and the template cells and the alignment degree L2 between the point cloud cluster and the overall shape of the template;
as shown in Fig. 3, the laser point cloud has different reflectivities on the checkerboard: the white cells have high reflectivity and must align with the white cells of the checkerboard template, while the black cells have low reflectivity and must align with the black cells of the template; the better this alignment, the smaller L1 is; L2 reflects the alignment between the point cloud cluster and the overall shape of the template, and the better that alignment, the smaller L2 is;
L1 and L2 are calculated from the Euclidean distance between each point Pi, obtained by applying the transformation matrix T to the original point cloud P, and its nearest point Ci among all the checkerboard template feature points;
the alignment degree L1 with the template and the alignment degree L2 between the point cloud cluster and the overall shape of the template are also calculated using a binary function δ_in, where δ_in is 1 when the point Pi lies within the template range S and 0 when it does not.
In one embodiment, the optimal transformation matrix T_model_org is solved by constructing a least-squares problem and minimizing the matching error with a Gauss-Newton iterative algorithm.
The matching error is the sum of the two alignment degrees:
L = L1 + L2,
where L1 is the alignment degree between the point cloud cluster and the template cells and L2 is the alignment degree between the point cloud cluster and the overall shape of the template; each point Pi is obtained by applying the transformation matrix T to the original point cloud P, and d(Pi, Ci) is the Euclidean distance between Pi and its nearest point Ci among all the checkerboard template feature points.
The point cloud cluster that minimizes the matching error is taken as the calibration plate point cloud, and the transformation matrix T_model_org between the calibration plate and the template is calculated for it. The template feature points Cj are then substituted, together with the transformation matrix T_model_org, into the inverse transformation described below, and the point cloud feature points, i.e. the 3D positions of the checkerboard calibration plate feature points, are obtained.
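The explicit expressions for L1 and L2 appear only as formula images in the original and are not reproduced here; the sketch below therefore implements one plausible reading (an intensity-aware nearest-neighbour term for L1 and an outside-the-outline penalty for L2) and selects the cluster with the smallest error. The intensity split at the median, the use of SciPy's cKDTree, and the helper icp() in the usage comment are all assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def matching_error(cluster_xyz, intensity, T, white_xy, black_xy, board_size):
    """One plausible form of L = L1 + L2 for a candidate cluster under pose T.
    white_xy / black_xy: template feature points (plate-plane coordinates) of
    the white / black cells; board_size: (width, height) of the plate."""
    # Transform the cluster into the template (plate) frame and keep x, y.
    p = (T[:3, :3] @ cluster_xyz.T).T + T[:3, 3]
    xy, w, h = p[:, :2], board_size[0], board_size[1]

    # delta_in: 1 if the transformed point falls inside the plate outline.
    inside = (xy[:, 0] >= 0) & (xy[:, 0] <= w) & (xy[:, 1] >= 0) & (xy[:, 1] <= h)

    # L1: colour-aware alignment, splitting points by reflection intensity.
    is_white = intensity > np.median(intensity)   # assumed intensity split
    d_w = cKDTree(white_xy).query(xy[is_white & inside])[0]
    d_b = cKDTree(black_xy).query(xy[~is_white & inside])[0]
    l1 = np.concatenate([d_w, d_b]).mean()

    # L2: overall-shape alignment, penalising points outside the outline.
    l2 = np.count_nonzero(~inside) / len(xy)
    return l1 + l2

# The cluster with the smallest error would be taken as the calibration plate cloud;
# c.xyz, c.intensity and icp() are hypothetical stand-ins for the per-cluster data
# and the ICP / Gauss-Newton pose estimate:
# best = min(clusters, key=lambda c: matching_error(c.xyz, c.intensity, icp(c), W, B, SIZE))
```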
Preferably, calculating the transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template and obtaining the 3D positions of the checkerboard feature points according to the transformation matrix specifically includes:
acquiring an inverse matrix of the transformation matrix;
loading the 2D coordinates of the checkerboard template feature points and the checkerboard point cloud data;
performing a projective transformation on the checkerboard template feature points with the inverse matrix; during the projective transformation, each feature point coordinate is expressed as a homogeneous coordinate, multiplied by the inverse of the transformation matrix to obtain the projected homogeneous coordinate, and the first three elements of the projected homogeneous coordinate are extracted to give the three-dimensional position of the feature point.
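A small sketch of this projection step, assuming a 4 × 4 homogeneous transformation T_model_org estimated as above and template feature points lying in the plate plane at z = 0 (an assumption consistent with, but not stated by, the original):

```python
import numpy as np

def template_points_to_lidar(template_xy, T):
    """Map template feature points into the lidar frame with the inverse of T.
    Each (x, y) is lifted to the homogeneous coordinate (x, y, 0, 1), multiplied
    by inv(T), and the first three elements give the 3D feature position."""
    T_inv = np.linalg.inv(T)
    homo = np.column_stack([template_xy,
                            np.zeros(len(template_xy)),
                            np.ones(len(template_xy))])
    projected = (T_inv @ homo.T).T
    return projected[:, :3]

# feature_3d = template_points_to_lidar(template[:, 2:4], T_model_org)
```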
S4: acquiring pose information of the camera according to the 2D positions and the 3D positions of the checkerboard feature points, so as to realize external parameter calibration of the camera.
Preferably, the pose information of the camera includes a rotation matrix and a translation vector of the camera.
Wherein the rotation matrix is a 3×3 matrix that represents the rotation from the camera coordinate system to the world coordinate system;
the translation vector is a 3-dimensional vector that represents the position of the camera coordinate system origin in the world coordinate system;
using the rotation matrix and the translation vector, points in the camera coordinate system can be converted into the world coordinate system, or points in the world coordinate system can be projected into the camera coordinate system.
Preferably, pose information of the camera is obtained with a PnP algorithm according to the 2D positions and the 3D positions of the checkerboard feature points, so as to realize external parameter calibration of the camera.
Preferably, the PnP algorithm at least includes: EPnP algorithm, DLS algorithm, AP3P algorithm, and UPnP algorithm.
As shown in Fig. 4, in an embodiment, pose information of the camera is obtained with a PnP algorithm from the 2D positions and the 3D positions of the checkerboard feature points to realize external parameter calibration of the camera; a code sketch follows the steps below.
S401: collecting the 3D positions of the checkerboard feature points with known three-dimensional coordinates, capturing checkerboard images with the camera, and extracting the corresponding 2D positions of the checkerboard feature points, ensuring that the feature points of the checkerboard calibration plate are completely captured in every pose;
S402: for each pair of corresponding checkerboard feature points (2D-3D pairs), constructing a matching relationship;
S403: selecting a suitable PnP algorithm to estimate the camera pose; common algorithms include the EPnP algorithm, the DLS algorithm, the AP3P algorithm, the UPnP algorithm, and the like;
S404: providing the 2D pixel coordinates (image coordinates) of the feature points and their corresponding 3D spatial coordinates (in the world coordinate system) to the selected PnP algorithm;
S405: solving the rotation matrix and translation vector of the camera with the selected PnP algorithm by minimizing the reprojection error;
S406: obtaining the camera pose information, comprising the rotation matrix and the translation vector.
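Steps S401 to S406 can be sketched with OpenCV's solvePnP; the intrinsic matrix K and the distortion coefficients are assumed to come from a prior intrinsic calibration, and the placeholder data below exist only to make the sketch self-contained (the original specifies none of these values):

```python
import cv2
import numpy as np

# Placeholder data: in practice obj_pts come from the lidar/template step (S3),
# img_pts from the image corner step (S1), and K / dist from an intrinsic calibration.
obj_pts = np.random.rand(12, 3)
img_pts = np.random.rand(12, 2) * 640.0
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# S403-S405: estimate the camera pose, here with the EPnP solver.
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist, flags=cv2.SOLVEPNP_EPNP)

# S406: convert the rotation vector to a rotation matrix.
R, _ = cv2.Rodrigues(rvec)
print("rotation matrix:\n", R)
print("translation vector:\n", tvec.ravel())
```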
Embodiment 2:
As shown in Fig. 5, the present application further provides a camera joint external parameter calibration system using a laser radar, the system comprising:
the device comprises an acquisition module, a calculation module, a judgment module and a processing module;
the acquisition module is used for acquiring image information, point cloud data, the real size of the checkerboard calibration plate and the size of each grid;
the computing module at least comprises a first computing module, a second computing module, a third computing module and a fourth computing module; wherein,
the first calculation module is used for converting the image information into a grayscale image and extracting all the corner points on the checkerboard calibration plate;
the second calculation module is used for constructing a KD tree structure by using the point cloud data and calculating the point cloud density around each point;
the third calculation module is used for obtaining the point cloud of the checkerboard calibration plate by utilizing the point cloud reflection intensity information according to the candidate point cloud clusters and the checkerboard templates, and calculating a transformation matrix between the point cloud of the checkerboard calibration plate and the checkerboard templates;
the fourth calculation module is used for obtaining an inverse matrix of the transformation matrix to obtain 3D positions of characteristic points of the checkerboard;
the judging module is used for screening out the points whose point cloud density is greater than the first point cloud density threshold;
the processing module is used for realizing external parameter calibration of the camera according to pose information of the camera.
Embodiment 3:
The present application also provides a storage medium, which is a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements a camera joint external parameter calibration method using a laser radar as described above.
In summary, the present application provides a method, a system and a storage medium for camera joint external parameter calibration using a laser radar: the image information of a checkerboard is acquired and the 2D positions of the checkerboard feature points are calculated from it; point cloud data are acquired, candidate point cloud clusters are obtained from the point cloud data, and checkerboard template feature points are generated from the checkerboard template; the checkerboard calibration plate point cloud is obtained from the candidate point cloud clusters and the checkerboard template by using the point cloud reflection intensity information, a transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template is calculated, and the 3D positions of the checkerboard feature points are obtained according to the transformation matrix; pose information of the camera is then acquired according to the 2D positions and the 3D positions of the checkerboard feature points, so as to realize external parameter calibration of the camera.
By utilizing the reflection intensity of the laser radar, i.e. the difference in point cloud reflectivity over the checkerboard, the whole point cloud is matched against the checkerboard template to obtain more accurate feature points. The external parameter calibration method can also be used with laser radars of various scanning modes, so it has wider applicability, improves the matching precision and enhances robustness, making the point cloud data more intuitive and easier to understand.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules according to embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application may also be embodied as device programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.