
CN117252931A - Camera combined external parameter calibration method and system using laser radar and storage medium - Google Patents

Camera combined external parameter calibration method and system using laser radar and storage medium

Info

Publication number
CN117252931A
CN117252931A
Authority
CN
China
Prior art keywords
checkerboard
point cloud
template
camera
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311244551.8A
Other languages
Chinese (zh)
Inventor
贾楠
杨鑫
叶晟
刘怡初
彭登富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Kafrog Technology Co ltd
Original Assignee
Chengdu Kafrog Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Kafrog Technology Co ltd filed Critical Chengdu Kafrog Technology Co ltd
Priority to CN202311244551.8A priority Critical patent/CN117252931A/en
Publication of CN117252931A publication Critical patent/CN117252931A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a camera joint external parameter calibration method, system and storage medium using a laser radar. Image information of a checkerboard is acquired, and the 2D positions of the checkerboard feature points are calculated from it; point cloud data are acquired, candidate point cloud clusters are obtained from the point cloud data, and checkerboard template feature points are generated from a checkerboard template; according to the candidate point cloud clusters and the checkerboard template, the checkerboard calibration plate point cloud is obtained using the point cloud reflection intensity information, a transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template is calculated, and the checkerboard feature points obtained from this transformation matrix serve as the 3D positions of the checkerboard feature points; finally, pose information of the camera is obtained from the 2D positions and 3D positions of the checkerboard feature points to realize external parameter calibration of the camera. By exploiting the laser radar reflection intensity and the difference in point cloud reflectivity across the checkerboard, all points are matched against the checkerboard template, yielding more accurate feature points, and the external parameter calibration method has wider applicability.

Description

Camera combined external parameter calibration method and system using laser radar and storage medium
Technical Field
The invention belongs to the technical field of joint calibration of sensors, and particularly relates to a joint external parameter calibration method, system and storage medium for a camera using a laser radar.
Background
Joint calibration of sensors is an essential link in an automatic driving perception system and a prerequisite for subsequent sensor fusion: data from multiple sensors are fused, and the sensors undergo a unified coordinate transformation and calibration to achieve accurate alignment and registration of the sensor data. Joint external parameter calibration of a laser radar (lidar) and a camera aims to determine the relative position and attitude between the two, so that their data can be accurately aligned and fused. The technology is widely applied in fields such as robot navigation, augmented reality and virtual reality. Sensor extrinsic calibration is the process of determining the relative relationship between sensors and the world coordinate system; between sensors there may be translation, rotation and attitude transformations. External parameter calibration can use methods such as feature point matching, attitude estimation or system calibration to obtain the relative pose transformation between the sensors.
Joint external parameter calibration of lidar and camera generally uses a static calibration method, a calibration method based on static scenes in which the lidar and camera are fixed in stable positions while a set of images containing a calibration plate and the corresponding lidar point cloud data are collected. Feature points are extracted from the image and the point cloud, correspondences between the two are established, and the external parameters between the camera and the lidar are computed with an optimization algorithm. However, because such methods rely on the depth discontinuity of the laser point cloud at the edge of the calibration plate, point cloud noise makes the fitted edge geometry inaccurate, so the extracted calibration plate feature points are inaccurate, which degrades the joint external parameter calibration of the lidar and camera. In addition, some existing methods require a ring ID (an identifier of the scan line to which a point belongs) for joint external parameter calibration; since different lidar models have different scanning patterns and some models do not provide a ring ID, those methods cannot be applied to them, so their universality is low.
Disclosure of Invention
The invention provides a camera joint external parameter calibration method, system and storage medium using a laser radar to solve the above technical problems in the prior art: the laser point cloud feature points are extracted accurately by exploiting the reflection intensity of the laser radar, realizing more accurate feature point extraction on the calibration plate.
An embodiment of the present application provides a camera combined external parameter calibration method using a laser radar, including the following steps:
s1: acquiring image information of a checkerboard, and calculating to obtain 2D positions of feature points of the checkerboard based on the image information;
s2: acquiring point cloud data, acquiring candidate point cloud clusters according to the point cloud data, and generating characteristic points of a checkerboard template through the checkerboard template;
s3: acquiring the point cloud of the checkerboard calibration plate by utilizing the point cloud reflection intensity information according to the candidate point cloud clusters and the checkerboard templates, calculating a transformation matrix between the point cloud of the checkerboard calibration plate and the checkerboard templates, and acquiring the checkerboard feature points as 3D positions of the checkerboard feature points according to the transformation matrix;
s4: and acquiring pose information of the camera according to the 2D positions and 3D positions of the checkerboard feature points to realize external parameter calibration of the camera.
According to the scheme, when the 3D positions of the checkerboard feature points are extracted, the difference in point cloud reflectivity across the checkerboard is exploited and all points are matched against the checkerboard template, which improves matching precision and hence feature point accuracy; meanwhile, the template matching yields the boundary points of all black and white cells, producing more feature points, which further improves the precision of joint external parameter calibration between the lidar and the camera.
Preferably, the calculating to obtain the 2D position of the checkerboard feature point based on the image information includes:
converting the image information into a gray level image, and extracting all corner points on the checkerboard calibration plate as 2D positions of the checkerboard feature points;
the method for extracting all the corner points on the checkerboard calibration plate at least comprises the following steps: a sub-pixel level corner detection method, a Harris corner detection algorithm and/or a Shi-Tomasi corner detection algorithm.
Preferably, the obtaining the candidate point cloud cluster according to the point cloud data includes:
constructing a KD tree structure by using the point cloud data, calculating the point cloud density around each point, screening, and dividing the screened point cloud into candidate point cloud clusters;
the method comprises the steps of adjusting a point Yun Di-density threshold according to actual conditions, screening point clouds with the point cloud density being greater than a first point cloud density threshold, and filtering point clouds with the point cloud density being less than the first point cloud density threshold;
the point cloud of the sparse region may contain noise or invalid data; by filtering the point cloud with the point cloud density smaller than the threshold value, the influence of noise can be reduced, and the quality and accuracy of the point cloud can be improved.
Preferably, the generating the characteristic points of the checkerboard template through the checkerboard template includes:
obtaining the real size of a checkerboard calibration plate and the size of each grid, and generating a checkerboard template;
and obtaining a checkerboard template image according to the checkerboard template, extracting characteristic points, and obtaining the characteristic points of the checkerboard template through matching and optimizing the characteristic points.
Preferably, the obtaining the point cloud of the checkerboard calibration board by using the point cloud reflection intensity information according to the candidate point cloud cluster and the checkerboard template specifically includes:
traversing all candidate point cloud clusters, sequentially calculating the optimal transformation matrix T between the checkerboard template and each point cloud cluster together with the corresponding matching error L, and taking the point cloud cluster with the smallest matching error L as the checkerboard calibration plate point cloud;
in some embodiments, for each point cloud cluster, an optimal transformation matrix T between the point cloud cluster and the checkerboard template is calculated using an ICP algorithm or other point cloud registration algorithm;
the optimal transformation matrix T is obtained by minimizing a matching error model, where the matching error L is the sum of the cell alignment term L1 (alignment of the point cloud cluster with the template cells) and the shape alignment term L2 (alignment of the point cloud cluster with the overall template shape);
wherein, according to the difference in reflectivity of the laser point cloud on the checkerboard, white cells show high reflectivity and must align with the white cells of the template, while black cells show low reflectivity and must align with the black cells of the template; the better the alignment, the smaller L1; likewise, L2 reflects the alignment of the point cloud cluster with the overall template shape, and the better the alignment, the smaller L2;
L1 and L2 are calculated from the Euclidean distance d between each point Pi, obtained by applying the optimal transformation matrix T to the original point cloud P, and its nearest point Ci among all the checkerboard template feature points.
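The embodiment below notes that the cluster-to-template transformation can be computed with ICP or another registration algorithm. As an illustrative sketch only (not the application's implementation), a minimal point-to-point ICP with a Kabsch/SVD update, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP: align source (N,3) onto target (M,3).
    Returns a 4x4 homogeneous transformation mapping source -> target."""
    T = np.eye(4)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)           # nearest-neighbour correspondences
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)  # Kabsch/SVD rigid alignment
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T

# Demo: recover a small known rigid motion
rng = np.random.default_rng(0)
src = rng.random((100, 3))
a = 0.05
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.02, -0.01, 0.03])
dst = src @ R_true.T + t_true
T_est = icp(src, dst)
aligned = np.hstack([src, np.ones((100, 1))]) @ T_est.T
err = np.abs(aligned[:, :3] - dst).max()
```

In practice the reflection-intensity terms L1 and L2 described above would replace the plain nearest-neighbour error used in this sketch.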
Preferably, the obtaining the checkerboard calibration plate point cloud by using the point cloud reflection intensity information further includes:
the template alignment term L1 and the overall-shape alignment term L2 are also calculated from a binary function δin, where δin is 1 when the point Pi lies within the template region S and 0 otherwise.
Preferably, the calculating the transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template obtains the checkerboard feature points as 3D positions of the checkerboard feature points according to the transformation matrix, specifically:
acquiring an inverse matrix of the transformation matrix;
loading 2D coordinates corresponding to the 2D positions of the characteristic points of the checkerboard template and point cloud data of the checkerboard;
performing projective transformation on the checkerboard template feature points using the inverse matrix; in the projection transformation process, each feature point coordinate is written in homogeneous form (a 1 is appended) and multiplied by the inverse of the transformation matrix to obtain the projected homogeneous coordinates, and the 3D position of the feature point is obtained by extracting the first three elements of the projected homogeneous coordinates.
Preferably, the obtaining pose information of the camera according to the 2D positions and 3D positions of the checkerboard feature points to realize external parameter calibration of the camera specifically includes:
the pose information of the camera comprises a rotation matrix and a translation vector of the camera;
through external parameter calibration, the camera's field of view can be aligned with the world coordinate system, realizing the conversion from the camera coordinate system to the world coordinate system;
acquiring pose information of the camera with a PnP algorithm according to the 2D positions and 3D positions of the checkerboard feature points to realize external parameter calibration of the camera;
the basic principle of the PnP algorithm is to estimate the pose information of the camera, i.e. the external parameters, from known 3D space points and their corresponding 2D points in the image;
the PnP algorithm at least comprises: EPnP algorithm, DLS algorithm, AP3P algorithm, and UPnP algorithm.
A second aspect of embodiments of the present application provides a camera-combined external parameter calibration system using a lidar, the system comprising:
the device comprises an acquisition module, a calculation module, a judgment module and a processing module;
the acquisition module is used for acquiring image information, point cloud data, the real size of the checkerboard calibration plate and the size of each grid;
the computing module at least comprises a first computing module, a second computing module, a third computing module and a fourth computing module; wherein,
the first calculation module is used for converting the image information into a gray level image and extracting all angular points on the checkerboard calibration plate;
the second calculation module is used for constructing a KD tree structure by using the point cloud data and calculating the point cloud density around each point;
the third calculation module is used for obtaining the point cloud of the checkerboard calibration plate by utilizing the point cloud reflection intensity information according to the candidate point cloud clusters and the checkerboard templates, and calculating a transformation matrix between the point cloud of the checkerboard calibration plate and the checkerboard templates;
the fourth calculation module is used for obtaining an inverse matrix of the transformation matrix to obtain 3D positions of characteristic points of the checkerboard;
the judging module is used for screening point clouds with the point cloud density being greater than the first density threshold value of the point clouds;
the processing module is used for realizing external parameter calibration of the camera according to pose information of the camera.
A third aspect of the embodiments of the present application provides a storage medium, a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the camera joint external parameter calibration method using a laser radar described above.
In summary, the present application provides a camera joint external parameter calibration method, system and storage medium using a laser radar: image information of a checkerboard is acquired and the 2D positions of the checkerboard feature points are calculated from it; point cloud data are acquired, candidate point cloud clusters are obtained from them, and checkerboard template feature points are generated from a checkerboard template; the checkerboard calibration plate point cloud is obtained from the candidate clusters and the template using the point cloud reflection intensity information, a transformation matrix between the calibration plate point cloud and the template is calculated, and the checkerboard feature points obtained from this matrix serve as the 3D positions of the checkerboard feature points; finally, pose information of the camera is obtained from the 2D positions and 3D positions of the checkerboard feature points to realize external parameter calibration of the camera.
Compared with the prior art, the application has the following technical effects:
By exploiting the laser radar reflection intensity and the difference in point cloud reflectivity across the checkerboard, all points are matched against the checkerboard template to obtain more accurate feature points; meanwhile, the external parameter calibration method works with lidars of various scanning modes, giving it wider applicability, improved matching precision and enhanced robustness, and making the point cloud data more intuitive and easier to understand.
Drawings
FIG. 1 is a flow chart of a method for camera-combined external parameter calibration using lidar as described in this application.
FIG. 2 is a flowchart of capturing and converting image information into grayscale images using OpenCV (image processing library) in one embodiment.
FIG. 3 is a schematic diagram of point cloud-to-template matching in an embodiment.
FIG. 4 is a flowchart of obtaining pose information of a camera by using a PnP algorithm to realize external parameter calibration of the camera in an embodiment.
FIG. 5 is a flow chart of a camera-combined external parameter calibration system using lidar as described in this application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Example 1:
the camera combined external parameter calibration method using the laser radar as shown in fig. 1 comprises the following steps:
s1: and acquiring the image information of the checkerboard, and calculating to obtain the 2D positions of the feature points of the checkerboard based on the image information.
Preferably, the image information is converted into a grayscale image, and all corner points on the checkerboard calibration plate are extracted as the 2D positions of the checkerboard feature points, where the gray value = 0.2989 × red channel + 0.5870 × green channel + 0.1140 × blue channel;
the method for extracting all the corner points on the checkerboard calibration plate at least comprises the following steps: a sub-pixel level corner detection method, a Harris corner detection algorithm and/or a Shi-Tomasi corner detection algorithm.
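The weighted sum above uses the standard BT.601 luma coefficients and can be sketched directly in NumPy:

```python
import numpy as np

def to_gray(rgb):
    """Weighted-sum grayscale conversion with the BT.601 coefficients:
    gray = 0.2989*R + 0.5870*G + 0.1140*B."""
    return rgb @ np.array([0.2989, 0.5870, 0.1140])

img = np.zeros((2, 2, 3))
img[0, 0] = [255.0, 255.0, 255.0]   # one white pixel, rest black
gray = to_gray(img)
```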
As shown in fig. 2, in an embodiment, image information is acquired by using OpenCV (image processing library) and converted into a grayscale image;
S100: converting the read color image into a grayscale image using the cv2.cvtColor() function;
S101: finding the corner points of the checkerboard calibration plate using the cv2.findChessboardCorners() function and storing the result in the corners variable;
S102: extracting sub-pixel corner coordinates using the cv2.cornerSubPix() function to improve corner accuracy;
S103: drawing the marked corner points on the image using the cv2.drawChessboardCorners() function and printing the 2D positions of the corner points.
S2: and obtaining point cloud data, obtaining candidate point cloud clusters according to the point cloud data, and generating characteristic points of the checkerboard template through the checkerboard template.
Preferably, the obtaining the candidate point cloud cluster according to the point cloud data specifically includes:
and constructing a KD tree structure by using the point cloud data, calculating the point cloud density around each point, screening, and dividing the screened point cloud into candidate point cloud clusters.
In an embodiment, the point cloud is segmented into multiple candidate point cloud clusters using region growing. The screened point cloud data are loaded with the pcl::io::loadPCDFile() function in PCL, and normals are estimated using a KD tree; the region growing parameters are then set, including a smoothness threshold, a curvature threshold, and minimum and maximum point count thresholds; a region growing object is created and the screened point cloud is set as its input; the point cloud normals are computed and set as the input normals of the region growing object; after the region growing parameters are set, the reg.extract() function is called to perform the region growing segmentation, the result is stored in clusters, and finally the number of clusters is output and the clusters are traversed.
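The PCL region-growing pipeline above is C++; as a rough Python stand-in (an assumption — greedy Euclidean clustering with a SciPy KD tree, without PCL's normal and curvature criteria):

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.1, min_size=10):
    """Greedy Euclidean clustering: a simplified stand-in for PCL's
    region-growing segmentation (no smoothness/curvature checks)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            for j in tree.query_ball_point(points[i], radius):
                if j in unvisited:
                    unvisited.remove(j)
                    queue.append(j)
                    cluster.append(j)
        if len(cluster) >= min_size:      # drop undersized clusters
            clusters.append(np.array(cluster))
    return clusters

# Demo: two well-separated blobs should form two clusters
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.01, (50, 3)),
                 rng.normal(5.0, 0.01, (50, 3))])
clusters = euclidean_clusters(pts, radius=0.2, min_size=10)
```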
Further, the first point cloud density threshold is adjusted according to actual conditions; point clouds with density greater than the first density threshold are retained, and point clouds with density less than the first density threshold are filtered out;
wherein the point cloud in sparse regions may contain noise or invalid data; by filtering out points whose density is below the threshold, the influence of noise is reduced and the quality and accuracy of the point cloud are improved.
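The KD-tree density screening described above can be sketched as follows; the radius and neighbour-count threshold are illustrative placeholders, tuned per sensor in practice:

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_by_density(points, radius=0.2, min_neighbors=5):
    """Keep points whose local density (neighbour count within `radius`)
    reaches the threshold; sparse points are treated as noise."""
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(p, radius)) - 1
                       for p in points])   # minus 1 excludes the point itself
    return points[counts >= min_neighbors]

# Demo: a dense blob plus one isolated outlier
rng = np.random.default_rng(2)
cloud = np.vstack([rng.normal(0.0, 0.05, (100, 3)),
                   [[10.0, 10.0, 10.0]]])
dense = filter_by_density(cloud, radius=0.3, min_neighbors=5)
```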
Preferably, the generating the characteristic points of the checkerboard template through the checkerboard template includes:
obtaining the real size of a checkerboard calibration plate and the size of each grid, and generating a checkerboard template;
and obtaining a checkerboard template image according to the checkerboard template, extracting characteristic points, and obtaining the characteristic points of the checkerboard template through matching and optimizing the characteristic points.
In one embodiment, the actual dimensions of the checkerboard calibration plate are measured, including the size of the entire plate and the size of each cell. A checkerboard template is generated from the measured plate size and cell size; the template can be a two-dimensional array representing the position and coordinates of each cell in the checkerboard. Feature point positions are then determined: for each cell, one or more corner points are selected as feature points. Typically the four corners of the cell are chosen, but other corners or points inside the cell may also be used; each feature point must carry a unique identifier in the template, such as its row and column index. For each feature point, its position (row and column index) in the template is converted into two-dimensional image coordinates (x, y), obtained by multiplying by the cell size and adding the offset of the cell on the board. The result is a set of checkerboard template feature points, where each feature point has two-dimensional image coordinates (x, y) and a corresponding row and column index, and may be expressed in the form (row, col, x, y).
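Template feature-point generation in the (row, col, x, y) form described above can be sketched as (cell size and offset values are illustrative):

```python
def checkerboard_template(rows, cols, cell, x0=0.0, y0=0.0):
    """Generate the inner-corner feature points of a rows x cols
    checkerboard as (row, col, x, y) tuples; `cell` is the printed cell
    size (e.g. in metres) and (x0, y0) an optional board-origin offset."""
    return [(r, c, x0 + c * cell, y0 + r * cell)
            for r in range(1, rows) for c in range(1, cols)]

# An 8x8 board has 7x7 = 49 inner corners
pts = checkerboard_template(rows=8, cols=8, cell=0.05)
```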
S3: and according to the candidate point cloud clusters and the checkerboard templates, acquiring the checkerboard calibration plate point clouds by utilizing the point cloud reflection intensity information, calculating a transformation matrix between the checkerboard calibration plate point clouds and the checkerboard templates, and acquiring the checkerboard feature points as 3D positions of the checkerboard feature points according to the transformation matrix.
The step S3 specifically comprises the following steps:
traversing all candidate point cloud clusters and sequentially calculating the optimal transformation matrix T between the checkerboard template and each point cloud cluster together with the corresponding matching error L; the point cloud cluster with the smallest matching error L is taken as the checkerboard calibration plate point cloud. The optimal transformation matrix T is obtained by minimizing a matching error model, where the matching error L is the sum of the cell alignment term L1 and the overall-shape alignment term L2:
L = L1 + L2
As shown in fig. 3, the laser point cloud has different reflectivity on the checkerboard: white cells show high reflectivity and must align with the white cells of the template, while black cells show low reflectivity and must align with the black cells of the template; the better the alignment, the smaller L1. L2 reflects the alignment of the point cloud cluster with the overall template shape; the better the alignment, the smaller L2.
L1 and L2 are calculated from the Euclidean distance d(Pi, Ci) between each point Pi, obtained by applying the transformation matrix T to the original point cloud P, and its nearest point Ci among all the checkerboard template feature points; they also depend on a binary function δin, which is 1 when the point Pi lies within the template region S and 0 otherwise.
In one embodiment, the optimal transformation matrix T_model_org is solved by constructing a least squares problem and minimizing the matching error with a Gauss-Newton iterative algorithm. The optimal point cloud cluster is thereby obtained, the transformation matrix T_model_org between the calibration plate and the template is calculated, and the template feature points Cj are mapped through T_model_org to obtain the point cloud feature points, i.e. the 3D positions of the checkerboard calibration plate feature points.
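The matching-error evaluation can be sketched as follows. The exact forms of L1 and L2 are rendered as images in the source, so this sketch encodes one plausible reading (an assumption): L1 as the mean nearest-template-point distance with points split by reflectivity into white-cell and black-cell candidates, and L2 as the fraction of points falling outside the template region S (i.e. δin = 0):

```python
import numpy as np
from scipy.spatial import cKDTree

def matching_error(points, intensity, white_pts, black_pts, in_bounds):
    """Illustrative L = L1 + L2 for an already-transformed cluster.
    L1: high-reflectivity points measured against the nearest white
        template point, low-reflectivity against the nearest black one,
        restricted to points inside the template region (delta_in = 1).
    L2: fraction of points outside the template region (shape term)."""
    white_tree, black_tree = cKDTree(white_pts), cKDTree(black_pts)
    inside = in_bounds(points)                 # boolean delta_in per point
    bright = intensity > np.median(intensity)  # crude white/black split
    d = np.where(bright, white_tree.query(points)[0],
                 black_tree.query(points)[0])
    L1 = d[inside].mean() if inside.any() else 0.0
    L2 = 1.0 - inside.mean()
    return L1 + L2

# Demo: a perfectly aligned two-point "cluster" yields zero error
white = np.array([[0.0, 0.0, 0.0]])
black = np.array([[1.0, 0.0, 0.0]])
cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
inten = np.array([10.0, 1.0])
err = matching_error(cloud, inten, white, black,
                     in_bounds=lambda p: np.ones(len(p), bool))
```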
Preferably, the calculating the transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template obtains the checkerboard feature points as 3D positions of the checkerboard feature points according to the transformation matrix, specifically:
acquiring an inverse matrix of the transformation matrix;
loading the 2D coordinates of the characteristic points of the checkerboard template and the point cloud data of the checkerboard;
performing projective transformation on the checkerboard template feature points using the inverse matrix; during the projective transformation, the coordinates of each feature point are extended to homogeneous coordinates, then multiplied by the inverse of the transformation matrix to obtain the projected homogeneous coordinates, and the three-dimensional position of the feature point is obtained by extracting the first three elements of the projected homogeneous coordinates.
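The inverse-matrix projection step above amounts to a single homogeneous-coordinate multiplication. A minimal numpy sketch (the transform values in the usage below are illustrative, not from the patent):

```python
import numpy as np

def template_points_to_3d(T, template_pts_2d):
    """Project 2D template feature points into the point-cloud frame.

    T               : (4, 4) transform from the calibration-plate point
                      cloud to the checkerboard template (as found in S3)
    template_pts_2d : (N, 2) template feature points on the z = 0 plane
    """
    T_inv = np.linalg.inv(T)                      # inverse of the transform
    n = len(template_pts_2d)
    # Lift each 2D template point to homogeneous coordinates (x, y, 0, 1)
    homo = np.hstack([template_pts_2d,
                      np.zeros((n, 1)), np.ones((n, 1))])
    projected = (T_inv @ homo.T).T                # projected homogeneous coords
    return projected[:, :3]                       # first three elements = 3D pos
```

Applying T to the returned points maps them back onto the z = 0 template plane, which is a quick sanity check for the inverse.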
S4: acquiring pose information of the camera according to the 2D positions of the checkerboard feature points and the 3D positions of the checkerboard feature points to realize external parameter calibration of the camera.
Preferably, the pose information of the camera includes a rotation matrix and a translation vector of the camera.
Wherein the Rotation Matrix is a 3x3 matrix representing the rotational transformation from the camera coordinate system to the world coordinate system;

the Translation Vector is a 3-dimensional vector representing the position of the camera coordinate system origin in the world coordinate system;

using the rotation matrix and translation vector, points can be converted from the camera coordinate system into the world coordinate system, or projected from the world coordinate system back into the camera coordinate system.
Preferably, according to the 2D positions of the checkerboard feature points and the 3D positions of the checkerboard feature points, pose information of the camera is obtained using a PnP algorithm, so that external parameter calibration of the camera is realized.

Preferably, the PnP algorithm includes at least: the EPnP algorithm, the DLS algorithm, the AP3P algorithm, and the UPnP algorithm.
As shown in fig. 4, in an embodiment, according to the 2D positions of the checkerboard feature points and the 3D positions of the checkerboard feature points, pose information of the camera is obtained by using a PnP algorithm to realize external parameter calibration of the camera.
S401: collecting 3D positions of the checkerboard feature points with known three-dimensional coordinates, shooting checkerboard images in a camera, and extracting 2D positions of the corresponding checkerboard feature points; the characteristic points of the checkerboard calibration plate in all postures are guaranteed to be completely captured;
s402: for each pair of corresponding checkerboard feature points (2D-3D pairs), constructing a matching relationship;
s403: selecting a proper PnP algorithm to estimate the camera gesture, wherein common algorithms include an EPnP algorithm, a DLS algorithm, an AP3P algorithm, a UPnP algorithm and the like;
s404: providing the 2D pixel coordinates (image coordinates) of the feature points and their corresponding 3D spatial coordinates (on the world coordinate system) to the selected PnP algorithm;
s405: solving a rotation matrix and a translation vector of the camera by minimizing the reprojection error by using the selected PnP algorithm;
s406: and obtaining camera pose information comprising a rotation matrix and a translation vector.
Embodiment two:
as shown in fig. 5, the present application further provides a camera combined external parameter calibration system using a laser radar, the system comprising:
the device comprises an acquisition module, a calculation module, a judgment module and a processing module;
the acquisition module is used for acquiring image information, point cloud data, the real size of the checkerboard calibration plate and the size of each grid;
the computing module at least comprises a first computing module, a second computing module, a third computing module and a fourth computing module: wherein,
the first calculation module is used for converting the image information into a gray level image and extracting all angular points on the checkerboard calibration plate;
the second calculation module is used for constructing a KD tree structure by using the point cloud data and calculating the point cloud density around each point;
the third calculation module is used for obtaining the point cloud of the checkerboard calibration plate by utilizing the point cloud reflection intensity information according to the candidate point cloud clusters and the checkerboard templates, and calculating a transformation matrix between the point cloud of the checkerboard calibration plate and the checkerboard templates;
the fourth calculation module is used for obtaining an inverse matrix of the transformation matrix to obtain 3D positions of characteristic points of the checkerboard;
the judging module is used for screening points whose point cloud density is greater than the first point cloud density threshold;
the processing module is used for realizing external parameter calibration of the camera according to pose information of the camera.
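The density-based screening handled by the second calculation module and the judgment module might look like the following (using scipy's `cKDTree`; the radius and neighbor threshold are placeholder values, since the patent leaves the first density threshold to be tuned per scenario):

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_by_density(points, radius=0.05, min_neighbors=5):
    """Keep points whose neighborhood density exceeds a first threshold.

    points        : (N, 3) raw lidar point cloud
    radius        : neighborhood radius defining the density measure (assumed)
    min_neighbors : the 'first density threshold' (assumed value)
    """
    tree = cKDTree(points)                         # KD-tree over the cloud
    # Number of neighbors within `radius` of each point (includes the point)
    counts = np.array([len(idx) for idx in
                       tree.query_ball_point(points, r=radius)])
    return points[counts > min_neighbors]
```

Points surviving this filter would then be segmented into the candidate point cloud clusters consumed by the third calculation module.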
Embodiment III:
the present application also provides a storage medium, which is one of computer-readable storage media, on which a computer program is stored, which when executed by a processor, implements a camera joint external parameter calibration method using a laser radar as described above.
In summary, the present application provides a camera joint external parameter calibration method, system and storage medium using a laser radar: the image information of the checkerboard is acquired and the 2D positions of the checkerboard feature points are calculated from it; point cloud data is acquired, candidate point cloud clusters are obtained from it, and checkerboard template feature points are generated from the checkerboard template; the checkerboard calibration plate point cloud is obtained from the candidate point cloud clusters and the checkerboard template using the point cloud reflection intensity information, the transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template is calculated, and the checkerboard feature points are obtained as 3D positions of the checkerboard feature points according to the transformation matrix; finally, the pose information of the camera is obtained from the 2D positions and 3D positions of the checkerboard feature points to realize external parameter calibration of the camera.
By exploiting the differences in laser radar reflection intensity between the black and white squares of the checkerboard, all point clouds are matched against the checkerboard template to obtain more accurate feature points. The external parameter calibration method is applicable to laser radars with various scanning modes, offers broader applicability, improves matching precision, and enhances robustness, making the point cloud data more intuitive and easier to understand.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules according to embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application may also be embodied as device programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.

Claims (10)

1. A camera joint external parameter calibration method using a laser radar, characterized by comprising: S1: acquiring image information of a checkerboard, and calculating 2D positions of checkerboard feature points based on the image information; S2: acquiring point cloud data, obtaining candidate point cloud clusters from the point cloud data, and generating checkerboard template feature points from a checkerboard template; S3: according to the candidate point cloud clusters and the checkerboard template, obtaining a checkerboard calibration plate point cloud using point cloud reflection intensity information, calculating a transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template, and obtaining the checkerboard feature points as 3D positions of the checkerboard feature points according to the transformation matrix; S4: acquiring pose information of the camera according to the 2D positions and the 3D positions of the checkerboard feature points to realize external parameter calibration of the camera.
2. The camera joint external parameter calibration method using a laser radar according to claim 1, wherein calculating the 2D positions of the checkerboard feature points based on the image information comprises: converting the image information into a grayscale image, and extracting all corner points on the checkerboard calibration plate as the 2D positions of the checkerboard feature points.
3. The camera joint external parameter calibration method using a laser radar according to claim 2, wherein obtaining the candidate point cloud clusters from the point cloud data comprises: constructing a KD-tree structure from the point cloud data, calculating the point cloud density around each point and filtering, and segmenting the filtered point cloud into candidate point cloud clusters; wherein a first point cloud density threshold is adjusted according to the actual situation, points whose density is greater than the first point cloud density threshold are retained, and points whose density is less than the first point cloud density threshold are filtered out.
4. The camera joint external parameter calibration method using a laser radar according to claim 3, wherein generating the checkerboard template feature points from the checkerboard template comprises: acquiring the real size of the checkerboard calibration plate and the size of each square, and generating the checkerboard template; obtaining a checkerboard template image from the checkerboard template, extracting feature points, and obtaining the checkerboard template feature points through matching and optimization of the feature points.
5. The camera joint external parameter calibration method using a laser radar according to claim 4, wherein obtaining the checkerboard calibration plate point cloud using the point cloud reflection intensity information specifically comprises: traversing all candidate point cloud clusters, and sequentially calculating the optimal transformation matrix T between the checkerboard template and each point cloud cluster together with the corresponding matching error L, wherein the cluster with the smallest matching error L is the best-matching cluster and is taken as the checkerboard calibration plate point cloud; the optimal transformation matrix is obtained by minimizing a matching-error model, and the matching error L is the sum of the alignment degree L1 between the point cloud cluster and the template and the alignment degree L2 between the point cloud cluster and the overall template shape; wherein, because the laser point cloud has different reflectivities on the checkerboard, white squares have high reflectivity and must be aligned with the white squares of the checkerboard template, and black squares have low reflectivity and must be aligned with the black squares of the checkerboard template; the higher the alignment, the smaller L1; L2 reflects the alignment between the point cloud cluster and the overall template shape, and the higher the alignment, the smaller L2; the alignment degree L1 with the template and the alignment degree L2 between the point cloud cluster and the overall template shape are calculated from the Euclidean distance d between each point Pi, obtained by applying the optimal transformation matrix T to the original point cloud P, and the nearest point Ci among all checkerboard template feature points.
6. The camera joint external parameter calibration method using a laser radar according to claim 5, wherein obtaining the checkerboard calibration plate point cloud using the point cloud reflection intensity information further comprises: the alignment degrees L1 and L2 are also calculated from a binary function δin, wherein δin is 1 when the point Pi lies within the template region S and 0 otherwise.
7. The camera joint external parameter calibration method using a laser radar according to claim 6, wherein calculating the transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template and obtaining the checkerboard feature points as the 3D positions of the checkerboard feature points according to the transformation matrix specifically comprises: obtaining the inverse of the transformation matrix; loading the 2D coordinates corresponding to the 2D positions of the checkerboard template feature points and the checkerboard point cloud data; performing projective transformation on the checkerboard template feature points using the inverse matrix; during the projective transformation, the coordinates of each feature point are extended to homogeneous coordinates, then multiplied by the inverse of the transformation matrix to obtain the projected homogeneous coordinates, and the three-dimensional position of the feature point is obtained by extracting the first three elements of the projected homogeneous coordinates.
8. The camera joint external parameter calibration method using a laser radar according to claim 7, wherein step S4 specifically comprises: the pose information of the camera includes a rotation matrix and a translation vector of the camera; according to the 2D positions and the 3D positions of the checkerboard feature points, the pose information of the camera is obtained by a PnP algorithm to realize external parameter calibration of the camera.
9. A system for the camera joint external parameter calibration method using a laser radar according to any one of claims 1-8, characterized in that the system comprises: an acquisition module, a calculation module, a judgment module, and a processing module; the acquisition module is used to acquire image information, point cloud data, the real size of the checkerboard calibration plate, and the size of each square; the calculation module comprises at least a first calculation module, a second calculation module, a third calculation module, and a fourth calculation module; the first calculation module is used to convert the image information into a grayscale image and extract all corner points on the checkerboard calibration plate; the second calculation module is used to construct a KD-tree structure from the point cloud data and calculate the point cloud density around each point; the third calculation module is used to obtain the checkerboard calibration plate point cloud from the candidate point cloud clusters and the checkerboard template using the point cloud reflection intensity information, and to calculate the transformation matrix between the checkerboard calibration plate point cloud and the checkerboard template; the fourth calculation module is used to obtain the inverse of the transformation matrix to obtain the 3D positions of the checkerboard feature points; the judgment module is used to screen points whose point cloud density is greater than the first point cloud density threshold; the processing module is used to realize external parameter calibration of the camera according to the pose information of the camera.
10. A storage medium, being a computer-readable storage medium, characterized in that a computer program is stored thereon, and when the computer program is executed by a processor, the camera joint external parameter calibration method using a laser radar according to any one of claims 1-8 is implemented.
CN202311244551.8A 2023-09-25 2023-09-25 Camera combined external parameter calibration method and system using laser radar and storage medium Pending CN117252931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311244551.8A CN117252931A (en) 2023-09-25 2023-09-25 Camera combined external parameter calibration method and system using laser radar and storage medium


Publications (1)

Publication Number Publication Date
CN117252931A true CN117252931A (en) 2023-12-19

Family

ID=89130790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311244551.8A Pending CN117252931A (en) 2023-09-25 2023-09-25 Camera combined external parameter calibration method and system using laser radar and storage medium

Country Status (1)

Country Link
CN (1) CN117252931A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118011365A (en) * 2023-12-25 2024-05-10 中国矿业大学 Camera and laser radar calibration device and method for tunnel boring machine


Similar Documents

Publication Publication Date Title
CN113012234B (en) High-precision camera calibration method based on plane transformation
CN107155341B (en) Three-dimensional scanning system and frame
CN113379815A (en) Three-dimensional reconstruction method and device based on RGB camera and laser sensor and server
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
CN107680035B (en) Parameter calibration method and device, server and readable storage medium
WO2025039766A1 (en) Real-time online cargo volume recognition method, system, and device, and medium
CN114494039A (en) A method for geometric correction of underwater hyperspectral push-broom images
CN117541537A (en) Space-time difference detection method and system based on all-scenic-spot cloud fusion technology
CN117252931A (en) Camera combined external parameter calibration method and system using laser radar and storage medium
CN116205993A (en) A high-precision calibration method for bi-telecentric lens for 3D AOI
Cui et al. ACLC: Automatic calibration for nonrepetitive scanning LiDAR-camera system based on point cloud noise optimization
CN110969650B (en) Intensity image and texture sequence registration method based on central projection
CN116894907B (en) RGBD camera texture mapping optimization method and system
CN110458951B (en) Modeling data acquisition method and related device for power grid pole tower
CN112419427A (en) Methods for improving the accuracy of time-of-flight cameras
CN209279912U (en) A kind of object dimensional information collecting device
CN114742705B (en) An image stitching method based on halcon
CN117333367A (en) Image stitching method, system, medium and device based on image local features
CN116402904A (en) Combined calibration method based on laser radar inter-camera and monocular camera
CN117671159A (en) Three-dimensional model generation method and device, equipment and storage medium
CN117830385A (en) Material pile volume measurement method, device, electronic equipment and storage medium
CN116091610B (en) Combined calibration method of radar and camera based on three-dimensional tower type checkerboard
CN114581533A (en) Combined calibration method and device
CN113048899A (en) Thickness measuring method and system based on line structured light
CN118112543B (en) A two-dimensional laser radar and camera calibration method based on dual-line structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Floor 1-5, Building 2, No. 171 Hele Er Street, Chengdu High tech Zone, China (Sichuan) Pilot Free Trade Zone, Chengdu City, Sichuan Province, 610000

Applicant after: Chengdu Desai Xiweika Frog Technology Co.,Ltd.

Address before: Floor 1-5, Building 2, No. 171 Hele Er Street, Chengdu High tech Zone, China (Sichuan) Pilot Free Trade Zone, Chengdu City, Sichuan Province, 610000

Applicant before: Chengdu kafrog Technology Co.,Ltd.

Country or region before: China