
CN118429437A - Laser radar and depth camera calibration method, system, equipment and storage medium - Google Patents

Laser radar and depth camera calibration method, system, equipment and storage medium

Info

Publication number
CN118429437A
Authority
CN
China
Prior art keywords
plane
image
depth camera
laser radar
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410590766.3A
Other languages
Chinese (zh)
Inventor
汤伟杰
黄龙祥
杨煦
汪博
朱力
吕方璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Guangjian Aoshen Technology Co ltd
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Chongqing Guangjian Aoshen Technology Co ltd
Shenzhen Guangjian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Guangjian Aoshen Technology Co ltd, Shenzhen Guangjian Technology Co Ltd filed Critical Chongqing Guangjian Aoshen Technology Co ltd
Priority to CN202410590766.3A
Publication of CN118429437A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A laser radar and depth camera calibration method, system, equipment and storage medium, the method including: step S1: obtaining a first image using a lidar and obtaining a second image using a depth camera; step S2: extracting a first plane under a first coordinate system generated from the point cloud of the first image, and extracting a second plane under a second coordinate system generated from the point cloud of the second image; step S3: calculating a constrained rotation vector R and translation vector t between the first coordinate system and the second coordinate system according to the rotation matrix R1 between the first plane and the second plane; step S4: matching the point clouds outside the first plane and the second plane to obtain a matching result; step S5: combining the rotation vector R, the translation vector t and the matching result, and performing nonlinear iterative optimization on the matching relation to obtain a final external parameter T_l^n. The invention effectively solves the problem of registration between the laser radar and the depth camera.

Description

Laser radar and depth camera calibration method, system, equipment and storage medium
Technical Field
The invention relates to the technical field of laser radar and depth camera calibration, and in particular to a laser radar and depth camera calibration method, system, equipment and storage medium.
Background
The joint calibration of the lidar and the depth camera is a key step for ensuring that the two sensors can accurately co-operate to provide accurate three-dimensional spatial information. Through joint calibration, the laser radar and the depth camera can work cooperatively to provide more accurate and rich three-dimensional space information, and support is provided for application in the fields of automatic driving, robot navigation and the like.
The laser radar and depth camera joint calibration is to find the spatial conversion relation between the two so as to convert the data from one coordinate system to the other coordinate system. This process typically involves the following steps:
Radar camera external parameter calibration: the purpose of the extrinsic calibration is to determine a rotation matrix R and a translation matrix T between the lidar and the camera, so that the coordinates of the object points detected by the lidar can be converted into the camera coordinate system, or vice versa.
Feature extraction and matching: in the non-deep learning method, corresponding features (such as semantic information and edges) are required to be extracted from a laser radar and a camera respectively, then the features of the laser radar are projected under an image plane through initial external parameters, and an objective function is designed to evaluate the matching degree between the projected features and actual image features.
Using specialized software or tools: some software or tools are dedicated to the combined calibration of lidar and camera, such as Autoware, which provides a set of procedures to assist the user in performing the calibration.
On-line joint calibration: the method based on deep learning can realize the online joint calibration of the laser radar and the camera, and the method generally utilizes a neural network to learn the mapping relation between the laser radar and the camera, so that data fusion is carried out in real time.
As can be seen from the above, in the calibration of a lidar and a depth camera, point cloud data is generally used for registration to obtain the external parameters between the lidar and the depth camera, so the registration method is key to obtaining accurate and stable results. Because the lidar and the limited-field-of-view depth camera observe point clouds with different fields of view and different point cloud densities, accurate registration of the two point clouds is challenging.
In the prior art, specific articles such as a calibration plate are often needed for calibration, and calibration can be performed at the factory using information known about the calibration article (such as its size, relative position relation, center point position and the like). However, in actual use, the relative positions of the two different cameras may change, which often results in poor data matching between the different depth sensors after a period of use.
The foregoing background is provided only to aid understanding of the inventive concepts and technical aspects of the present application; it does not necessarily constitute prior art to the present application, and, in the absence of clear evidence that the above content was already disclosed at the filing date of the present application, it is not intended to be used to evaluate the novelty and inventiveness of the present application.
Disclosure of Invention
Therefore, the invention can effectively solve the registration problem between the laser radar and the depth camera by fully utilizing the geometric features and the matching algorithm of the point cloud data, realizes accurate external parameter calibration under the condition of different fields of view and point cloud densities, and provides accurate external parameter results.
In a first aspect, the present invention provides a method for calibrating a laser radar and a depth camera, which is characterized by comprising:
Step S1: obtaining a first image using a lidar and obtaining a second image using a depth camera; wherein the first image and the second image both comprise the ground or the right facing wall surface;
Step S2: extracting a first plane under a first coordinate system generated by the point cloud of the first image, and extracting a second plane under a second coordinate system generated by the point cloud of the second image;
Step S3: calculating a rotation vector R and a translation vector t which are constrained between the first coordinate system and the second coordinate system according to the rotation vector R1 of the first plane and the second plane;
step S4: matching the point clouds outside the first plane and the second plane to obtain a matching result;
step S5: combining the rotation vector R, the translation vector t and the matching result, carrying out nonlinear iterative optimization on the matching relation to obtain a final external parameter
Optionally, the method for calibrating the laser radar and the depth camera is characterized in that step S1 includes:
Step S11: fixing the relative positions of the laser radar and the depth camera, synchronously shooting a first photo, and identifying the ground in the first photo; if the ground is identified, marking a first photo shot by the laser radar as a first image, and marking the first photo shot by the depth camera as a second image; otherwise, executing the next step;
Step S12: rotating the laser radar and the depth camera for one circle, continuously taking a second photo in the rotating process, extracting a plane from the second photo, marking the second photo obtained at the moment of obtaining the right wall surface as a first image and a second image respectively if the second photo is obtained, otherwise, executing the next step;
Step S13: and identifying the ground in the second photos, and selecting the second photo with the largest ground area, wherein the second photo is marked as a first image and a second image respectively.
Optionally, the method for calibrating the laser radar and the depth camera is characterized in that step S3 includes:
Step S31: calculating a rotation axis n=n 1×n2 to obtain a rotation matrix R1 of the rotation angle θ= arccos (n ln2), the first plane and the second plane; wherein n 1 is the normal vector of the first plane and n 2 is the normal vector of the second plane;
Step S32: two orthogonal vectors of n 2 are obtained, n 2 and the two orthogonal vectors are sequentially marked as mu 1, mu 2 and mu 3, and a translation vector t is marked as: t=k1μ1+k2μ2+k3μ3, where k1=d2-d 1, d1 is the plane parameter of the second plane and d2 is the plane parameter of the first plane.
Optionally, in the method for calibrating the laser radar and the depth camera, in step S4, the point clouds outside the first plane and the second plane are matched according to the distance, the normal vector and the curvature, so as to obtain a matching result.
Optionally, the method for calibrating the laser radar and the depth camera is characterized in that step S4 includes:
Step S41: comparing the point clouds of the first image and the second image, comparing the density of the point clouds, marking the point clouds with lower density as first characteristic points, and marking the point clouds with higher density as target points;
Step S42: screening the target points according to the first characteristic points and the height threshold value to obtain matchable points;
Step S43: constructing a first characteristic histogram for the first characteristic points and constructing a second characteristic histogram for the matchable points; the characteristic histogram comprises height and direction parameters of adjacent points;
step S44: and matching according to the first characteristic histogram and the second characteristic histogram to obtain a pairing point set of the first characteristic point and the matchable point.
Optionally, the laser radar and depth camera calibration method is characterized by further comprising:
step S45: and removing mismatching of the matching point pairs through a random sample consistency algorithm.
Optionally, in the method for calibrating the laser radar and the depth camera, in step S5, nonlinear optimization is carried out according to a least-squares formula to obtain the final external parameter T_l^n; wherein Π represents the second-plane point set under the second coordinate system, Π' represents the first-plane point set under the first coordinate system, and p and p' represent points on the second plane and the first plane respectively; I represents the point cloud set other than the second plane under the second coordinate system; I' represents the point cloud set other than the first plane under the first coordinate system; n·(Rp + t) + d is the point-to-plane distance formula; n1, d1 are the parameters of the second plane and n2, d2 are the parameters of the first plane.
In a second aspect, the present invention provides a laser radar and depth camera calibration system, configured to implement a laser radar and depth camera calibration method according to any one of the preceding claims, where the system is characterized by comprising:
The shooting module is used for obtaining a first image by using a laser radar and obtaining a second image by using a depth camera; wherein the first image and the second image both comprise the ground or the right facing wall surface;
the extraction module is used for extracting a first plane under a first coordinate system generated by the point cloud of the first image and extracting a second plane under a second coordinate system generated by the point cloud of the second image;
the first calculation module is used for calculating a constrained rotation vector R and translation vector t between the first coordinate system and the second coordinate system according to the rotation matrix R1 between the first plane and the second plane;
the matching module is used for matching the point clouds outside the first plane and the second plane to obtain a matching result;
The second calculation module is used for combining the rotation vector R, the translation vector t and the matching result to perform nonlinear iterative optimization on the matching relation, so as to obtain a final external parameter T_l^n.
In a third aspect, the present invention provides a laser radar and depth camera calibration apparatus, comprising:
A processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the lidar and depth camera calibration method of any of the preceding claims via execution of the executable instructions.
In a fourth aspect, the present invention provides a computer readable storage medium storing a program, where the program when executed implements the steps of the laser radar and depth camera calibration method according to any one of the preceding claims.
Compared with the prior art, the invention has the following beneficial effects:
According to the invention, by fully utilizing the geometric features of the point cloud data and the matching algorithm, the registration problem between the laser radar and the depth camera under different visual angles can be effectively solved, an accurate external reference result is provided, and the accurate external reference calibration can be realized under the conditions of different visual fields and point cloud densities.
The invention provides an effective solution for joint calibration between the laser radar and the depth camera that does not depend on articles such as a calibration plate, which makes the calibration process more efficient and easier to implement and reduces its complexity while ensuring accuracy. The method can be applied to various fields such as automatic driving, robot navigation and three-dimensional reconstruction, can realize accurate sensor data fusion and scene reconstruction, and provides a more reliable basis for related applications.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art. Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
FIG. 1 is a flow chart showing steps of a method for calibrating a laser radar and a depth camera according to an embodiment of the present invention;
FIG. 2 is a schematic view of a laser radar and a depth camera according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a first coordinate system and a second coordinate system according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating steps for obtaining a first image and a second image according to an embodiment of the present invention;
FIG. 5 is a flowchart showing steps for calculating a rotation vector R and a translation vector t according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating steps for calculating a matching result according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating another step of calculating a matching result according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a laser radar and depth camera calibration system according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a laser radar and depth camera calibration apparatus according to an embodiment of the present invention; and
Fig. 10 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention provides a laser radar and depth camera calibration method for a depth module, aiming to solve the problems in the prior art.
Aiming at the laser radar and the depth camera, the invention can obtain accurate calibration results through in-situ rotation of the laser radar and the depth camera close to the ground or around a wall surface. Firstly, the geometric plane features between the two observed 3D point clouds are matched to establish an initial constraint; the initial constraint is then used to estimate the external parameters on the remaining point clouds; finally, the estimated external parameters with all degrees of freedom are taken as a relatively accurate initial value, and the final calibration result T_l^n is solved by establishing a least-squares problem that simultaneously considers the point cloud plane features and the remaining point cloud features.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
According to the invention, the geometric features of the point cloud data and the matching algorithm are fully utilized, so that the registration problem between the laser radar and the depth camera can be effectively solved, accurate external parameter calibration is realized under the condition of different fields of view and point cloud densities, and an accurate external parameter result is provided.
Fig. 1 is a flowchart illustrating steps of a method for calibrating a laser radar and a depth camera according to an embodiment of the present invention. As shown in fig. 1, the steps of a laser radar and depth camera calibration method in an embodiment of the invention include:
step S1: a first image is obtained using a lidar and a second image is obtained using a depth camera.
In this step, the first image is obtained by a lidar (LiDAR), typically as three-dimensional data in the form of a point cloud, in which the distance between an object and the radar can be measured very accurately. The depth camera obtains the second image, which also provides three-dimensional data, but typically measures depth based on optical principles (such as structured light or time of flight) and converts the data into a point cloud for storage. Both the first image and the second image contain the ground or the facing wall surface, to facilitate the plane detection and feature matching in the subsequent processing. The relative positions of the lidar and the depth camera are fixed, and they face the same direction. The laser radar and the depth camera can be installed on various intelligent devices such as robots, unmanned aerial vehicles and intelligent automobiles for acquiring information. The laser radar and the depth camera in this step may have been tested or actually operated for a period of time; that is, this embodiment can be used for joint calibration when the laser radar and the depth camera leave the factory, or for recalibration while the intelligent equipment is in operation. It should be noted that, although the present invention is described with reference to matching between a lidar and a depth camera, those skilled in the art will understand that the method of the present invention is also applicable between lidars with different viewing angles, between depth cameras with different viewing angles, between lidars with different point cloud densities, and between depth cameras with different point cloud densities. These different hardware devices also fall within the scope of the present invention when the present method is applied to them.
As shown in fig. 2, the view angles of lidar and depth cameras are typically not the same. The view angle of the lidar may be greater than or less than the view angle of the depth camera. The point cloud density generated by the laser radar is also different from that generated by the depth camera. The point cloud generated by the laser radar and the point cloud generated by the depth camera cannot realize one-to-one correspondence.
Step S2: and extracting a first plane under a first coordinate system generated by the point cloud of the first image, and extracting a second plane under a second coordinate system generated by the point cloud of the second image.
In this step, a plane representing the ground or wall is extracted from the point cloud data of the lidar using RANSAC (Random Sample Consensus) or another plane fitting algorithm. This plane is defined in the first coordinate system, typically the coordinate system of the lidar. Similarly, in the point cloud generated by the depth camera (converted from the depth map), the plane of the same ground or wall is extracted using the corresponding algorithm. This plane is defined under the second coordinate system (the coordinate system of the depth camera). In fig. 3, a is the first coordinate system, in which the forward direction is the positive x-axis direction, the upward direction is the positive z-axis direction, and the leftward direction is the positive y-axis direction; b is the second coordinate system, in which the forward direction is the positive z-axis direction, the downward direction is the positive y-axis direction, and the rightward direction is the positive x-axis direction. When the wall surface is extracted, the plane whose normal vector n has the smallest included angle with the z-axis and which is closest to the center of the depth camera is selected as the matching plane in the second coordinate system, and the plane whose normal vector has the smallest included angle with the x-axis and which is closest to the center of the laser radar is selected as the matching plane in the first coordinate system; that is, a frame corresponding to the directly facing wall surface and the planes to be matched are found respectively.
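A minimal sketch of this plane-extraction step, assuming Open3D is available and each point cloud is given as an N×3 NumPy array (the helper name and thresholds are illustrative, not part of the patent):

```python
import numpy as np
import open3d as o3d

def extract_dominant_plane(points_xyz, dist_thresh=0.02):
    """Fit the dominant plane (ground or facing wall) with RANSAC.

    Returns a unit normal n and offset d such that n.x + d = 0 for points x
    on the plane, plus the indices of the inlier points.
    """
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points_xyz, dtype=np.float64))
    plane_model, inliers = pcd.segment_plane(
        distance_threshold=dist_thresh, ransac_n=3, num_iterations=1000)
    a, b, c, d = plane_model
    n = np.array([a, b, c])
    scale = np.linalg.norm(n)
    return n / scale, d / scale, inliers

# plane in the first coordinate system:  extract_dominant_plane(lidar_points)
# plane in the second coordinate system: extract_dominant_plane(depth_cam_points)
```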
Step S3: and calculating a rotation vector R and a translation vector t which are constrained between the first coordinate system and the second coordinate system according to the rotation vector R1 of the first plane and the second plane.
In this step, the rotation from the first plane to the second plane can be calculated by comparing the plane parameters (e.g., normal vector and intercept) in the two coordinate systems. This rotation is represented by the rotation matrix R1. Since the first plane and the second plane correspond to the same physical plane, the transformation relationship (rotation and translation) between them should be consistent. Thus, this constraint can be used to solve for the complete transformation relationship between the two coordinate systems, namely the rotation vector R and translation vector t.
Step S4: and matching the point clouds outside the first plane and the second plane to obtain a matching result.
In this step, the point clouds outside the first plane and the second plane are further matched using the R and t obtained in the previous step, which carry 3 constraints (2 on R, 1 on t), as initial values. After the planes are extracted and the preliminary transformation relationship is calculated, the other point clouds under the two coordinate systems need to be matched. This is achieved by comparing the similarity or distance between the point clouds. The result of the matching is a set of point pairs that correspond to the same physical points in both coordinate systems. The matching error of traditional ICP (iterative closest point) is relatively large, so NICP can be adopted for the matching calculation. On the basis of ICP, NICP takes curvature and normal vectors into account in its error judgment and error term, fully considering the geometric characteristics of the out-of-plane point clouds, so that the registration result is more accurate. In this step, besides NICP, other matching methods can also be adopted.
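Open3D does not ship NICP itself; as a rough stand-in that at least uses the normals, a point-to-plane ICP seeded with the R and t from step S3 might look like the following sketch (all names and thresholds are illustrative):

```python
import numpy as np
import open3d as o3d

def match_remaining_points(src_xyz, dst_xyz, R_init, t_init, max_dist=0.1):
    """Register the off-plane point clouds, seeded with the plane-based R, t."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_xyz))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(dst_xyz))
    for pc in (src, dst):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))

    T_init = np.eye(4)
    T_init[:3, :3] = R_init
    T_init[:3, 3] = t_init

    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    # result.correspondence_set holds the matched index pairs,
    # result.transformation the refined 4x4 transform.
    return result
```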
Step S5: combining the rotation vector R, the translation vector t and the matching result, carrying out nonlinear iterative optimization on the matching relation to obtain a final external parameter
In this step, global optimization is performed. After nonlinear iterative optimization, more accurate rotation vector R and translation vector t will be obtained, which will serve as final external parameters for aligning the data in the two coordinate systems.
The relatively accurate external parameter obtained by the above steps is taken as the initial value of the refine stage, and a least-squares problem that simultaneously considers the planar geometric features of the point cloud and the point cloud features outside the plane is established:
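A plausible form of this least-squares problem, reconstructed from the symbol definitions that follow (the pairing of plane parameters with the transformed points and the equal weighting of the three terms are assumptions rather than the patent's exact formula), is:

```latex
\min_{R,\,t}\;
\sum_{p \in \Pi}\Big(n_2^{\top}(Rp+t)+d_2\Big)^{2}
+\sum_{p' \in \Pi'}\Big(n_1^{\top}\big(R^{\top}(p'-t)\big)+d_1\Big)^{2}
+\sum_{(p,\,p')\in(I,\,I')}\big\|Rp+t-p'\big\|^{2}
```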
Π represents the second-plane point set under the second coordinate system, Π' represents the first-plane point set under the first coordinate system, and p and p' represent points on the second plane and the first plane respectively; I represents the point cloud set other than the second plane under the second coordinate system; I' represents the point cloud set other than the first plane under the first coordinate system. n·(Rp + t) + d is the point-to-plane distance formula. n1, d1 are the parameters of the second plane and n2, d2 are the parameters of the first plane.
Solving this nonlinear optimization problem yields an optimized external parameter T_l^n that comprehensively considers the plane features and the remaining point cloud features.
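A sketch of how such a joint refinement could be set up with SciPy, parameterising the rotation as a rotation vector; only one of the two plane terms from the assumed objective above is shown for brevity, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(x0, plane_pts, n_plane, d_plane, matched_src, matched_dst):
    """x = [rotvec(3), t(3)]; plane_pts are second-plane points, (n_plane,
    d_plane) the parameters of the plane they should land on after the
    transform, matched_* the paired off-plane points from step S4."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        plane_res = (plane_pts @ R.T + t) @ n_plane + d_plane        # point-to-plane
        point_res = (matched_src @ R.T + t - matched_dst).ravel()    # point-to-point
        return np.concatenate([plane_res, point_res])

    sol = least_squares(residuals, x0, method="lm")
    R_opt = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return R_opt, sol.x[3:]

# x0 = np.concatenate([Rotation.from_matrix(R_init).as_rotvec(), t_init])
```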
Fig. 4 is a flowchart illustrating steps for obtaining a first image and a second image according to an embodiment of the present invention. As shown in fig. 4, the step of obtaining the first image and the second image in the embodiment of the present invention includes:
Step S11: fixing the relative positions of the laser radar and the depth camera, synchronously shooting a first photo, and identifying the ground in the first photo; if the ground is identified, marking a first photo shot by the laser radar as a first image, and marking the first photo shot by the depth camera as a second image; otherwise, executing the next step.
In this step, the relative position between the lidar and the depth camera is first ensured to be fixed, and both can take pictures simultaneously. This typically involves calibrating the time stamps of the two devices, ensuring that they capture images at the same time. In the first photo captured, the ground is identified using image processing algorithms (e.g., color segmentation, edge detection, etc.). This is typically achieved based on the color, texture or other characteristics of the ground. If the ground is successfully identified, the photo shot by the laser radar is marked as a first image, and the photo shot by the depth camera is marked as a second image. These two images will be used for subsequent processing and analysis. If the ground is not recognized, the next operation is performed.
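The embodiment does not fix a particular ground-recognition algorithm. As one hedged, point-cloud-side possibility (assuming the sensor's approximate vertical axis is known), the plane fitted in step S2 can simply be tested for being the ground:

```python
import numpy as np

def looks_like_ground(plane_normal, up_axis, max_tilt_deg=15.0):
    """Heuristic ground check: the fitted plane normal should be nearly
    parallel to the (approximately known) vertical axis of the sensor."""
    n = plane_normal / np.linalg.norm(plane_normal)
    up = up_axis / np.linalg.norm(up_axis)
    tilt = np.degrees(np.arccos(np.clip(abs(n @ up), -1.0, 1.0)))
    return tilt < max_tilt_deg
```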
Step S12: and rotating the laser radar and the depth camera for one circle, continuously taking a second photo in the rotating process, extracting a plane from the second photo, respectively marking the second photo obtained at the moment of obtaining the right wall surface as a first image and a second image if the second photo is obtained, and otherwise, executing the next step.
In this step, the lidar and the depth camera are rotated one revolution about a central axis and pictures are continuously taken during the rotation. This process is typically accomplished by a mechanical device or robotic arm. In the photographs captured during rotation, the wall surface is identified using image processing and plane extraction algorithms. This may be achieved by detecting lines, corner points or other features in the photograph. During rotation, when the device is facing a wall, the obtained photograph will contain clear wall features. These photographs are selected and the photographs taken by the lidar are marked as a first image and the photographs taken by the depth camera are marked as a second image. If a photograph of the facing wall surface is not obtained during the rotation, the next step is performed.
Step S13: and identifying the ground in the second photos, and selecting the second photo with the largest ground area, wherein the second photo is marked as a first image and a second image respectively.
In this step, the ground is continuously identified in all the photographs captured during the rotation, and the area of the ground in each photograph is calculated. The picture with the largest ground area is selected, which generally means that the view of the ground in the picture is wider and contains more ground information. And marking the photo shot by the laser radar as a first image, and marking the photo shot by the depth camera as a second image. These two images will be used for subsequent processing and analysis.
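A minimal sketch of this frame selection, assuming each shot from the rotation already carries a ground mask produced by the previous identification step (names are illustrative):

```python
import numpy as np

def pick_largest_ground_frame(frames):
    """frames: list of (lidar_points, cam_points, ground_mask) per shot.

    Returns the index of the shot whose ground mask covers the most points,
    i.e. the frame with the largest visible ground area.
    """
    areas = [np.count_nonzero(ground_mask) for _, _, ground_mask in frames]
    return int(np.argmax(areas))
```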
The present embodiment can ensure that an image including the ground or the wall surface can be acquired for calculating the transformation relationship between the two coordinate systems in the subsequent steps, and the ground and the vertical wall surface are selected to be perpendicular to the axes of the coordinate systems, so that more accurate calculation can be performed.
Fig. 5 is a flowchart illustrating a step of calculating the rotation vector R and the translation vector t according to an embodiment of the present invention. As shown in fig. 5, the step of calculating the rotation vector R and the translation vector t according to the embodiment of the present invention includes:
Step S31: the rotation axis n=n 1×n2 is calculated, resulting in a rotation matrix R1 of rotation angle θ= arccos (n ln2), first plane and second plane.
In this step, n1 is the normal vector of the first plane, and n2 is the normal vector of the second plane. Since the correspondence relationship between a pair of planes is known, the rotation between the two planes can be found and expressed as a rotation vector whose rotation axis is n = n1 × n2 and whose rotation angle is θ = arccos(n1 · n2). Obviously, since the above only determines the rotation R1 between the two planes, the rotation about the plane itself cannot yet be determined; that is, to obtain the full rotation R_l^n between the two modules, a further rotation angle θ2 about the plane normal n2 after the plane rotation R1 is also required. In other words, one plane matching relationship provides, for the rotation between the two modules, an R1 and an R2 with a fixed rotation axis, where R = R2·R1.
Step S32: two orthogonal vectors of n 2 are obtained, n 2 and the two orthogonal vectors are sequentially marked as mu 1, mu 2 and mu 3, and a translation vector t is marked as: t=k1μ1+k2μ2+k3μ3, where k1=d2-d 1, d1 is the plane parameter of the second plane and d2 is the plane parameter of the first plane.
In this step, the variables of the translation vector t that remain to be constrained are k2 and k3.
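A small NumPy/SciPy sketch of steps S31 and S32, computing R1 from the plane normals and building the basis μ1, μ2, μ3 together with the fixed coefficient k1 of t (the handling of the degenerate parallel-normal case and the exact ordering conventions are assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def plane_constraints(n1, d1, n2, d2):
    """R1 aligning the first-plane normal n1 onto the second-plane normal n2,
    plus the basis (mu1, mu2, mu3) and k1 used to parameterise t."""
    n1 = n1 / np.linalg.norm(n1)
    n2 = n2 / np.linalg.norm(n2)
    axis = np.cross(n1, n2)
    theta = np.arccos(np.clip(n1 @ n2, -1.0, 1.0))
    # (degenerate case of parallel normals, axis ~ 0, is not handled here)
    R1 = Rotation.from_rotvec(axis / np.linalg.norm(axis) * theta).as_matrix()

    # mu1 = n2, mu2/mu3 span the plane orthogonal to n2
    mu1 = n2
    mu2 = np.cross(n2, np.array([1.0, 0.0, 0.0]))
    if np.linalg.norm(mu2) < 1e-6:             # n2 nearly parallel to the x-axis
        mu2 = np.cross(n2, np.array([0.0, 1.0, 0.0]))
    mu2 /= np.linalg.norm(mu2)
    mu3 = np.cross(mu1, mu2)
    k1 = d2 - d1                               # fixed component of t along mu1
    return R1, (mu1, mu2, mu3), k1
```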
In the embodiment, the constrained R and t between the first coordinate system and the second coordinate system are calculated by utilizing the constraint relation between the first plane and the second plane, so that an accurate basis is provided for subsequent calculation.
FIG. 6 is a flowchart illustrating steps for calculating a matching result according to an embodiment of the present invention; as shown in fig. 6, the step of calculating the matching result in the embodiment of the present invention includes:
Step S41: and comparing the point clouds of the first image and the second image, comparing the density of the point clouds, marking the point clouds with lower density as first characteristic points, and marking the point clouds with higher density as target points.
In this step, first, a common view angle region in a first image and a second image captured by a laser radar and a depth camera is determined. In this region, the densities of the two point clouds are compared. The point cloud density refers to the number of points in a unit area or volume, reflecting the level of detail of the object surface. And marking the point cloud area with lower density as a first characteristic point. The higher density point cloud area is marked as the target point. The target point refers to all point clouds of higher density and thus can be beyond the common viewing angle area. The matching will be performed subsequently with reference to the first feature point.
Step S42: and screening the target points according to the first characteristic points and the height threshold value to obtain the matchable points.
In this step, the target point is screened using the first feature point and a preset height threshold. The height threshold may be set according to the actual application scenario, for example, if the point cloud on the ground or the wall is of interest, the threshold may be set according to the height range of these faces. Of course, a fixed value may be set as the height threshold according to the characteristics of the lidar or the depth camera. After screening, target points corresponding to the first feature points and meeting the height threshold condition are obtained, and the points are marked as matchable points.
Step S43: constructing a first characteristic histogram for the first characteristic points and constructing a second characteristic histogram for the matchable points; the characteristic histogram comprises the height and the direction parameters of adjacent points.
In this step, the feature histogram is a data structure for describing the features of the point cloud; it represents the spatial distribution and local characteristics of the point cloud in the form of a histogram. The feature histogram includes height information and direction parameters. The height information directly reflects the spatial position of the points, while the direction parameters describe the relative directional relationship between each point and its neighboring points, such as the normal direction of the point cloud surface or the local gradient direction of the point. The direction parameter may also be the cosine of the angle between the line connecting the first feature point to an adjacent point and the ground normal vector. For example, when there are 5 bins per feature, each point has a 5 × 5 = 25-dimensional feature vector. For the first feature points and the matchable points, their height and direction parameters relative to the neighboring points are calculated respectively, and these parameters are counted into the feature histograms.
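One plausible construction of such height/direction histograms (the neighbourhood size, the 5-bin choice and the normalisation are assumptions, mirroring the 5 × 5 = 25-dimensional example above):

```python
import numpy as np
from scipy.spatial import cKDTree

def point_feature_histograms(points, ground_normal, k=20, n_bins=5):
    """25-D descriptor per point: joint histogram of neighbour height and of
    cos(angle) between the point-to-neighbour direction and the ground normal."""
    tree = cKDTree(points)
    g = ground_normal / np.linalg.norm(ground_normal)
    feats = np.zeros((len(points), n_bins * n_bins))
    heights = points @ g                              # signed height of every point
    h_edges = np.linspace(heights.min(), heights.max(), n_bins + 1)
    c_edges = np.linspace(-1.0, 1.0, n_bins + 1)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k + 1)
        nbrs = points[idx[1:]]                        # drop the query point itself
        d = nbrs - p
        cosang = (d @ g) / (np.linalg.norm(d, axis=1) + 1e-9)
        hist, _, _ = np.histogram2d(heights[idx[1:]], cosang,
                                    bins=[h_edges, c_edges])
        feats[i] = hist.ravel() / max(hist.sum(), 1.0)
    return feats
```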
Step S44: and matching according to the first characteristic histogram and the second characteristic histogram to obtain a pairing point set of the first characteristic point and the matchable point.
In this step, the first feature point and the matchable point are matched using the feature histogram constructed in step S43. By comparing the similarity of the feature histograms, a distance value is determined, and a matchable point that best matches each first feature point can be found. And combining the successfully matched first characteristic points and the matchable points into a matching point pair. These matching point pairs will be used in subsequent point cloud registration, coordinate transformation or three-dimensional reconstruction tasks.
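The pairing itself can then be a nearest-neighbour search over histogram distances; a hedged sketch (the Euclidean metric and the rejection threshold are assumptions):

```python
import numpy as np
from scipy.spatial.distance import cdist

def match_by_histogram(feat_src, feat_dst, max_dist=1.0):
    """Pair each first feature point with the matchable point whose histogram
    is closest (Euclidean distance here; other metrics are possible) and
    reject weak matches; returns an array of (src_index, dst_index) pairs."""
    d = cdist(feat_src, feat_dst, metric="euclidean")
    nearest = d.argmin(axis=1)
    keep = d[np.arange(len(feat_src)), nearest] < max_dist
    return np.column_stack([np.nonzero(keep)[0], nearest[keep]])
```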
The method and the device for matching the point cloud quickly according to the characteristics of the point cloud are suitable for matching the residual point cloud after the plane is removed, and have the advantages of simplicity, high efficiency, robustness and adaptation to the current characteristics.
FIG. 7 is a flowchart illustrating another step of calculating a matching result according to an embodiment of the present invention. As shown in fig. 7, compared with the previous embodiment, the step of calculating the matching result in the embodiment of the present invention further includes:
step S45: and removing mismatching of the matching point pairs through a random sample consistency algorithm.
In this step, after the construction of the matching point pairs is completed, in order to further improve the accuracy and robustness of matching, it is necessary to remove those mismatching point pairs that are caused by noise, occlusion, or other reasons. Various methods may be employed to remove the mismatch. Here we can use the random sample consensus algorithm (RANSAC) as an example for illustration.
A detailed step of removing the mismatching point pairs using the RANSAC algorithm:
Selecting a model and setting a threshold value: first, a mathematical model needs to be determined to describe the relationship between the correct matching point pairs. In the case of point cloud matching, this model may be a rigid body transformation (including rotation and translation) that describes the coordinate transformation from one point cloud to another. Then, a threshold is set for determining whether a matching point pair satisfies the model.
Random sampling and model fitting: a portion of the pairs of points are randomly extracted from all of the pairs of matching points and used to fit the model selected above. This typically involves solving an optimization problem to minimize the error between all matching point pairs and the fitted model.
Calculating an interior point and an exterior point: after fitting the model, the errors of all matching point pairs with the model are calculated. If the error of a certain point pair is less than the set threshold, it is considered an interior point (i.e., the correct matching point pair); otherwise, it is considered an outlier (i.e., a mismatching point pair).
Iteration and optimization: the above-described process of random sampling, model fitting, and interior-exterior point computation is repeated until a certain stopping condition is met (e.g., a maximum number of iterations is reached or a sufficient number of interior points are found). In each iteration, the model can be re-fitted from the found interior points and new interior and exterior points calculated. This process is an iterative optimization process aimed at finding a model that best describes all the correct matching point pairs.
Outputting a result: after the iteration is completed, the model with the most interior points is selected as a final result, and the interior points corresponding to the model are output as a final matching point pair. These matching point pairs have removed most of the mismatching and thus can be used for subsequent three-dimensional reconstruction, scene understanding, etc.
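A compact sketch of the RANSAC procedure listed above, using a rigid transform fitted by the Kabsch/SVD method as the model (thresholds and iteration counts are illustrative):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def ransac_filter_matches(src, dst, n_iter=500, thresh=0.05):
    """Reject mismatched pairs: repeatedly fit a rigid transform to 3 random
    pairs and keep the largest consensus set, as in the steps above."""
    best_inliers = np.array([], dtype=int)
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        sample = rng.choice(len(src), size=3, replace=False)
        R, t = kabsch(src[sample], dst[sample])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = np.nonzero(err < thresh)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # final model re-fitted on all inliers of the best iteration
    R, t = kabsch(src[best_inliers], dst[best_inliers])
    return best_inliers, R, t
```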
By removing the mismatching point pairs by using the RANSAC algorithm, the accuracy and the robustness of point cloud matching can be remarkably improved, and a more reliable data base is provided for subsequent applications.
Fig. 8 is a schematic structural diagram of a laser radar and depth camera calibration system according to an embodiment of the present invention. As shown in fig. 8, a laser radar and depth camera calibration system according to an embodiment of the present invention includes:
The shooting module is used for obtaining a first image by using a laser radar and obtaining a second image by using a depth camera; wherein the first image and the second image both comprise the ground or the right facing wall surface;
the extraction module is used for extracting a first plane under a first coordinate system generated by the point cloud of the first image and extracting a second plane under a second coordinate system generated by the point cloud of the second image;
the first calculation module is used for calculating a constrained rotation vector R and translation vector t between the first coordinate system and the second coordinate system according to the rotation matrix R1 between the first plane and the second plane;
the matching module is used for matching the point clouds outside the first plane and the second plane to obtain a matching result;
The second calculation module is used for combining the rotation vector R, the translation vector t and the matching result to perform nonlinear iterative optimization on the matching relation, so as to obtain a final external parameter T_l^n.
According to the embodiment, the geometric features of the point cloud data and the matching algorithm are fully utilized, so that the registration problem between the laser radar and the depth camera can be effectively solved, accurate external parameter calibration is realized under the condition of different fields of view and point cloud densities, and an accurate external parameter result is provided.
The embodiment of the invention also provides laser radar and depth camera calibration equipment, which comprises a processor and a memory in which executable instructions of the processor are stored, wherein the processor is configured to perform the steps of the laser radar and depth camera calibration method via execution of the executable instructions.
As described above, the embodiment can effectively solve the registration problem between the laser radar and the depth camera by fully utilizing the geometric features and the matching algorithm of the point cloud data, realize accurate external parameter calibration under the condition of different fields of view and point cloud densities, and provide accurate external parameter results.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module" or "platform."
Fig. 9 is a schematic structural diagram of a laser radar and depth camera calibration device according to an embodiment of the present invention. An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 9. The electronic device 600 shown in fig. 9 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 9, the electronic device 600 is in the form of a general purpose computing device. Components of electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including memory unit 620 and processing unit 610), a display unit 640, etc.
Wherein the memory unit stores program code that can be executed by the processing unit 610, such that the processing unit 610 performs the steps according to various exemplary embodiments of the present invention described in the above-mentioned one of the laser radar and depth camera calibration methods section of the present specification. For example, the processing unit 610 may perform the steps as shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 6201 and/or cache memory unit 6202, and may further include Read Only Memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 630 may be a local bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 600, and/or any device (e.g., router, modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, electronic device 600 may communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 660. Network adapter 660 may communicate with other modules of electronic device 600 over bus 630. It should be appreciated that although not shown in fig. 9, other hardware and/or software modules may be used in connection with electronic device 600, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
The embodiment of the invention also provides a computer readable storage medium for storing a program, and the steps of the laser radar and depth camera calibration method are realized when the program is executed. In some possible embodiments, the aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the above description of a method for lidar and depth camera calibration, when the program product is run on a terminal device.
As shown above, the embodiment can effectively solve the registration problem between the laser radar and the depth camera by fully utilizing the geometric features and the matching algorithm of the point cloud data, realize accurate external parameter calibration under the condition of different fields of view and point cloud densities, and provide accurate external parameter results.
Fig. 10 is a schematic structural view of a computer-readable storage medium in an embodiment of the present invention. Referring to fig. 10, a program product 800 for implementing the above-described method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable storage medium may also be any readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected over the Internet using an Internet service provider).
According to the embodiment, the geometric features of the point cloud data and the matching algorithm are fully utilized, so that the registration problem between the laser radar and the depth camera can be effectively solved, accurate external parameter calibration is realized under the condition of different fields of view and point cloud densities, and an accurate external parameter result is provided.
In the present specification, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to one another. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the invention.

Claims (10)

1. The laser radar and depth camera calibration method is characterized by comprising the following steps:
Step S1: obtaining a first image using a lidar and obtaining a second image using a depth camera; wherein the first image and the second image both comprise the ground or the right facing wall surface;
Step S2: extracting a first plane under a first coordinate system generated by the point cloud of the first image, and extracting a second plane under a second coordinate system generated by the point cloud of the second image;
Step S3: calculating a rotation vector R and a translation vector t constrained between the first coordinate system and the second coordinate system according to the rotation matrix R1 between the first plane and the second plane;
Step S4: matching the point clouds outside the first plane and the second plane to obtain a matching result;
Step S5: combining the rotation vector R, the translation vector t and the matching result, and performing nonlinear iterative optimization on the matching relation to obtain a final external parameter T_l^n.
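A minimal illustrative sketch of the plane extraction in step S2 follows, written in Python. The patent does not specify the fitting method; a RANSAC plane fit is assumed here, and the function name extract_plane, the iteration count and the distance threshold are illustrative choices, not taken from the claims.

import numpy as np

def extract_plane(points, n_iters=500, dist_thresh=0.02, rng=None):
    # Fit the dominant plane of an (N, 3) point cloud with a simple RANSAC loop.
    # Returns a unit normal n and offset d such that n . p + d ~ 0 for inliers,
    # together with the boolean inlier mask.
    rng = np.random.default_rng() if rng is None else rng
    best_mask, best_n, best_d = None, None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -float(np.dot(n, p0))
        mask = np.abs(points @ n + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_n, best_d = mask, n, d
    return best_n, best_d, best_mask

In this sketch the first plane would come from the laser radar point cloud and the second plane from the depth camera point cloud, each expressed in its own coordinate system.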
2. The method for calibrating a laser radar and a depth camera according to claim 1, wherein step S1 comprises:
Step S11: fixing the relative positions of the laser radar and the depth camera, synchronously shooting a first photo, and identifying the ground in the first photo; if the ground is identified, marking a first photo shot by the laser radar as a first image, and marking the first photo shot by the depth camera as a second image; otherwise, executing the next step;
Step S12: rotating the laser radar and the depth camera through one full turn, continuously taking second photos during the rotation and extracting planes from the second photos; if a directly facing wall surface is obtained, marking the second photos captured at that moment as the first image and the second image respectively; otherwise, executing the next step;
Step S13: identifying the ground in the second photos, and selecting the pair of second photos with the largest ground area, the selected second photos being marked as the first image and the second image respectively.
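Assuming the extract_plane helper sketched after claim 1, the frame selection of steps S11 to S13 could be approximated as below; treating "ground" as a dominant plane whose normal is close to vertical and scoring frames by their ground inlier count are assumptions made only for illustration.

import numpy as np

def pick_best_frame(frames, up=(0.0, 0.0, 1.0), angle_thresh_deg=15.0):
    # frames: list of (lidar_points, camera_points) pairs captured while rotating.
    # Returns the pair whose detected ground plane covers the most points.
    up = np.asarray(up)
    best_pair, best_score = None, -1
    for lidar_pts, cam_pts in frames:
        n, d, inliers = extract_plane(lidar_pts)
        tilt_deg = np.degrees(np.arccos(min(abs(float(n @ up)), 1.0)))
        score = int(inliers.sum()) if tilt_deg < angle_thresh_deg else 0
        if score > best_score:
            best_pair, best_score = (lidar_pts, cam_pts), score
    return best_pair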
3. The method for calibrating a laser radar and a depth camera according to claim 1, wherein step S3 comprises:
Step S31: calculating a rotation axis n = n1 × n2 and a rotation angle θ = arccos(n1 · n2) to obtain the rotation matrix R1 between the first plane and the second plane; wherein n1 is the normal vector of the first plane and n2 is the normal vector of the second plane;
Step S32: obtaining two vectors orthogonal to n2; n2 and the two orthogonal vectors are denoted in turn as μ1, μ2 and μ3, and the translation vector t is written as t = k1μ1 + k2μ2 + k3μ3, where k1 = d2 − d1, d1 is the plane parameter of the second plane and d2 is the plane parameter of the first plane.
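The two formulas of claim 3 translate into code directly; the sketch below applies Rodrigues' formula to build R1 from the axis n1 × n2 and the angle arccos(n1 · n2), and leaves the in-plane coefficients k2 and k3 free, since they are only fixed by the later point matching. Function and variable names are illustrative.

import numpy as np

def rotation_between_planes(n1, n2):
    # R1 such that R1 @ n1 ~ n2, built from axis-angle via Rodrigues' formula.
    axis = np.cross(n1, n2)
    s = np.linalg.norm(axis)
    c = float(np.clip(np.dot(n1, n2), -1.0, 1.0))
    if s < 1e-9:
        if c > 0:
            return np.eye(3)                       # normals already aligned
        axis = np.cross(n1, [1.0, 0.0, 0.0])       # anti-parallel: any orthogonal axis works
        if np.linalg.norm(axis) < 1e-6:
            axis = np.cross(n1, [0.0, 1.0, 0.0])
        axis, theta = axis / np.linalg.norm(axis), np.pi
    else:
        axis, theta = axis / s, np.arccos(c)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def constrained_translation(n2, d1, d2, k2=0.0, k3=0.0):
    # t = k1*mu1 + k2*mu2 + k3*mu3 with mu1 = n2 and k1 = d2 - d1.
    mu1 = np.asarray(n2) / np.linalg.norm(n2)
    mu2 = np.cross(mu1, [1.0, 0.0, 0.0])
    if np.linalg.norm(mu2) < 1e-6:                 # n2 nearly along x: use another helper axis
        mu2 = np.cross(mu1, [0.0, 1.0, 0.0])
    mu2 = mu2 / np.linalg.norm(mu2)
    mu3 = np.cross(mu1, mu2)
    return (d2 - d1) * mu1 + k2 * mu2 + k3 * mu3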
4. The method for calibrating a laser radar and a depth camera according to claim 1, wherein in step S4, the point clouds outside the first plane and the second plane are matched according to the distance, the normal vector and the curvature, so as to obtain a matching result.
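The claim does not state how the normal vector and curvature of each point are obtained; one common choice, sketched below under that assumption, is a principal component analysis of every point's local neighbourhood, where the eigenvector of least variance gives the normal and the surface-variation ratio serves as a curvature estimate. The neighbourhood size and the function name are illustrative.

import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(points, k=20):
    # Per-point normal and curvature proxy from the covariance of k nearest neighbours.
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    curvature = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        q = points[nbrs] - points[nbrs].mean(axis=0)
        evals, evecs = np.linalg.eigh(q.T @ q)     # eigenvalues in ascending order
        normals[i] = evecs[:, 0]                   # direction of least variance
        curvature[i] = evals[0] / max(float(evals.sum()), 1e-12)
    return normals, curvature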
5. The method for calibrating a laser radar and a depth camera according to claim 1, wherein step S4 comprises:
Step S41: comparing the point clouds of the first image and the second image in terms of density, marking the points of the lower-density point cloud as first characteristic points, and marking the points of the higher-density point cloud as target points;
Step S42: screening the target points according to the first characteristic points and the height threshold value to obtain matchable points;
Step S43: constructing a first characteristic histogram for the first characteristic points and constructing a second characteristic histogram for the matchable points; the characteristic histogram comprises height and direction parameters of adjacent points;
Step S44: matching according to the first characteristic histograms and the second characteristic histograms to obtain a set of paired first characteristic points and matchable points.
6. The method for calibrating a laser radar and a depth camera according to claim 5, further comprising:
Step S45: removing mismatches from the matched point pairs by means of a random sample consensus (RANSAC) algorithm.
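An illustrative sketch of steps S41 to S45 follows: each sparse-cloud feature point is described by a small histogram of its neighbours' relative heights and bearing angles, the descriptors are matched across the two clouds, and mismatches are rejected with a RANSAC rigid fit (Kabsch on three-point samples). Bin counts, thresholds and all names are assumptions, not taken from the patent.

import numpy as np
from scipy.spatial import cKDTree

def point_histogram(points, i, nbrs, n_bins=8):
    # Normalised histogram of neighbour heights and horizontal directions around point i.
    rel = points[nbrs] - points[i]
    h_hist, _ = np.histogram(rel[:, 2], bins=n_bins, range=(-0.5, 0.5))
    a_hist, _ = np.histogram(np.arctan2(rel[:, 1], rel[:, 0]), bins=n_bins, range=(-np.pi, np.pi))
    desc = np.concatenate([h_hist, a_hist]).astype(float)
    return desc / max(desc.sum(), 1e-12)

def match_by_histogram(feat_pts, target_pts, k=15):
    # Pair every feature point with the target point whose descriptor is closest (L2).
    f_idx = cKDTree(feat_pts).query(feat_pts, k=k)[1]
    t_idx = cKDTree(target_pts).query(target_pts, k=k)[1]
    f_desc = np.array([point_histogram(feat_pts, i, f_idx[i]) for i in range(len(feat_pts))])
    t_desc = np.array([point_histogram(target_pts, i, t_idx[i]) for i in range(len(target_pts))])
    nearest = np.argmin(np.linalg.norm(f_desc[:, None, :] - t_desc[None, :, :], axis=2), axis=1)
    return np.stack([np.arange(len(feat_pts)), nearest], axis=1)

def ransac_filter(src, dst, pairs, n_iters=300, thresh=0.05, rng=None):
    # Keep only pairs consistent with a rigid transform fitted to random 3-point samples.
    rng = np.random.default_rng() if rng is None else rng
    best = np.zeros(len(pairs), dtype=bool)
    for _ in range(n_iters):
        sample = pairs[rng.choice(len(pairs), 3, replace=False)]
        a, b = src[sample[:, 0]], dst[sample[:, 1]]
        A, B = a - a.mean(0), b - b.mean(0)
        U, _, Vt = np.linalg.svd(A.T @ B)          # Kabsch on the sample
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = b.mean(0) - R @ a.mean(0)
        err = np.linalg.norm(src[pairs[:, 0]] @ R.T + t - dst[pairs[:, 1]], axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return pairs[best]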
7. The method for calibrating a laser radar and a depth camera according to claim 1, wherein in step S5 nonlinear optimization is carried out according to the formula to obtain the final external parameter T_l^n; wherein P denotes the point set of the second plane in the second coordinate system, P' denotes the point set of the first plane in the first coordinate system, and pi and pi' denote points on the second plane and the first plane respectively; I denotes the point cloud set outside the second plane in the second coordinate system, and I' denotes the point cloud set outside the first plane in the first coordinate system; n·(Rp + t) + d is the point-to-plane distance formula; n1 and d1 are the parameters of the second plane, and n2 and d2 are the parameters of the first plane.
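A minimal sketch of the refinement in step S5 follows, assuming that the unreproduced cost combines the point-to-plane term n·(Rp + t) + d over the plane points with a point-to-point term over the matched non-plane pairs, and that the rotation is parametrised as a rotation vector; the weighting, the solver choice and all names are assumptions.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(plane_pts, n, d, src_pts, dst_pts, R0, t0, w_plane=1.0):
    # plane_pts: laser radar points on the first plane; (n, d): second plane in camera coordinates.
    # (src_pts, dst_pts): matched non-plane point pairs (laser radar, depth camera).
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        plane_res = plane_pts @ (R.T @ n) + (n @ t + d)        # n.(R p + t) + d per plane point
        pair_res = (src_pts @ R.T + t - dst_pts).ravel()       # R p + t - q per matched pair
        return np.concatenate([w_plane * plane_res, pair_res])
    x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), np.asarray(t0, dtype=float)])
    sol = least_squares(residuals, x0)
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]

In this sketch the initial values R0 and t0 would come from the plane constraint of claim 3, and the matched pairs from the histogram matching of claims 5 and 6.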
8. A lidar and depth camera calibration system for implementing the lidar and depth camera calibration method of any of claims 1 to 7, comprising:
The shooting module is used for obtaining a first image by using a laser radar and obtaining a second image by using a depth camera; wherein the first image and the second image both comprise the ground or a directly facing wall surface;
the extraction module is used for extracting a first plane under a first coordinate system generated by the point cloud of the first image and extracting a second plane under a second coordinate system generated by the point cloud of the second image;
the first calculation module is used for calculating a rotation vector R and a translation vector t constrained between the first coordinate system and the second coordinate system according to the rotation matrix R1 between the first plane and the second plane;
the matching module is used for matching the point clouds outside the first plane and the second plane to obtain a matching result;
The second calculation module is used for combining the rotation vector R, the translation vector t and the matching result to perform nonlinear iterative optimization on the matching relation so as to obtain a final external parameter T_l^n.
9. A lidar and depth camera calibration device, comprising:
A processor;
a memory having stored therein executable instructions of the processor;
Wherein the processor is configured to perform the steps of the lidar and depth camera calibration method of any of claims 1 to 7 via execution of the executable instructions.
10. A computer-readable storage medium storing a program, wherein the program when executed implements the steps of the lidar and depth camera calibration method of any of claims 1 to 7.
CN202410590766.3A 2024-05-13 2024-05-13 Laser radar and depth camera calibration method, system, equipment and storage medium Pending CN118429437A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410590766.3A CN118429437A (en) 2024-05-13 2024-05-13 Laser radar and depth camera calibration method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410590766.3A CN118429437A (en) 2024-05-13 2024-05-13 Laser radar and depth camera calibration method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118429437A true CN118429437A (en) 2024-08-02

Family

ID=92331180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410590766.3A Pending CN118429437A (en) 2024-05-13 2024-05-13 Laser radar and depth camera calibration method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118429437A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118887299A (en) * 2024-09-27 2024-11-01 第六镜视觉科技(西安)有限公司 A joint calibration method, device and equipment for 2D camera and 3D camera

Similar Documents

Publication Publication Date Title
CN112669389B (en) Automatic calibration system based on visual guidance
WO2022120567A1 (en) Automatic calibration system based on visual guidance
CN112183171B (en) Method and device for building beacon map based on visual beacon
David et al. Simultaneous pose and correspondence determination using line features
Singh et al. Bigbird: A large-scale 3d database of object instances
CA2887763C (en) Systems and methods for relating images to each other by determining transforms without using image acquisition metadata
CN110111388B (en) Three-dimensional object pose parameter estimation method and visual equipment
CN111127422A (en) Image annotation method, device, system and host
CN109074666B (en) System and method for estimating pose of non-texture object
CN110070615A (en) A kind of panoramic vision SLAM method based on polyphaser collaboration
CN107358629B (en) An indoor mapping and localization method based on target recognition
CN108416385A (en) It is a kind of to be positioned based on the synchronization for improving Image Matching Strategy and build drawing method
WO2015154008A1 (en) System and method for extracting dominant orientations from a scene
CN110717861A (en) Image stitching method, apparatus, electronic device and computer-readable storage medium
TW202238449A (en) Indoor positioning system and indoor positioning method
CN116844124A (en) Three-dimensional target detection frame annotation method, device, electronic equipment and storage medium
CN113628284B (en) Methods, devices, systems, electronic equipment and media for generating pose calibration data sets
CN111829522B (en) Instant positioning and map construction method, computer equipment and device
CN118429437A (en) Laser radar and depth camera calibration method, system, equipment and storage medium
Ventura et al. Structure and motion in urban environments using upright panoramas
Thomas et al. A monocular SLAM method for satellite proximity operations
CN119165452A (en) A joint calibration method and device for camera and 4D millimeter wave radar
CN118429438A (en) Laser radar and depth camera combined calibration method, system, equipment and storage medium
CN118429439A (en) Laser radar and depth camera calibration method, system, equipment and storage medium
CN117057086A (en) Three-dimensional reconstruction method, device and equipment based on target identification and model matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination