
CN110232715B - Method, device and system for self calibration of multi-depth camera - Google Patents


Info

Publication number
CN110232715B
CN110232715B
Authority
CN
China
Prior art keywords
depth
deformed
camera
image
depth image
Prior art date
Legal status
Active
Application number
CN201910379483.3A
Other languages
Chinese (zh)
Other versions
CN110232715A (en)
Inventor
许星 (Xu Xing)
郭胜男 (Guo Shengnan)
刘龙 (Liu Long)
Current Assignee
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date
Filing date
Publication date
Application filed by Orbbec Inc filed Critical Orbbec Inc
Priority to CN201910379483.3A
Publication of CN110232715A
Application granted
Publication of CN110232715B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10028 — Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention is applicable to the technical field of optics and electronics and provides a method, a device and a system for self-calibration of multiple depth cameras. The method comprises the following steps: receiving a plurality of depth images acquired by a plurality of depth cameras, the depth images having a common field of view; judging whether each depth image is deformed; and updating the camera pose parameters of each depth camera whose depth image is deformed according to the judgment result. Embodiments of the invention thereby achieve rapid and accurate calibration of multiple depth cameras during use.

Description

Method, device and system for self calibration of multi-depth camera
Technical Field
The invention relates to the technical field of optics and electronics, in particular to a method, a device and a system for self calibration of a multi-depth camera.
Background
The depth camera can acquire a depth image of a target, enabling functions such as 3D modeling, human-machine interaction, obstacle-avoidance navigation and face recognition to be realized on top of the depth image. Depth cameras have accordingly been widely used in robotics, consumer electronics, AR/VR, and other fields. However, during use, factors such as temperature variation and structural deformation inevitably introduce measurement errors, which reduce the measurement accuracy of the depth camera and make its results unreliable.
Some solutions to this problem exist in the prior art: for example, a series of parameters of the erroneous depth image are first calculated and the depth camera is then corrected, or the depth camera is corrected against a pre-stored reference image. These methods are relatively cumbersome and, in a multi-camera system, cannot achieve quick and accurate calibration.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a system for self-calibration of a multi-depth camera, so as to improve efficiency and accuracy of self-calibration of the multi-depth camera.
A first aspect of the present invention provides a method of multi-depth camera self-calibration, comprising the following steps:
receiving a plurality of depth images acquired by a plurality of depth cameras, wherein the plurality of depth images have a common field of view;
judging whether each depth image is deformed or not;
and updating the camera pose parameters of the depth camera with the deformed depth image according to the judgment result of whether each depth image is deformed.
A second aspect of the present invention provides an apparatus for self-calibration of a multi-depth camera, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor implements the steps of the method according to the first aspect when executing the computer program.
A third aspect of the present invention provides a system for self-calibration of a multi-depth camera, comprising: a plurality of depth cameras for acquiring a plurality of depth images, and an apparatus as described in the second aspect.
A fourth aspect of the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to the first aspect.
The method comprises the steps of receiving a plurality of depth images collected by a plurality of depth cameras, wherein the depth images have a common field of view; then judging whether each depth image is deformed or not; and updating the camera pose parameters of the depth camera with the deformed depth image according to the judgment result of whether each depth image is deformed, thereby realizing the rapid and accurate calibration of the multi-depth camera in the use process.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required by the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram illustrating a self-calibration principle of a multi-depth camera according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for self-calibration of a multi-depth camera according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a multi-depth camera distortion correction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another multi-depth camera distortion correction according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In the description of the embodiments, multi-camera self-calibration is explained using an infrared structured-light depth camera only as an example; it should be understood that the invention is equally applicable to the self-calibration of any other type of depth camera. In addition, "a plurality" means two or more unless specifically limited otherwise.
In a depth image acquired by a depth camera, the value at each pixel represents the distance between the corresponding spatial point and the camera. The quality of a depth image comprises precision and accuracy. Precision refers to the variation among multiple depth images acquired while the relative position between the depth camera and the target is fixed: the smaller this variation, the higher the precision, i.e., the higher the measurement consistency and stability of the camera. Accuracy refers to the difference between the measured value and the true value: the smaller the difference, the higher the accuracy. Here, the measured value is the value recorded in the depth image, and the true value is the actual distance between the target and the depth camera.
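For illustration only (not part of the original disclosure), a minimal Python sketch of these two quality measures, using made-up readings of a single pixel:

```python
import numpy as np

# Hypothetical example: 10 repeated depth readings (in mm) of one pixel
# while camera and target are held fixed; the true distance is 1000 mm.
readings = np.array([1002.1, 1001.8, 1002.3, 1001.9, 1002.0,
                     1002.2, 1001.7, 1002.1, 1002.0, 1001.9])
true_depth = 1000.0

precision = readings.std()                           # small spread -> high precision
accuracy_error = abs(readings.mean() - true_depth)   # small bias -> high accuracy

print(f"precision (std): {precision:.2f} mm")        # ~0.17 mm: very consistent
print(f"accuracy error : {accuracy_error:.2f} mm")   # 2.00 mm systematic bias
```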
In some applications, multiple depth cameras are required to cover a larger field of view: each camera acquires a depth image independently, and the images are then stitched and fused. When a depth camera deforms due to various factors, the accuracy of its depth image decreases, depth values are computed with errors, and a true three-dimensional image of the large field of view ultimately cannot be obtained.
Taking a structured-light depth camera as an example: when the projection module and the acquisition module are not deformed, their optical axes are parallel, their relative position is determined by the extrinsic calibration parameters, and matching between the reference speckle pattern and the actually captured speckle pattern only needs to be computed along the baseline direction between them. When the relative position of the projection and acquisition modules deforms (including rotational and translational deformation), the extrinsic parameters change, and during matching either no point of sufficiently high similarity can be found along the baseline, or the match found carries a large error. The embodiments of the invention therefore describe a technical solution to the inaccurate depth maps caused by deformation (displacement and/or deflection, etc.) between the projection and acquisition modules due to factors such as temperature and drops during use.
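As an illustrative sketch of matching along the baseline (the function and its parameters are assumptions for illustration, not the patent's exact algorithm), each block of the captured speckle image is compared against reference blocks shifted along the baseline (row) direction, and the disparity with the highest similarity wins:

```python
import numpy as np

def match_along_baseline(ref, cur, row, col, half=5, max_disp=64):
    """Find the disparity along the baseline (row) direction that maximizes
    zero-normalized cross-correlation (ZNCC) between a block of the captured
    speckle image `cur` and the reference speckle image `ref`.
    Assumes the block lies fully inside both images."""
    block = cur[row-half:row+half+1, col-half:col+half+1].astype(np.float64)
    block = (block - block.mean()) / (block.std() + 1e-9)
    best_disp, best_score = 0, -np.inf
    for d in range(max_disp):
        c = col - d
        if c - half < 0:
            break
        cand = ref[row-half:row+half+1, c-half:c+half+1].astype(np.float64)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        score = (block * cand).mean()   # ZNCC similarity in [-1, 1]
        if score > best_score:
            best_disp, best_score = d, score
    return best_disp, best_score       # depth = fx * baseline / disparity
```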
Fig. 1 illustrates the self-calibration principle of multiple depth cameras according to an embodiment of the invention. As shown in Fig. 1, when at least two depth cameras 101 and 102 simultaneously image a scene, each acquires a partial depth image 103, 104 of the target scene. Owing to the relative placement of the cameras, the images share scene content: depth images 103 and 104 have a common field of view 105. If cameras 101 and 102 deform, depth images 103 and 104 deviate from the real target scene, and even within the common field of view 105 the depth data of the two images differ.
When depth cameras 101 and 102 are structured-light depth cameras, projection modules 106 and 107 project infrared structured light onto the target scene, and acquisition modules 108 and 109 capture the corresponding speckle patterns 103 and 104, which share the common field of view 105. The processing device 110 receives the speckle patterns from the acquisition modules and analyzes them to correct the deformation error. The processing device 110 may be connected to depth cameras 101 and 102 to form a separate processing system, or may be integrated into the depth cameras. It should be noted that, in other embodiments of the invention, the processing device 110 may be part of a terminal device, or itself a terminal device with computing capability.
In the case where at least one depth camera is deformed and at least one is not, the idea of this embodiment is to take an undeformed depth image as the reference image, use the feature pixels of the common field of view 105 to correct the pose parameters of the deformed camera against that reference, and then use the corrected pose parameters for correct depth computation on subsequent frames. The pose parameters are the camera's position matrices, i.e., the rotation and translation matrices. To measure the depth of a target, world coordinates must be converted into camera coordinates before the depth value can be computed; this conversion is achieved through the camera's intrinsic and extrinsic parameters, where the intrinsics describe the camera's internal optical elements and the extrinsics are the rotation matrix R and translation matrix T. Self-calibration of the multi-camera system therefore amounts to correcting the rotation and translation matrices.
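A minimal sketch of this world-to-camera conversion, assuming a simple rigid transform (all values below are illustrative):

```python
import numpy as np

def world_to_depth(p_world, R, T):
    """Convert a world point to camera coordinates with the extrinsic
    rotation R (3x3) and translation T (3,); the camera-frame z component
    is the depth value the camera should report for that point."""
    p_cam = R @ p_world + T
    return p_cam[2]

R = np.eye(3)                  # illustrative extrinsics
T = np.array([0.0, 0.0, 0.5])  # camera frame offset 0.5 m along z
print(world_to_depth(np.array([0.0, 0.0, 1.0]), R, T))  # -> 1.5
```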
Fig. 2 is a flowchart of a method for self-calibration of a multi-depth camera according to an embodiment of the present invention, where the method is executed by the processing device 110 shown in fig. 1. As shown in fig. 2, the method of correcting the deformation error includes:
step 201: a plurality of depth images acquired by a plurality of depth cameras is received.
Each depth camera acquires a depth image; the received depth images are those of each camera, and the depth images acquired by different cameras share a common field of view.
As an embodiment of the present invention, referring to FIG. 1, depth images 103 and 104 may be acquired by two depth cameras 101 and 102, which may or may not be adjacent, but the acquired depth images should have a common field of view.
In other embodiments of the present invention, the number of depth cameras may be greater than 2, each depth camera collects at least one depth image, each depth camera sends at least one depth image to the processing device, the processing device receives at least one depth image of each depth camera, and the depth images collected by the different depth cameras have a common field of view.
Step 202: and judging whether the plurality of depth images are deformed currently.
After the depth images of the multiple depth cameras are received, it is judged whether they are currently deformed; that is, for the depth image acquired by each camera, it is determined whether deformation has occurred.
When a depth image is judged to be deformed, the corresponding depth camera, i.e., the camera that acquired it, is deformed. When none of the depth images is judged to be deformed, none of the cameras is deformed, and correction stops.
Specifically, feature extraction is performed on a reference image and each depth image, and whether each depth image is deformed or not is judged according to the extracted feature points.
For each depth image, deformation is determined to have occurred if the extracted feature points are unevenly distributed, the number of holes exceeds a first preset threshold, the average similarity value obtained in matching falls below a second preset threshold, or the similarity change value falls below a third preset threshold.
As an embodiment of the present invention, with continued reference to fig. 1, after receiving the depth images 103 and 104, it may be determined whether there is a distortion in the currently acquired depth image by analyzing a feature point difference between the depth image and the reference speckle image.
Specifically, features are extracted from the spots of the reference speckle pattern and the actually acquired speckle pattern, and whether the projection module and acquisition module of the current depth camera have deformed is judged from the extracted feature points. The current depth map, and hence the depth camera, is deemed deformed when one or more of the following occur (a combined sketch of checks a–c follows item d):
a: The spatial density distribution of the actually acquired speckle pattern is uneven. If the camera has not deformed, the acquired spots should be uniformly distributed; after deformation the distribution becomes uneven, so deformation can be judged by measuring the spatial density distribution of the speckle pattern.
b: Matching is computed along the baseline direction, and the number of holes or noise points exceeds a certain threshold (a hole is a point for which no match of sufficient similarity can be found; noise is typically caused by interference such as ambient light).
c: Matching is computed along the baseline direction and the average similarity value or the similarity change value of the matched feature points is calculated; deformation is considered to have occurred if the average similarity value, or the similarity change value, falls below its set threshold.
d: Matching is computed over a width centered on the baseline; deformation is considered to have occurred when points off the baseline turn out to be more similar.
Step 203: and updating the camera pose parameters of the depth camera with the deformed depth image according to the judgment result of whether each depth image is deformed.
In step 202, the deformed and undeformed depth images are determined, and based on the determination result, the camera pose parameters of the deformed depth camera are updated.
As an embodiment of the present invention, step 203 includes: if at least one of the depth images is judged not to be deformed and at least one is deformed, setting an undeformed depth image as the reference image, and performing deformation error correction on each deformed depth image based on the reference image so as to update the camera pose parameters of the corresponding depth camera.
Illustratively, referring to Fig. 1: if in the first acquired frame depth image 103 is deformed and depth image 104 is not, image 104 is set as the reference map and the deformation error of image 103 is computed against it, so that the camera pose parameters R and T of camera 101, which corresponds to image 103, are updated in real time.
Since depth image 104 serves as the reference map, the region captured by the deformed camera must at least partially coincide with the reference so that error correction can compare the difference between the two, i.e., compute new extrinsic calibration parameters reflecting the translation and rotation. One implementation is to minimize a cost function that reflects the difference between the known coordinates and the calculated coordinates.
The error calculation must therefore be based on the depth information of the pixel point cloud in the common field of view 105. First, the common field of view of the multiple depth images is found, either from the relative positional relationship of the cameras or with a three-dimensional point-cloud registration algorithm (ICP). The relative positional relationship may be the cameras' mounting positions (adjacent or not), their baseline relationship, and so on; any of these guarantees that the acquired depth images overlap, i.e., share a common field of view. The ICP algorithm finds the spatial transformation, i.e., the rotation and translation vectors, between two point-cloud data sets, transforms both into the same coordinate system so that their intersection regions coincide, and thereby determines the common field of view of the depth images.
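A minimal point-to-point ICP sketch in this spirit (nearest neighbours via a k-d tree, rigid fit via the SVD-based Kabsch method; an illustrative assumption, not the patent's prescribed implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration: pair each source point with its nearest destination
    point, then solve for the rigid (R, T) aligning the pairs via SVD."""
    idx = cKDTree(dst).query(src)[1]
    p, q = src, dst[idx]
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                 # proper rotation (det = +1)
    T = q.mean(0) - R @ p.mean(0)
    return R, T

def icp(src, dst, iters=30):
    """Iterate until the clouds overlap; the overlapping region after
    alignment approximates the common field of view."""
    R_tot, T_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        R, T = icp_step(src, dst)
        src = src @ R.T + T            # apply p -> R p + T to all points
        R_tot, T_tot = R @ R_tot, R @ T_tot + T
    return R_tot, T_tot, src
```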
After the common field of view is determined, the undeformed depth image belonging to the common field of view is taken as the reference image, and the camera pose parameters are then obtained by calculation.
As another embodiment of the present invention, step 203 comprises: if all of the depth images are deformed, finding the common field of view of the depth images according to the relative positional relationship of the depth cameras or with a three-dimensional point-cloud registration algorithm; and minimizing the cost function value according to the depth image information of the common field of view, so as to update the camera pose parameters of each depth camera.
Compared with the case where at least one depth map is undeformed, this case is slightly more complex: the cost function is minimized over the depth information measured by the deformed cameras, several sets of camera pose parameters can be obtained by calculation, and these parameters can be optimized, selected, and used according to the actual situation.
Optionally, in other embodiments of the present invention, step 204 follows step 203: calculating the depth information of subsequent frame images according to the updated camera pose parameters.
For example, after calibration is performed based on the first frame from each depth camera, the second frame is a new depth image obtained after error correction, and its depth information is computed with the updated camera pose parameters.
The specific schemes for deformation error correction are described in detail in the following embodiments.
Fig. 3 is a schematic diagram of multi-depth-camera deformation correction according to an embodiment of the invention, corresponding to the case where at least one depth image is undeformed. As shown in Fig. 3, let 310 be the reference image captured by the undeformed depth camera; the depth image 311 captured by the deformed depth camera deviates from reference image 310. The depth value measured by the undeformed camera is Z_0, and the depth value measured by the deformed camera is Z_1.
Following the principle that depth images acquired by different cameras should agree as closely as possible within the common field of view, i.e., that their difference should be minimized, deformation error correction in this embodiment computes the camera pose parameters by minimizing a cost function value, thereby achieving multi-camera self-calibration.
Specifically, according to the depth value Z_0 of the reference image, the depth value Z_1 of the deformed depth image within the common field of view is computed such that the cost function

J = \sum_{i=1}^{m} k \, (Z_0^i - Z_1^i)^2

is minimized, thereby obtaining the camera pose parameters of the corresponding depth camera; wherein k is the deformation coefficient, i is the number of pixel nodes in the common field of view, and Z_1 = f(R, T), where R and T are the camera pose parameters of the depth camera whose depth image is deformed within the common field of view.
Here the deformation coefficient k is a constant, generally taking any value from 0.3 to 0.6, preferably 0.5, and m generally does not exceed the total number of nodes in the common field of view. From Z_0, a Z_1 is computed that minimizes the value of the cost function J, giving the updated camera pose parameters R' and T'; in the next frame, the deformed depth camera computes depth correctly from R' and T'. Solution methods for minimizing the value of J include gradient descent, Newton iteration, the normal equations method, and the like; the specific solution is not repeated here.
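A hedged sketch of this minimization, assuming the rotation is parameterized by a rotation vector and using a simple stand-in for f(R, T) (a real depth camera's mapping also involves its intrinsics and triangulation):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def depth_from_pose(params, pts_world):
    """Stand-in for Z1 = f(R, T): transform the common-field points with the
    candidate pose (rotation vector + translation) and read off the
    camera-frame z component as depth."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    return (pts_world @ R.T + params[3:])[:, 2]

def self_calibrate(pts_world, Z0, x0, k=0.5):
    """Minimize J = sum_i k*(Z0_i - Z1_i)^2 over the 6-DoF pose of the
    deformed camera, starting from the last valid pose x0; returns (R', T')."""
    cost = lambda p: np.sum(k * (Z0 - depth_from_pose(p, pts_world)) ** 2)
    res = minimize(cost, x0, method="Nelder-Mead")
    return Rotation.from_rotvec(res.x[:3]).as_matrix(), res.x[3:]
```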
Fig. 4 is a schematic diagram of another multi-depth-camera deformation correction according to an embodiment of the invention, corresponding to the case where all the depth images are deformed. Let 321 and 322 be the depth images captured by two deformed depth cameras, both deviating from the real image 320; within their common field of view there is a depth difference ΔZ between images 321 and 322. In one embodiment, m pixels are selected in the common-field region, where m is smaller than the total number of pixels in that region, and the actually measured depth values are substituted into the cost function; minimizing the value of the cost function then performs the error correction.
The cost function is

J = \sum_{i=1}^{m} k \, (\Delta Z^i)^2

where k is a deformation coefficient, i is the number of pixel nodes in the common field of view, and the value of m is smaller than the total number of pixel nodes in the common field of view. The depth deviation value is ΔZ = ΔZ_1 + ΔZ_2, with

\Delta Z_1 = f(R_1 + \Delta R_1, T_1 + \Delta T_1) - f(R_1, T_1), \quad \Delta Z_2 = f(R_2 + \Delta R_2, T_2 + \Delta T_2) - f(R_2, T_2),

where R_1 and T_1 are the camera pose parameters of the depth camera corresponding to one deformed depth image, whose depth value is Z_1 = f(R_1, T_1), and R_2 and T_2 are the camera pose parameters of the depth camera corresponding to the other deformed depth image, whose depth value is Z_2 = f(R_2, T_2).
Here Z_1 and Z_2 are the current actual depth values measured by the deformed depth cameras; the depth deviation ΔZ is calculated from the current depth values Z_1 and Z_2, and the value of the cost function J is minimized to obtain the camera pose parameters. Solution methods for minimizing the value of J include gradient descent, Newton iteration, the normal equations method, and the like; the specific solution is not repeated here.
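A corresponding sketch for the all-deformed case, under the assumption that the cost penalizes the per-pixel depth deviation between the two deformed images (the stand-in f(R, T) is as in the previous sketch):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def cam_depth(pose6, pts):
    # Stand-in for Z = f(R, T) of one camera (see the previous sketch).
    R = Rotation.from_rotvec(pose6[:3]).as_matrix()
    return (pts @ R.T + pose6[3:])[:, 2]

def calibrate_pair(pts, x1, x2, k=0.5):
    """Jointly adjust both deformed cameras' poses so that their depth values
    over the m common-field pixels agree: J = sum_i k * (dZ_i)^2 with
    dZ_i = Z1_i - Z2_i. Note that without anchoring one camera this problem
    has a gauge freedom; in practice the initial poses x1, x2 (from the last
    valid calibration) regularize the local search."""
    def cost(p):
        dz = cam_depth(p[:6], pts) - cam_depth(p[6:], pts)  # per-pixel deviation
        return np.sum(k * dz ** 2)
    res = minimize(cost, np.concatenate([x1, x2]), method="Nelder-Mead")
    return res.x[:6], res.x[6:]
```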
The above method may be implemented by a processor, or by a terminal device that includes a processor. A multi-depth-camera deformation error correction system then comprises depth cameras and a processor, each depth camera including a projection module and an acquisition module for acquiring depth images. Besides the processor, a memory is included, which stores, in addition to the reference image (e.g., the reference speckle image), the deformation coefficient k and the like; the memory may be a computer-readable storage medium storing a computer program which, when executed by the processor, implements the steps of the method described above.
The processor receives the current speckle image transmitted by the acquisition module and matches it against the reference speckle image to judge whether the current depth images are deformed. If at least one depth image is deformed and at least one is not, the undeformed depth image is set as the reference image and deformation error correction is performed against it: following the principle of difference minimization, the difference between the deformed depth image and the reference image is used to minimize the cost function value and thereby update the camera pose parameters. If all the current depth maps are judged to be deformed, difference minimization is still applied across the multiple depth maps to minimize the cost function value and update the camera pose parameters. In addition, the true depth information of subsequent frames is computed from the updated pose parameters, completing the correction of the deformation error. The depth computation processor may be integrated in the depth camera or may reside in another computing device independent of the depth camera.
The depth-camera deformation error correction system may further comprise a correction engine built into the processor, so that the processor need not output the depth image to a separate correction engine and can begin correction directly upon receiving the depth image.
The processor may include one or a combination of a Digital Signal Processor (DSP), an Application Processor (AP), a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), and the like; the memory may include one or a combination of Random Access Memory (RAM), Read Only Memory (ROM), flash memory (Flash), and the like. The control and data processing instructions executed by the processing device may be stored in the memory as software or firmware and called by the processor when needed, may be solidified directly into a circuit to form a dedicated circuit (or dedicated processor) that executes the corresponding instructions, or may be implemented as a combination of software and dedicated circuitry. The processing device may also include an input/output interface and/or a network interface to support network communication. In some embodiments of the invention, the processed data is transmitted through the interface to other devices or units in the system, such as a display unit, or to an external terminal device. In other embodiments, the display unit may be combined with one or more processors in the processing device.
The mathematical model above adopts a simplified error model; the corresponding errors in practical applications are more complex. When applying the disclosed method to a specific complex scene, it may be used directly or reasonably adapted on the basis of the inventive idea, and can improve the accuracy of a depth camera to a certain extent. Reasonable variations for specific application scenarios based on the idea of the invention fall within the protection scope of the invention.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be considered limited to these specific details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the concept of the invention, and all of these are considered to be within the protection scope of the invention.

Claims (10)

1. A method of self-calibration of a multi-depth camera, comprising:
receiving a plurality of depth images acquired by a plurality of depth cameras, wherein the plurality of depth images have a common field of view;
judging whether each depth image is deformed or not;
updating the camera pose parameters of the depth camera with the deformed depth image according to the judgment result of whether each depth image is deformed;
the determining whether each depth image is deformed includes:
extracting features of the reference image and each depth image, and judging whether each depth image is deformed or not according to the extracted feature points;
the extracting features of the reference image and each depth image, and judging whether each depth image is deformed according to the extracted feature points, includes:
extracting the features of the spots of the reference speckle pattern and the actually acquired speckle pattern, and judging whether a projection module and an acquisition module in the current depth camera deform or not according to the extracted feature points;
the speckle through the speckle to the speckle pattern that consults the speckle pattern and actually acquire carries out feature extraction, judges whether projection module takes place to warp with the collection module among the present degree of depth camera according to the characteristic point of extracting, includes:
measuring that the spatial density distribution of the speckle pattern is uneven; or that the number of holes or noise points produced by matching calculation along the baseline direction exceeds a certain threshold; or that the average similarity value of the feature points matched along the baseline direction is lower than a set threshold, or that the similarity change value is lower than a set threshold; or performing matching calculation along a width centered on the baseline and finding that points outside the baseline are more similar; in any of these cases, deformation is considered to have occurred.
2. The method of claim 1, wherein updating the camera pose parameters of the depth camera with the deformed depth images according to the determination of whether each depth image is deformed comprises:
and if at least one depth image in the plurality of depth images is judged not to be deformed and at least one depth image is judged to be deformed, setting the depth image which is not deformed as a reference image, and correcting deformation errors of each deformed depth image based on the reference image so as to update the camera pose parameters of the corresponding depth camera.
3. The method of claim 1, wherein updating the camera pose parameters of the depth camera with the deformed depth images according to the determination of whether each depth image is deformed comprises:
if at least one depth image in the plurality of depth images is not deformed and at least one depth image is deformed, searching a common view field of the plurality of depth images according to the relative position relation of the plurality of depth cameras or by using a three-dimensional point cloud registration algorithm;
and taking the depth image which belongs to the common view field and is not deformed as a reference image, and carrying out deformation error correction on the depth image which is deformed in the common view field based on the reference image so as to update the camera pose parameters of the corresponding depth camera.
4. The method of claim 3, wherein finding a common field of view for a plurality of the depth images according to relative positional relationships of a plurality of the depth cameras or using a three-dimensional point cloud registration algorithm comprises:
and determining a common field of view of the plurality of depth images according to the adjacent or non-adjacent position relationship of the plurality of depth cameras and the baseline position relationship.
5. The method of multi-depth camera self-calibration of claim 3, wherein the warping error correcting the warped depth images within the common field of view based on the reference image to update camera pose parameters of the corresponding depth cameras comprises:
according to the depth value Z_0 of the reference image, computing the depth value Z_1 of the depth image deformed within the common field of view such that the cost function

J = \sum_{i=1}^{m} k \, (Z_0^i - Z_1^i)^2

is smallest, thereby obtaining the camera pose parameters of the corresponding depth camera; wherein k is a deformation coefficient, i is the number of pixel nodes in the common field of view, and the value of m is smaller than the total number of pixel nodes in the common field of view; Z_1 = f(R, T), R and T being the rotation R matrix and translation T matrix of the depth camera corresponding to the deformed depth image within the common field of view.
6. The method of claim 1, wherein updating the camera pose parameters of the depth camera with the deformed depth images according to the determination of whether each depth image is deformed comprises:
if the depth images are deformed, finding a common view field of the depth images according to the relative position relation of the depth cameras or by using a three-dimensional point cloud registration algorithm;
and according to the information of the depth image of the common view field, minimizing the cost function value so as to update the camera pose parameter of each depth camera.
7. The method of claim 6, wherein the cost function is:
J = \sum_{i=1}^{m} k \, (\Delta Z^i)^2

wherein k is a deformation coefficient, i is the number of pixel nodes in the common field of view, and the value of m is smaller than the total number of pixel nodes in the common field of view; the depth deviation value is ΔZ = ΔZ_1 + ΔZ_2, with

\Delta Z_1 = f(R_1 + \Delta R_1, T_1 + \Delta T_1) - f(R_1, T_1), \quad \Delta Z_2 = f(R_2 + \Delta R_2, T_2 + \Delta T_2) - f(R_2, T_2);

R_1 and T_1 are the rotation R matrix and translation T matrix of the depth camera corresponding to one deformed depth image, Z_1 is the depth value of that image, Z_1 = f(R_1, T_1); R_2 and T_2 are the rotation R matrix and translation T matrix of the depth camera corresponding to the other deformed depth image, Z_2 is the depth value of that image, Z_2 = f(R_2, T_2); ΔR_1 and ΔT_1 are the amounts of change of the rotation R matrix and translation T matrix of the depth camera corresponding to the one deformed depth image, and ΔR_2 and ΔT_2 are the amounts of change of the rotation R matrix and translation T matrix of the depth camera corresponding to the other deformed depth image.
8. An apparatus for self-calibration of a multi-depth camera, comprising a memory and a processor, the memory having stored therein a computer program operable on the processor, wherein the processor, when executing the computer program, performs the steps of the method according to any one of claims 1 to 7.
9. A system for self-calibration of a multi-depth camera, comprising: a plurality of depth cameras for acquiring a plurality of depth images, and the apparatus of claim 8.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201910379483.3A 2019-05-08 2019-05-08 Method, device and system for self calibration of multi-depth camera Active CN110232715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910379483.3A CN110232715B (en) 2019-05-08 2019-05-08 Method, device and system for self calibration of multi-depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910379483.3A CN110232715B (en) 2019-05-08 2019-05-08 Method, device and system for self calibration of multi-depth camera

Publications (2)

Publication Number Publication Date
CN110232715A CN110232715A (en) 2019-09-13
CN110232715B (en) 2021-11-19

Family

ID=67861169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910379483.3A Active CN110232715B (en) 2019-05-08 2019-05-08 Method, device and system for self calibration of multi-depth camera

Country Status (1)

Country Link
CN (1) CN110232715B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028294B (en) * 2019-10-20 2024-01-16 奥比中光科技集团股份有限公司 Multi-distance calibration method and system based on depth camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101064780A (en) * 2006-04-30 2007-10-31 台湾新力国际股份有限公司 Image stitching accuracy improvement method and device using lens distortion correction
CN107079141A (en) * 2014-09-22 2017-08-18 三星电子株式会社 Image stitching for 3D video
CN107730561A (en) * 2017-10-17 2018-02-23 深圳奥比中光科技有限公司 Depth camera temperature error correction method and system
CN108447097A (en) * 2018-03-05 2018-08-24 清华-伯克利深圳学院筹备办公室 Depth camera scaling method, device, electronic equipment and storage medium
CN108780504A (en) * 2015-12-22 2018-11-09 艾奎菲股份有限公司 Depth-perceptive trinocular camera system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960020A (en) * 2017-05-27 2018-12-07 富士通株式会社 Information processing method and information processing equipment


Also Published As

Publication number Publication date
CN110232715A (en) 2019-09-13

Similar Documents

Publication Publication Date Title
CN109920011B (en) External parameter calibration method, device and equipment for laser radar and binocular camera
CN110689581B (en) Structured light module calibration method, electronic device, and computer-readable storage medium
CN107633536B (en) Camera calibration method and system based on two-dimensional plane template
US9858684B2 (en) Image processing method and apparatus for calibrating depth of depth sensor
CN106548489B (en) Registration method for a depth image and a color image, and three-dimensional image acquisition apparatus
CN107730561B (en) Depth camera temperature error correction method and system
CN107657635B (en) Depth camera temperature error correction method and system
JP5580164B2 (en) Optical information processing apparatus, optical information processing method, optical information processing system, and optical information processing program
CN112184824B (en) Camera external parameter calibration method and device
CN113494893B (en) Calibration method and device of three-dimensional laser scanning system and computer equipment
CN112070845A (en) Calibration method, device and terminal equipment for binocular camera
CN113409391B (en) Visual positioning method and related device, equipment and storage medium
CN107808398B (en) Camera parameter calculation device, calculation method, program, and recording medium
CN102750697A (en) Parameter calibration method and device
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
CN109887002A (en) Image feature point matching method and device, computer equipment and storage medium
JP5998532B2 (en) Correction formula calculation method, correction method, correction apparatus, and imaging apparatus
CN114926538B (en) External parameter calibration method and device for monocular laser speckle projection system
CN111915681B (en) External parameter calibration method, device, storage medium and equipment for multi-group 3D camera group
CN115797466A (en) Rapid three-dimensional space calibration method
CN110232715B (en) Method, device and system for self calibration of multi-depth camera
KR102265081B1 (en) System for determining position and attitude of camera using the inner product of vectors and three-dimensional coordinate transformation
CN112419427A (en) Methods for improving the accuracy of time-of-flight cameras
JP2002109518A (en) Three-dimensional shape restoring method and system therefor
CN112305524A (en) Ranging method, ranging system, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Obi Zhongguang Technology Group Co., Ltd

Address before: 12 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ORBBEC Co.,Ltd.

GR01 Patent grant