Multi-camera three-dimensional system and calibration method thereof
Technical Field
The invention relates to the field of industrial three-dimensional vision, in particular to a multi-camera three-dimensional system and a calibration method thereof.
Background
Existing vision technology in the industrial field mainly uses monocular, binocular, or structured-light modules. Monocular structured light lacks information in the depth direction, binocular vision and structured-light modules are prone to measurement blind areas, and it is difficult to achieve measurement with a large range, high precision, and complete information. Panoramic reconstruction in non-industrial fields, mainly SFM (structure from motion), is not suitable for the industrial field. Most conventional multi-view vision systems are paired with a motion linkage mechanism, and the dynamic error of the linkage mechanism is multi-degree-of-freedom, time-varying, and transmissible, which greatly limits the measurement accuracy of multi-view vision.
The principle of monocular vision is pinhole imaging: an object in three-dimensional space is projected onto the image plane, and detection and positioning are carried out according to the surface profile features of the object; it is mainly applied to surface inspection of objects and to coarse positioning and measurement with a matching tool. The main principle of binocular vision is triangulation: using the prior knowledge that speckle light spots serve as corner points, the matched spots are triangulated to calculate depth and spatial coordinates. The main principle of the structured-light module is the coupling between the laser or coded light curtain and the imaging of the camera. Both are used for three-dimensional localization of general objects and for scene detection.
The prior art has the following defects: 1. monocular structured light lacks depth-direction information, and binocular and structured-light modules are prone to measurement blind zones;
2. SFM panoramic reconstruction is time-consuming, its precision is difficult to guarantee, and it is not suitable for industrial scenes;
3. existing multi-view vision relies on a linkage mechanism and suffers from linkage errors that cannot be controlled.
Therefore, it is necessary to invent a multi-camera three-dimensional system and a calibration method thereof.
Disclosure of Invention
Therefore, embodiments of the invention provide a multi-camera three-dimensional system and a calibration method thereof. By providing a high-precision calibration scheme, a multi-view vision scheme is obtained that is structurally adjustable, precision-controllable, and capable of a single reconstruction time within 100 ms, thereby solving the problem of missing depth information caused by field-of-view blind areas in conventional three-dimensional vision schemes.
In order to achieve the above object, the embodiment of the present invention provides the following technical solution: a multi-camera three-dimensional system and a calibration method thereof, comprising:
A camera module: the camera module comprises cameras and straight rods used for adjusting the cameras, and the cameras are fixed through the adjustable straight rods;
an image acquisition module: used for acquiring the intrinsic and extrinsic parameters of each camera in the multi-view stereoscopic acquisition system, and for capturing the images that provide the basic image data;
an image preprocessing module: used for preprocessing the images from the image acquisition module, improving the signal-to-noise ratio of the images and reducing the load of later processing;
a camera calibration module: used for obtaining the parameters of the cameras from the relation between the image coordinate system of each camera and the three-dimensional coordinate system of the spatial object; when an object appears in only two cameras, depth is calculated according to the binocular vision principle, and when the object appears in the range of more cameras, trinocular vision is taken as the basic unit, and the three-dimensional coordinates of the target point can be expressed as:
x = 2d·cot α1/(cot α1 + cot α2)
y = 2d/(cot α1 + cot α2)
wherein: camera1, camera2, and camera3 denote the positions of the optical centers of the three cameras, P is the position of the measured point, Pxy is the projection of P on the xy plane, the angle between camera1 and the x axis is defined as α1, the angle between camera2 and the x axis is defined as α2, and the angle between camera3 and the xoy plane is defined as α3;
acquiring a parameter matrix of the camera through the coordinates, and further acquiring calibrated parameters by solving the parameter matrix;
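As an illustrative sketch only (Python and the helper name triangulate_xy are used here for illustration, and the assumption that 2d is the baseline between camera1 and camera2 along the x axis is not fixed by this disclosure), the planar intersection above can be computed directly from the two viewing angles:

    import math

    def triangulate_xy(alpha1, alpha2, d):
        """Intersect the two viewing rays in the xy plane.

        alpha1, alpha2: angles (radians) between the rays from camera1/camera2
        and the x axis; camera1 is assumed at the origin and camera2 at (2d, 0).
        """
        denom = 1.0 / math.tan(alpha1) + 1.0 / math.tan(alpha2)  # cot(a1) + cot(a2)
        y = 2.0 * d / denom
        x = y / math.tan(alpha1)  # x = y * cot(alpha1)
        return x, y

    # example: baseline 2d = 0.4 m, viewing angles of 60 and 70 degrees
    print(triangulate_xy(math.radians(60), math.radians(70), 0.2))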
a three-dimensional reconstruction module: used for recovering the geometric information of the spatial object from the multi-viewpoint two-dimensional images, reconstructing each spatial point from its corresponding coordinates in the multiple images and from the parameter matrices of the cameras.
Preferably, the specific reconstruction method of the three-dimensional reconstruction module is as follows:
S1: load the calibrated system parameters; an object enters the measurement area and the cameras are triggered to shoot;
S2: store and record the corner points of interest of the speckle structured light;
S3: generate point cloud data from the corner points of S2; multi-view corner points generate point cloud data according to the bundle adjustment method and the camera calibration parameters, dual-view corner points generate point cloud data according to the general binocular measurement principle, and both are passed to the next step;
S4: densify the missing points according to the Poisson reconstruction principle;
S5: output the relevant data results according to the process requirements, and end.
Preferably, the system further comprises an optimization module, which obtains the re-projection error from the three-dimensional point cloud coordinates and the intrinsic and extrinsic parameters of the cameras, and optimizes the re-projection error together with the intrinsic and extrinsic parameters of the cameras.
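A minimal sketch of how such a re-projection error could be evaluated for one camera with OpenCV is given below; the helper name reprojection_error and its inputs are illustrative assumptions, not part of the claimed system:

    import numpy as np
    import cv2

    def reprojection_error(points_3d, points_2d, rvec, tvec, K, dist):
        """Mean re-projection error of one camera (illustrative helper).

        points_3d: Nx3 point cloud, points_2d: Nx2 observed image points,
        rvec/tvec: extrinsic pose, K: intrinsic matrix, dist: distortion coeffs.
        """
        projected, _ = cv2.projectPoints(points_3d, rvec, tvec, K, dist)
        projected = projected.reshape(-1, 2)
        return float(np.linalg.norm(projected - points_2d, axis=1).mean())

This scalar is the quantity that the optimization module drives down while adjusting the intrinsic and extrinsic parameters.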
A calibration method of the multi-camera three-dimensional system comprises the following specific calibration steps:
S1: first adjust the positional relation between the cameras through the straight rods, and place the calibration plate at multiple poses;
S2: calibrate the intrinsic parameters of the cameras, and perform binocular calibration between each pair of cameras;
S3: output the relation between each pair of camera groups, carry out nonlinear optimization, and solve the relation matrix between the systems;
S4: optimize the relations among the camera groups, and end.
The embodiment of the invention has the following advantages:
1. the position and posture relation between the cameras can be adjusted according to the actual application scene;
2. after the camera groups are calibrated, a reconstruction result can be obtained from the calibration parameters in about 100 ms;
3. the blind area and the information loss condition of the existing 3D visual module are avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are exemplary only and that other implementations can be obtained from the extensions of the drawings provided without inventive effort.
The structures, proportions, sizes, etc. shown in the present specification are shown only for the purposes of illustration and description, and are not intended to limit the scope of the invention, which is defined by the claims, so that any structural modifications, changes in proportions, or adjustments of sizes, which do not affect the efficacy or the achievement of the present invention, should fall within the ambit of the technical disclosure.
FIG. 1 is a calibration flow chart provided by the present invention;
FIG. 2 is a view of a camera and a straight rod provided by the present invention;
FIG. 3 is a schematic diagram of the trinocular vision basic unit provided by the invention.
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the following disclosure, which describes the invention by way of specific embodiments. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without making any inventive effort fall within the scope of the invention.
Referring to FIGS. 1-3 of the accompanying drawings, the multi-camera three-dimensional system and the calibration method thereof of this embodiment comprise:
A camera module: the camera module comprises cameras and straight rods used for adjusting the cameras, and the cameras are fixed through the adjustable straight rods;
an image acquisition module: used for acquiring the intrinsic and extrinsic parameters of each camera in the multi-view stereoscopic acquisition system, and for capturing the images that provide the basic image data;
an image preprocessing module: used for preprocessing the images from the image acquisition module, improving the signal-to-noise ratio of the images and reducing the load of later processing;
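By way of example only, the preprocessing could be a mild denoising filter such as the OpenCV Gaussian blur sketched below; the actual filtering used by the module is not specified by this disclosure:

    import cv2

    def preprocess(image):
        """Illustrative preprocessing: light Gaussian smoothing to raise the
        signal-to-noise ratio before corner extraction (filter choice assumed)."""
        return cv2.GaussianBlur(image, (3, 3), 0)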
a camera calibration module: used for obtaining the parameters of the cameras from the relation between the image coordinate system of each camera and the three-dimensional coordinate system of the spatial object; when an object appears in only two cameras, depth is calculated according to the binocular vision principle, and when the object appears in the range of more cameras, trinocular vision is taken as the basic unit, and the three-dimensional coordinates of the target point can be expressed as (as shown in FIG. 3):
x = 2d·cot α1/(cot α1 + cot α2)
y = 2d/(cot α1 + cot α2)
wherein: camera1, camera2, and camera3 denote the positions of the optical centers of the three cameras, P is the position of the measured point, Pxy is the projection of P on the xy plane, the angle between camera1 and the x axis is defined as α1, the angle between camera2 and the x axis is defined as α2, and the angle between camera3 and the xoy plane is defined as α3;
acquiring a parameter matrix of the camera through the coordinates, and further acquiring calibrated parameters by solving the parameter matrix;
for multi-view vision, if there are M scene points Xi (i=1, 2, m.), M cameras M j (j=1, 2, once again m), the projection of scene points to camera images satisfiesWherein->The ith image point is at the jth image, for the whole reconstruction process, the scene point Xi is determined by the image shooting itself, the parameter external parameters between the camera groups can roughly determine the positions of the common scene points at different images, and for a plurality of camera common areas, the X is solved in the re-projection process i And M j There will be a much larger number of corresponding points in the common region than is needed, so that it is desirable to minimize the reprojection error, i.e
Starting from the existing basic parameters of the cameras as the initial estimate, the nonlinear least-squares method (Levenberg-Marquardt algorithm) is used to optimize this estimate and solve for the parameter matrices;
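A reduced sketch of this Levenberg-Marquardt refinement is shown below, using SciPy's least_squares with method='lm' and refining only a single camera pose; a full bundle adjustment would stack all cameras and scene points into the parameter vector, and all names here are illustrative assumptions:

    import numpy as np
    import cv2
    from scipy.optimize import least_squares

    def refine_pose(points_3d, points_2d, K, dist, rvec0, tvec0):
        """Minimize the re-projection residuals of one camera pose (sketch)."""
        def residuals(params):
            rvec = params[:3].reshape(3, 1)
            tvec = params[3:].reshape(3, 1)
            proj, _ = cv2.projectPoints(points_3d, rvec, tvec, K, dist)
            return (proj.reshape(-1, 2) - points_2d).ravel()

        x0 = np.hstack([rvec0.ravel(), tvec0.ravel()])
        result = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt
        return result.x[:3], result.x[3:]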
a three-dimensional reconstruction module: used for recovering the geometric information of the spatial object from the multi-viewpoint two-dimensional images, reconstructing each spatial point from its corresponding coordinates in the multiple images and from the parameter matrices of the cameras.
Further, the specific reconstruction method of the three-dimensional reconstruction module comprises the following steps:
S1: load the calibrated system parameters; an object enters the measurement area and the cameras are triggered to shoot;
S2: store and record the corner points of interest of the speckle structured light;
S3: generate point cloud data from the corner points of S2; multi-view corner points generate point cloud data according to the bundle adjustment method and the camera calibration parameters, dual-view corner points generate point cloud data according to the general binocular measurement principle, and both are passed to the next step;
S4: densify the missing points according to the Poisson reconstruction principle;
S5: output the relevant data results according to the process requirements, and end.
Further, the system also comprises an optimization module, which obtains the re-projection error from the three-dimensional point cloud coordinates and the intrinsic and extrinsic parameters of the cameras, and optimizes the re-projection error together with the intrinsic and extrinsic parameters of the cameras.
A calibration method of the multi-camera three-dimensional system comprises the following specific calibration steps:
S1: first adjust the positional relation between the cameras through the straight rods, and place the calibration plate at multiple poses;
S2: calibrate the intrinsic parameters of the cameras, and perform binocular calibration between each pair of cameras;
S3: output the relation between each pair of camera groups, carry out nonlinear optimization, and solve the relation matrix between the systems;
S4: optimize the relations among the camera groups, and end.
The implementation scenario is specifically as follows: by providing a high-precision calibration scheme, the invention obtains a multi-view vision scheme that is structurally adjustable, precision-controllable, and capable of a single reconstruction time within 100 ms, thereby solving the problem of missing depth information caused by field-of-view blind areas in conventional three-dimensional vision schemes.
While the invention has been described in detail in the foregoing general description and specific examples, it will be apparent to those skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the invention and are intended to be within the scope of the invention as claimed.