
CN111553955B - Multi-camera three-dimensional system and calibration method thereof - Google Patents

Multi-camera three-dimensional system and calibration method thereof

Info

Publication number
CN111553955B
CN111553955B
Authority
CN
China
Prior art keywords
camera
module
cameras
dimensional
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010360630.5A
Other languages
Chinese (zh)
Other versions
CN111553955A (en)
Inventor
黄兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hangda Qingyun Technology Co ltd
Original Assignee
Beijing Hangda Qingyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hangda Qingyun Technology Co ltd filed Critical Beijing Hangda Qingyun Technology Co ltd
Priority to CN202010360630.5A priority Critical patent/CN111553955B/en
Publication of CN111553955A publication Critical patent/CN111553955A/en
Application granted granted Critical
Publication of CN111553955B publication Critical patent/CN111553955B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a multi-camera three-dimensional system and a calibration method thereof, relating in particular to the technical field of industrial three-dimensional vision. The system comprises a camera module, in which the cameras are mounted and fixed on adjustable straight rods used to position them; an image acquisition module, used for acquiring the intrinsic and extrinsic parameters of each camera in the multi-view stereoscopic acquisition system and for capturing the images that provide the basic image data; and an image preprocessing module, used for preprocessing the images from the image acquisition module so as to improve the signal-to-noise ratio of the images and reduce the load of later processing. By providing a high-precision calibration scheme together with a multi-view vision scheme that is structurally adjustable, precision-controllable, and able to complete a single reconstruction within 100 ms, the invention solves the problem of missing depth information caused by blind areas of the field of view in traditional three-dimensional vision schemes.

Description

Multi-camera three-dimensional system and calibration method thereof
Technical Field
The invention relates to the field of industrial three-dimensional vision, in particular to a multi-camera three-dimensional system and a calibration method thereof.
Background
Existing vision technology in the industrial field mainly relies on monocular, binocular, or structured-light modules. Monocular structured light lacks information in the depth direction, binocular vision and structured-light modules are prone to measurement blind areas, and measurements that combine a large range, high precision, and complete information are therefore difficult to achieve. Panoramic reconstruction outside the industrial field, mainly SFM (structure from motion), is not suitable for industrial applications. Most conventional multi-view vision systems are paired with a moving linkage mechanism, and the dynamic error of the linkage has multiple degrees of freedom, varies over time, and propagates through the mechanism, which greatly limits the measurement accuracy of multi-view vision.
The principle of monocular vision is pinhole imaging: an object in three-dimensional space is projected onto the image plane, and detection and positioning are carried out according to the surface profile features of the object. It is mainly applied to surface inspection of objects and to coarse positioning and measurement with auxiliary tooling. The main principle of binocular vision is triangulation: speckle light spots serve as prior corner features and are matched between the two views to calculate depth and spatial coordinates. The main principle of the structured-light module is the coupling between a laser or coded light curtain and the camera image. Both are used for three-dimensional localization of general objects and for detection of scenes.
The prior art has the following defects: 1. monocular structured light lacks depth-direction information, and binocular and structured-light modules are prone to measurement blind areas;
2. SFM panoramic reconstruction is time-consuming, its precision is difficult to guarantee, and it is not suitable for industrial scenes;
3. existing multi-view vision is tied to linkage mechanisms and therefore suffers from uncontrollable linkage errors.
Therefore, it is necessary to invent a multi-camera three-dimensional system and a calibration method thereof.
Disclosure of Invention
Therefore, the embodiments of the present invention provide a multi-view camera three-dimensional system and a calibration method thereof. By providing a high-precision calibration scheme together with a multi-view vision scheme that is structurally adjustable, precision-controllable, and able to complete a single reconstruction within 100 ms, the invention solves the problem of missing depth information caused by blind areas of the field of view in traditional three-dimensional vision schemes.
In order to achieve the above object, the embodiments of the present invention provide the following technical solution: a multi-camera three-dimensional system and a calibration method thereof, comprising
a camera module: the camera module comprises the cameras and straight rods used for adjusting the cameras, and the cameras are fixed by means of the adjustable straight rods;
an image acquisition module: used for acquiring the intrinsic and extrinsic parameters of each camera in the multi-view stereoscopic acquisition system and for capturing the images that provide the basic image data;
an image preprocessing module: used for preprocessing the images from the image acquisition module, so as to improve the signal-to-noise ratio of the images and reduce the load of later processing;
a camera calibration module: used for obtaining the parameters of the cameras from the relation between the image coordinate system of each camera and the three-dimensional coordinate system of the object in space; when an object appears in only two cameras, depth is calculated according to the binocular vision principle, and when the object appears in the field of view of more cameras, trinocular (three-eye) vision is taken as the basic unit, and the three-dimensional coordinates of the target point can be expressed as:
x = cotα1·(cotα1 + cotα2)·1/2d
y = (cotα1 + cotα2)/2d
wherein camera1, camera2, and camera3 denote the positions of the optical centers of the three cameras, P is the position of the measured object, Pxy is the projection of the object on the xy plane, the angle between camera1 and the x axis is defined as α1, the angle between camera2 and the x axis is defined as α2, and the angle between camera3 and the xoy plane is defined as α3;
the parameter matrix of each camera is obtained from these coordinates, and the calibrated parameters are then obtained by solving the parameter matrix (an illustrative triangulation sketch is given after the module descriptions);
and a three-dimensional reconstruction module: used for recovering the geometric information of the space object from the multi-viewpoint two-dimensional images, and for reconstructing each space point from its corresponding coordinates in the multiple images and the parameter matrices of the cameras.
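For the binocular depth calculation referred to above, a minimal illustrative sketch is given here. It is not the patented method itself: it assumes that each camera's 3x4 projection matrix (intrinsics times extrinsics) is already available from the calibration module, and it uses OpenCV's triangulatePoints routine in place of the closed-form angle formula; the names P1, P2, corners_cam1 and corners_cam2 are hypothetical.

import numpy as np
import cv2

def triangulate_pair(P1, P2, pts1, pts2):
    # P1, P2: 3x4 projection matrices of the two cameras (K @ [R | t]).
    # pts1, pts2: Nx2 arrays of matched corner coordinates in the two images.
    pts1 = np.asarray(pts1, dtype=np.float64).T      # 2xN
    pts2 = np.asarray(pts2, dtype=np.float64).T      # 2xN
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous coordinates
    return (X_h[:3] / X_h[3]).T                      # Nx3 Euclidean points

For example, taking camera 1 as the reference frame, P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))]) and P2 = K2 @ np.hstack([R12, t12]), where R12 and t12 would come from the pairwise calibration described later; xyz = triangulate_pair(P1, P2, corners_cam1, corners_cam2) then returns the reconstructed points.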
Preferably, the specific reconstruction method of the three-dimensional reconstruction module is as follows:
S1: load the calibrated system parameters, let the object enter the measurement area, and trigger the cameras to shoot;
S2: store and record the corner points of interest of the speckle structured light;
S3: generate point cloud data from the corner points in S2, where multi-view corner points generate point cloud data according to the bundle adjustment method and the camera calibration parameters, and dual-view corner points generate point cloud data according to the general binocular measurement principle; pass the respective point clouds to the next step;
S4: densify the missing points according to the Poisson reconstruction principle;
S5: match the process requirements, output the relevant data results, and finish.
Preferably, the system further comprises an optimization module, wherein the optimization module is used for obtaining the re-projection error from the three-dimensional point cloud coordinates and the intrinsic and extrinsic parameters of the cameras, and for optimizing the re-projection error together with those parameters.
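As a rough illustration of the optimization module's error term, the re-projection error of one camera can be computed as below. This is a sketch only, assuming the camera pose is given as an OpenCV rotation vector and translation vector; the names K, dist, rvec and tvec are illustrative and not taken from the patent.

import numpy as np
import cv2

def reprojection_error(points_3d, points_2d, K, dist, rvec, tvec):
    # points_3d: Nx3 reconstructed point cloud; points_2d: Nx2 observed image points.
    projected, _ = cv2.projectPoints(
        np.asarray(points_3d, dtype=np.float64), rvec, tvec, K, dist)
    residuals = projected.reshape(-1, 2) - np.asarray(points_2d, dtype=np.float64)
    return float(np.linalg.norm(residuals, axis=1).mean())  # mean error in pixels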
A calibration method for the multi-view camera three-dimensional system comprises the following specific calibration steps:
S1: first adjust the positional relation between the cameras by means of the straight rods, and place the calibration plate a number of times in different poses;
S2: calibrate the intrinsic parameters of the cameras, and perform binocular calibration between every pair of cameras;
S3: output the pairwise relations between the camera groups, carry out nonlinear optimization, and solve the relation matrices of the whole system;
S4: optimize the relations among the camera groups, and finish.
The embodiment of the invention has the following advantages:
1. the position and posture relation between the cameras can be adjusted according to the actual application scene;
2. after the camera groups are calibrated, a reconstruction result can be obtained within about 100 ms using the calibration parameters;
3. the blind-area and information-loss problems of existing 3D vision modules are avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are exemplary only and that other implementations can be obtained from the extensions of the drawings provided without inventive effort.
The structures, proportions, sizes, etc. shown in the present specification are shown only for the purposes of illustration and description, and are not intended to limit the scope of the invention, which is defined by the claims, so that any structural modifications, changes in proportions, or adjustments of sizes, which do not affect the efficacy or the achievement of the present invention, should fall within the ambit of the technical disclosure.
FIG. 1 is a calibration flow chart provided by the present invention;
FIG. 2 is a view of a camera and a straight bar provided by the present invention;
FIG. 3 is a schematic diagram of the trinocular (three-eye) vision basic unit provided by the invention.
Detailed Description
Other aspects and advantages of the present invention will become apparent to those skilled in the art from the following detailed description, which illustrates the invention by way of certain specific embodiments, but not all embodiments. All other embodiments obtained by those skilled in the art on the basis of these embodiments without inventive effort fall within the scope of the invention.
Referring to FIGS. 1-3 of the accompanying drawings, the multi-camera three-dimensional system and the calibration method thereof of this embodiment comprise
a camera module: the camera module comprises the cameras and straight rods used for adjusting the cameras, and the cameras are fixed by means of the adjustable straight rods;
an image acquisition module: used for acquiring the intrinsic and extrinsic parameters of each camera in the multi-view stereoscopic acquisition system and for capturing the images that provide the basic image data;
an image preprocessing module: used for preprocessing the images from the image acquisition module, so as to improve the signal-to-noise ratio of the images and reduce the load of later processing;
a camera calibration module: used for obtaining the parameters of the cameras from the relation between the image coordinate system of each camera and the three-dimensional coordinate system of the object in space; when an object appears in only two cameras, depth is calculated according to the binocular vision principle, and when the object appears in the field of view of more cameras, trinocular (three-eye) vision is taken as the basic unit, and the three-dimensional coordinates of the target point can be expressed as (as shown in FIG. 3):
x = cotα1·(cotα1 + cotα2)·1/2d
y = (cotα1 + cotα2)/2d
wherein camera1, camera2, and camera3 denote the positions of the optical centers of the three cameras, P is the position of the measured object, Pxy is the projection of the object on the xy plane, the angle between camera1 and the x axis is defined as α1, the angle between camera2 and the x axis is defined as α2, and the angle between camera3 and the xoy plane is defined as α3;
the parameter matrix of each camera is obtained from these coordinates, and the calibrated parameters are then obtained by solving the parameter matrix;
for multi-view vision, if there are m scene points Xi (i = 1, 2, ..., m) and M cameras with projection matrices Mj (j = 1, 2, ..., M), the projection of a scene point onto a camera image satisfies xij = Mj·Xi, where xij is the image point of the i-th scene point in the j-th image. For the whole reconstruction process, the scene points Xi are determined by the captured images themselves, and the extrinsic parameters between the camera groups roughly determine the positions of the common scene points in the different images. In the regions shared by several cameras, Xi and Mj are solved during the re-projection process, and the number of corresponding points in the common region is far larger than strictly necessary, so the aim is to minimize the total reprojection error, i.e. min Σi Σj d(xij, Mj·Xi)²;
starting from the existing basic parameters of the cameras, the initial estimate is refined by nonlinear least squares (the Levenberg-Marquardt algorithm), and the parameter matrices are obtained from the solution (a reduced sketch of this refinement follows the module descriptions);
and a three-dimensional reconstruction module: used for recovering the geometric information of the space object from the multi-viewpoint two-dimensional images, and for reconstructing each space point from its corresponding coordinates in the multiple images and the parameter matrices of the cameras.
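The Levenberg-Marquardt refinement mentioned above is only named in the text, so the reduced sketch below is an illustration under stated assumptions rather than the patented procedure: it refines a single camera's extrinsic parameters against the re-projection residuals, whereas a full bundle adjustment would also stack the scene points Xi and the remaining cameras' parameters into the optimization vector. SciPy's least_squares with method='lm' stands in for the nonlinear least-squares solver; all variable names are illustrative.

import numpy as np
import cv2
from scipy.optimize import least_squares

def refine_extrinsics(points_3d, points_2d, K, dist, rvec0, tvec0):
    # points_3d: Nx3 scene points; points_2d: Nx2 observed image points.
    points_3d = np.asarray(points_3d, dtype=np.float64)
    points_2d = np.asarray(points_2d, dtype=np.float64)

    def residuals(x):
        rvec, tvec = x[:3], x[3:]
        proj, _ = cv2.projectPoints(points_3d, rvec, tvec, K, dist)
        return (proj.reshape(-1, 2) - points_2d).ravel()

    x0 = np.hstack([np.asarray(rvec0, dtype=np.float64).ravel(),
                    np.asarray(tvec0, dtype=np.float64).ravel()])
    result = least_squares(residuals, x0, method='lm')  # Levenberg-Marquardt step
    return result.x[:3], result.x[3:]                   # refined rvec, tvec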
Further, the specific reconstruction method of the three-dimensional reconstruction module comprises the following steps:
S1: load the calibrated system parameters, let the object enter the measurement area, and trigger the cameras to shoot;
S2: store and record the corner points of interest of the speckle structured light;
S3: generate point cloud data from the corner points in S2, where multi-view corner points generate point cloud data according to the bundle adjustment method and the camera calibration parameters, and dual-view corner points generate point cloud data according to the general binocular measurement principle; pass the respective point clouds to the next step;
S4: densify the missing points according to the Poisson reconstruction principle (an illustrative sketch of S3-S4 is given further below);
S5: match the process requirements, output the relevant data results, and finish.
Further, the system also comprises an optimization module, wherein the optimization module is used for obtaining the re-projection error from the three-dimensional point cloud coordinates and the intrinsic and extrinsic parameters of the cameras, and for optimizing the re-projection error together with those parameters.
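Steps S3-S4 of the reconstruction method above can be sketched with an off-the-shelf point-cloud library. The patent does not name one; Open3D is assumed here purely for illustration, and the parameters (normal-estimation radius, octree depth, number of sampled points) are placeholders that would have to be tuned for the actual working distance.

import numpy as np
import open3d as o3d

def densify_with_poisson(xyz, poisson_depth=8, n_dense_points=50000):
    # xyz: Nx3 array of points triangulated from the corner correspondences (S3).
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(xyz, dtype=np.float64))
    # Poisson surface reconstruction needs oriented normals.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=poisson_depth)                        # S4: fill in missing regions
    # Sampling the reconstructed surface yields a denser, gap-free point cloud.
    return mesh.sample_points_uniformly(number_of_points=n_dense_points)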
A calibration method for the multi-view camera three-dimensional system comprises the following specific calibration steps:
S1: first adjust the positional relation between the cameras by means of the straight rods, and place the calibration plate a number of times in different poses;
S2: calibrate the intrinsic parameters of the cameras, and perform binocular calibration between every pair of cameras (a rough OpenCV sketch is given below);
S3: output the pairwise relations between the camera groups, carry out nonlinear optimization, and solve the relation matrices of the whole system;
S4: optimize the relations among the camera groups, and finish.
The implementation scenario is specifically as follows: by providing a high-precision calibration scheme together with a multi-view vision scheme that is structurally adjustable, precision-controllable, and able to complete a single reconstruction within 100 ms, the invention solves the problem of missing depth information caused by blind areas of the field of view in traditional three-dimensional vision schemes.
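The calibration steps S1-S3 can be roughly expressed with OpenCV as below. This is a simplified sketch, not the patented procedure: it assumes the calibration-plate corners were detected in every camera for every placement (in practice each pair would only use the placements visible to both cameras), and the subsequent nonlinear optimization of the whole system is not shown; objpoints, imgpoints and image_size are illustrative names.

import cv2

def calibrate_pairwise(objpoints, imgpoints, image_size):
    # objpoints: list of 3D board-corner arrays, one per calibration-plate placement.
    # imgpoints: one list per camera of the detected 2D corners for each placement.
    intrinsics = []
    for pts in imgpoints:                                # S2: intrinsics per camera
        _, K, dist, _, _ = cv2.calibrateCamera(objpoints, pts, image_size, None, None)
        intrinsics.append((K, dist))

    pair_extrinsics = {}
    for i in range(len(imgpoints)):                      # S2: binocular calibration
        for j in range(i + 1, len(imgpoints)):           #     for every camera pair
            Ki, di = intrinsics[i]
            Kj, dj = intrinsics[j]
            _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
                objpoints, imgpoints[i], imgpoints[j],
                Ki, di, Kj, dj, image_size, flags=cv2.CALIB_FIX_INTRINSIC)
            pair_extrinsics[(i, j)] = (R, T)             # relation between cameras i, j
    return intrinsics, pair_extrinsics

The pairwise (R, T) estimates returned here are exactly the quantities that step S3 would then chain together and refine by nonlinear optimization.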
While the invention has been described in detail in the foregoing general description and specific examples, it will be apparent to those skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the invention and are intended to be within the scope of the invention as claimed.

Claims (3)

1. A multi-view camera three-dimensional system, characterized by comprising:
a camera module: the camera module comprises the cameras and straight rods used for adjusting the cameras, and the cameras are fixed by means of the adjustable straight rods;
an image acquisition module: used for acquiring the intrinsic and extrinsic parameters of each camera in the multi-view stereoscopic acquisition system and for capturing the images that provide the basic image data;
an image preprocessing module: used for preprocessing the images from the image acquisition module, so as to improve the signal-to-noise ratio of the images and reduce the load of later processing;
a camera calibration module: used for obtaining the parameters of the cameras from the relation between the image coordinate system of each camera and the three-dimensional coordinate system of the object in space; when an object appears in only two cameras, depth is calculated according to the binocular vision principle, and when the object appears in the field of view of more cameras, trinocular (three-eye) vision is taken as the basic unit, and the three-dimensional coordinates of the target point can be expressed as:
x = cotα1·(cotα1 + cotα2)·1/2d
y = (cotα1 + cotα2)/2d
wherein camera1, camera2, and camera3 denote the positions of the optical centers of the three cameras, P is the position of the measured object, Pxy is the projection of the object on the xy plane, the angle between camera1 and the x axis is defined as α1, the angle between camera2 and the x axis is defined as α2, and the angle between camera3 and the xoy plane is defined as α3;
the parameter matrix of each camera is obtained from these coordinates, and the calibrated parameters are then obtained by solving the parameter matrix;
and a three-dimensional reconstruction module: used for recovering the geometric information of the space object from the multi-viewpoint two-dimensional images, and for reconstructing each space point from its corresponding coordinates in the multiple images and the parameter matrices of the cameras.
2. A multi-camera three-dimensional system according to claim 1, wherein the specific reconstruction method of the three-dimensional reconstruction module comprises the following steps:
S1: load the calibrated system parameters, let the object enter the measurement area, and trigger the cameras to shoot;
S2: store and record the corner points of interest of the speckle structured light;
S3: generate point cloud data from the corner points in S2, where multi-view corner points generate point cloud data according to the bundle adjustment method and the camera calibration parameters, and dual-view corner points generate point cloud data according to the general binocular measurement principle; pass the respective point clouds to the next step;
S4: densify the missing points according to the Poisson reconstruction principle;
S5: match the process requirements, output the relevant data results, and finish.
3. A multi-camera three-dimensional system according to claim 1, wherein the system further comprises an optimization module, and the optimization module is used for obtaining the re-projection error from the three-dimensional point cloud coordinates and the intrinsic and extrinsic parameters of the cameras, and for optimizing the re-projection error together with those parameters.
CN202010360630.5A 2020-04-30 2020-04-30 Multi-camera three-dimensional system and calibration method thereof Active CN111553955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010360630.5A CN111553955B (en) 2020-04-30 2020-04-30 Multi-camera three-dimensional system and calibration method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010360630.5A CN111553955B (en) 2020-04-30 2020-04-30 Multi-camera three-dimensional system and calibration method thereof

Publications (2)

Publication Number Publication Date
CN111553955A CN111553955A (en) 2020-08-18
CN111553955B true CN111553955B (en) 2024-03-15

Family

ID=72000374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010360630.5A Active CN111553955B (en) 2020-04-30 2020-04-30 Multi-camera three-dimensional system and calibration method thereof

Country Status (1)

Country Link
CN (1) CN111553955B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4230525B2 (en) * 2005-05-12 2009-02-25 有限会社テクノドリーム二十一 Three-dimensional shape measuring method and apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982548A (en) * 2012-12-11 2013-03-20 清华大学 Multi-view stereoscopic video acquisition system and camera parameter calibrating method thereof
CN106803273A (en) * 2017-01-17 2017-06-06 湖南优象科技有限公司 A kind of panoramic camera scaling method
CN110509281A (en) * 2019-09-16 2019-11-29 中国计量大学 Device and method for pose recognition and grasping based on binocular vision

Also Published As

Publication number Publication date
CN111553955A (en) 2020-08-18


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20240205

Address after: 86-A3101, Wanxing Road, Changyang, Fangshan District, Beijing, 102400

Applicant after: Beijing Hangda Qingyun Technology Co.,Ltd.

Country or region after: China

Address before: Room 504, Science and Technology Square, Qianjin East Road, Kunshan Economic Development Zone, Suzhou City, Jiangsu Province, 215323

Applicant before: Suzhou Longtou Intelligent Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant
GR01 Patent grant