CN114549651A - Method and equipment for calibrating multiple 3D cameras based on polyhedral geometric constraint

Publication number: CN114549651A (granted as CN114549651B)
Application number: CN202111463851.6A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 刘元伟, 陈春朋
Assignee: Juhaokan Technology Co Ltd
Legal status: Active, Granted

Classifications

    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (G06T 7/00: Image analysis)
    • G06T 7/85: Stereo camera calibration
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 2207/10024: Color image (G06T 2207/10: Image acquisition modality)


Abstract

The application relates to multi-camera calibration technology, and in particular discloses a method and equipment for calibrating a plurality of 3D cameras based on polyhedral geometric constraints. The method comprises the following steps: acquiring RGB images and depth images of a calibration object acquired by a plurality of 3D cameras at different viewing angles, and extracting the point cloud data sets corresponding to the depth images; for any one 3D camera, determining a reference corner point included in its RGB image; determining the reference coordinate of the reference corner point in the camera coordinate system based on the point cloud data set acquired by the 3D camera and the reference corner point; determining a corner point coordinate set in the camera coordinate system corresponding to the 3D camera according to the reference coordinate and the edge length of the calibration object; and determining a calibration matrix between every two 3D cameras based on the determined corner point coordinate sets in the camera coordinate systems corresponding to the 3D cameras, so as to complete the joint calibration of the 3D cameras. The method simplifies the multi-camera calibration process and improves calibration accuracy.

Description

Method and equipment for calibrating multiple 3D cameras based on polyhedral geometric constraint
Technical Field
The invention relates to the technical field of multi-camera calibration, in particular to a method and equipment for calibrating a plurality of 3D cameras based on polyhedral geometric constraint.
Background
In a multi-camera system, each camera has an independent coordinate system, referred to as its camera coordinate system, and each camera captures images with the origin of its own camera coordinate system as the reference. When processing the images acquired by the cameras in such a system, the independent camera coordinate systems need to be aligned to a common coordinate system to make image processing more convenient; the process of transforming the independent camera coordinate systems to the common coordinate system is called multi-camera calibration.
Most existing methods for calibrating multiple 3D cameras borrow the idea of stereo-vision calibration. Stereo vision refers to a system in which two ordinary RGB cameras are mounted a certain distance apart (also called a system of 2D cameras), and the transformation relation between the two cameras is constructed by identifying a checkerboard or circular-dot calibration plate. In practice, the calibration plate is moved many times, by hand or by an automatic turntable, through the common view of each pair of cameras, and the transformation relation between the cameras is finally determined.
Calibration based on calibration plates presents several problems:
(1) The calibration plate must pass through many poses in space to establish the constraint relations, making the calibration process cumbersome. (2) Generally only one side of the calibration plate is valid, so when arranging the cameras it must be ensured that every camera shares a common view with at least one other camera in the system and can image the plane of the calibration plate, which restricts camera placement. (3) Corner detection on the calibration plate depends strongly on the illumination and on the plate's position in the field of view, so poor detection accuracy or even detection failure easily occurs.
Disclosure of Invention
The exemplary embodiment of the invention provides a method and a device for calibrating a plurality of 3D cameras based on polyhedral geometric constraint, which are used for simplifying the process of calibrating the plurality of 3D cameras and improving the calibration precision.
According to a first aspect of exemplary embodiments, there is provided a method for calibrating a plurality of 3D cameras based on polyhedral geometric constraints, the method comprising:
acquiring RGB (red, green and blue) images and depth images of calibration objects acquired by a plurality of 3D cameras under different viewing angles respectively, and extracting point cloud data sets corresponding to the depth images respectively; the calibration object is a polyhedron, the number of effective corner points of the calibration object is greater than or equal to that of the 3D cameras, and the effective corner points are corner points formed by at least three visible surfaces of the calibration object;
aiming at any one 3D camera, identifying at least one marker in an RGB image acquired by the 3D camera, and determining a reference corner point included in the RGB image according to the at least one marker; determining reference coordinates of the reference corner points under a camera coordinate system of the 3D camera based on the point cloud data set acquired by the 3D camera and the reference corner points; determining a corner point coordinate set under a camera coordinate system corresponding to the 3D camera according to the reference coordinates and the edge length of the calibration object;
and determining a calibration matrix between every two 3D cameras based on the determined corner point coordinate set under the camera coordinate system corresponding to each 3D camera so as to complete the combined calibration between the 3D cameras.
In some exemplary embodiments, the determining the reference corner point included in the RGB image according to the at least one marker includes:
determining at least one marking plane of the calibration object where the at least one marker is located;
determining corner point identifiers included in the RGB image according to a preset relationship, in the calibration object, between corner point identifiers and marking plane identifiers;
and determining reference corner points included in the RGB image according to the corner point identifiers.
In some exemplary embodiments, the polyhedron is a regular polyhedron, and the determining of the reference coordinates of the reference corner point under the camera coordinate system based on the point cloud data set acquired by the 3D camera and the reference corner point includes:
constructing three reference planes of the calibration object with the reference corner point as the center based on the point cloud data set acquired by the 3D camera;
and determining normal vectors of the three reference planes in the camera coordinate system, and determining reference coordinates of the reference corner points in the camera coordinate system according to the normal vectors.
In some exemplary embodiments, the determining, according to the reference coordinates and the edge length of the calibration object, a set of corner coordinates in a camera coordinate system corresponding to the 3D camera includes:
determining direction vectors of the reference corner points on the three reference planes according to normal vectors of the three reference planes in the camera coordinate system;
for each other corner point except the reference corner point, determining a direction vector corresponding to each other corner point according to the position relation of the other corner points relative to the reference corner point and the direction vectors on the three reference planes;
determining direction parameters of the other angular points according to the direction vectors corresponding to the other angular points and the edge length of the polyhedron;
and determining the coordinates of each other corner point under the camera coordinate system based on the direction parameters and the coordinates of the reference corner point so as to determine a corner point coordinate set under the camera coordinate system corresponding to the 3D camera.
In some exemplary embodiments, the determining a calibration matrix between each two 3D cameras based on the determined corner coordinate set in the camera coordinate system corresponding to each 3D camera includes:
for any first 3D camera and any second 3D camera in the 3D cameras, determining a calibration matrix between the first 3D camera and the second 3D camera according to a first center point coordinate of the first 3D camera, a second center point coordinate of the second 3D camera, a corner point coordinate set in a camera coordinate system corresponding to the first 3D camera, and a corner point coordinate set in a camera coordinate system corresponding to the second 3D camera;
determining a calibration matrix between each two 3D cameras;
the first center point coordinate is obtained according to a corner point coordinate set in a camera coordinate system corresponding to the first 3D camera; and the second center point coordinate is determined according to the corner point coordinate set in the camera coordinate system corresponding to the second 3D camera.
In some exemplary embodiments, the determining a calibration matrix between the first 3D camera and the second 3D camera according to a first center point coordinate of the first 3D camera, a second center point coordinate of the second 3D camera, a set of corner point coordinates in a camera coordinate system corresponding to the first 3D camera, and a set of corner point coordinates in a camera coordinate system corresponding to the second 3D camera includes:
adjusting the corner point coordinate set in the camera coordinate system corresponding to the first 3D camera by using the first center point coordinate to obtain an adjusted first set corresponding to the first 3D camera; and adjusting the corner point coordinate set in the camera coordinate system corresponding to the second 3D camera by using the second center point coordinate to obtain an adjusted second set corresponding to the second 3D camera;
determining a covariance matrix between the first set and the second set;
extracting the U matrix and the V matrix of the covariance matrix by applying singular value decomposition (SVD);
determining a rotation matrix in the calibration matrix according to the U matrix and the V matrix;
and determining a translation matrix in the calibration matrix according to the rotation matrix, the first center point coordinate and the second center point coordinate.
In some exemplary embodiments, if the number of effective corner points of the calibration object is less than the number of the 3D cameras, the method further includes:
rotating the calibration object to enable the target 3D camera to shoot any first effective angular point, and enabling any associated 3D camera except the target 3D camera to shoot any second effective angular point; the target 3D camera is a 3D camera which does not shoot any effective corner point before the calibration object is rotated, and the associated 3D camera is a 3D camera which shoots any effective corner point before the calibration object is rotated;
and determining a calibration matrix between the target 3D camera and the associated 3D camera according to the first effective corner point and the second effective corner point.
In some exemplary embodiments, an imaging area of the calibration object in any one of the 3D cameras occupies at least a preset proportion of the imaging field of view of that 3D camera.
According to a second aspect of the exemplary embodiments, there is provided a multi-3D camera calibration device based on polyhedral geometric constraints, the device comprising a processor, a memory and at least one external communication interface, the processor, the memory and the external communication interface being connected by a bus;
the external communication interface is configured to receive RGB images and depth images of the calibration object respectively acquired at different viewing angles;
the memory having stored therein a computer program, the processor being configured to perform the following operations based on the computer program:
acquiring RGB (red, green and blue) images and depth images of calibration objects acquired by a plurality of 3D cameras under different viewing angles respectively, and extracting point cloud data sets corresponding to the depth images respectively; the calibration object is a polyhedron, the number of effective corner points of the calibration object is greater than or equal to that of the 3D cameras, and the effective corner points are corner points formed by at least three visible surfaces of the calibration object;
aiming at any one 3D camera, identifying at least one marker in an RGB image acquired by the 3D camera, and determining a reference corner point included in the RGB image according to the at least one marker; determining reference coordinates of the reference corner points under a camera coordinate system of the 3D camera based on the point cloud data set acquired by the 3D camera and the reference corner points; determining a corner point coordinate set of the 3D camera under the camera coordinate system according to the reference coordinates and the edge length of the calibration object;
and determining a calibration matrix between every two 3D cameras based on the determined corner point coordinate set under the camera coordinate system corresponding to each 3D camera so as to complete the combined calibration between the 3D cameras.
In some exemplary embodiments, the processor is configured to perform:
determining at least one marking plane of the calibration object where the at least one marker is located;
determining corner point identifiers included in the RGB image according to a preset relationship, in the calibration object, between corner point identifiers and marking plane identifiers;
and determining reference corner points included in the RGB image according to the corner point identifiers.
In some exemplary embodiments, the polyhedron is a regular polyhedron, and the processor is configured to perform:
constructing three reference planes of the calibration object with the reference corner point as the center based on the point cloud data set acquired by the 3D camera;
and determining normal vectors of the three reference planes in the camera coordinate system, and determining reference coordinates of the reference corner points in the camera coordinate system according to the normal vectors.
In some exemplary embodiments, the processor is configured to perform:
determining direction vectors of the reference corner points on the three reference planes according to normal vectors of the three reference planes in the camera coordinate system;
for each other corner point except the reference corner point, determining a direction vector corresponding to each other corner point according to the position relation of the other corner points relative to the reference corner point and the direction vectors on the three reference planes;
determining direction parameters of the other angular points according to the direction vectors corresponding to the other angular points and the edge length of the polyhedron;
and determining the coordinates of each other corner point under the camera coordinate system based on the direction parameters and the coordinates of the reference corner point so as to determine a corner point coordinate set under the camera coordinate system corresponding to the 3D camera.
In some exemplary embodiments, the processor is configured to perform:
for any first 3D camera and any second 3D camera in the 3D cameras, determining a calibration matrix between the first 3D camera and the second 3D camera according to a first center point coordinate of the first 3D camera, a second center point coordinate of the second 3D camera, a corner point coordinate set in a camera coordinate system corresponding to the first 3D camera, and a corner point coordinate set in a camera coordinate system corresponding to the second 3D camera;
determining a calibration matrix between each two 3D cameras;
the first center point coordinate is obtained according to a corner point coordinate set in a camera coordinate system corresponding to the first 3D camera; and the second center point coordinate is determined according to the corner point coordinate set in the camera coordinate system corresponding to the second 3D camera.
In some exemplary embodiments, the processor is configured to perform:
adjusting the corner point coordinate set in the camera coordinate system corresponding to the first 3D camera by using the first center point coordinate to obtain an adjusted first set corresponding to the first 3D camera; and adjusting the corner point coordinate set in the camera coordinate system corresponding to the second 3D camera by using the second center point coordinate to obtain an adjusted second set corresponding to the second 3D camera;
determining a covariance matrix between the first set and the second set;
extracting the U matrix and the V matrix of the covariance matrix by applying singular value decomposition (SVD);
determining a rotation matrix in the calibration matrix according to the U matrix and the V matrix;
and determining a translation matrix in the calibration matrix according to the rotation matrix, the first central point coordinate and the second central point coordinate.
In some exemplary embodiments, if the number of effective corner points of the calibration object is less than the number of the 3D cameras, the processor is further configured to perform:
rotating the calibration object to enable the target 3D camera to shoot any first effective angular point, and enabling any associated 3D camera except the target 3D camera to shoot any second effective angular point; the target 3D camera is a 3D camera which does not shoot any effective corner point before the calibration object is rotated, and the associated 3D camera is a 3D camera which shoots any effective corner point before the calibration object is rotated;
and determining a calibration matrix between the target 3D camera and the associated 3D camera according to the first effective corner point and the second effective corner point.
In some exemplary embodiments, an imaging area of the calibration object in any one of the 3D cameras occupies at least a preset proportion of the imaging field of view of that 3D camera.
According to a third aspect of the exemplary embodiments, there is provided a plurality of 3D camera calibration apparatuses based on polyhedral geometric constraints, the apparatus including:
the data acquisition module is used for acquiring RGB images and depth images of calibration objects acquired by the plurality of 3D cameras under different viewing angles respectively and extracting point cloud data sets corresponding to the depth images respectively; the calibration object is a polyhedron, the number of effective corner points of the calibration object is greater than or equal to the number of the 3D cameras, and the effective corner points are corner points formed by at least three visible surfaces of the calibration object;
the coordinate determination module is used for identifying at least one marker in the RGB images acquired by the 3D camera aiming at any one 3D camera and determining a reference corner point included in the RGB images according to the at least one marker; determining reference coordinates of the reference corner points under a camera coordinate system of the 3D camera based on the point cloud data set acquired by the 3D camera and the reference corner points; determining a corner point coordinate set of the 3D camera under the camera coordinate system according to the reference coordinates and the edge length of the calibration object;
and the calibration module is used for determining a calibration matrix between every two 3D cameras based on the determined corner point coordinate set under the camera coordinate system corresponding to each 3D camera so as to complete the combined calibration between the 3D cameras.
In some exemplary embodiments, the coordinate determination module is specifically configured to:
determining at least one marking plane of the calibration object where the at least one marker is located;
determining corner point identifiers included in the RGB image according to a preset relationship, in the calibration object, between corner point identifiers and marking plane identifiers;
and determining reference corner points included in the RGB image according to the corner point identifiers.
In some exemplary embodiments, the polyhedron is a regular polyhedron, and the coordinate determination module is specifically configured to:
constructing three reference planes of the calibration object with the reference corner point as the center based on the point cloud data set acquired by the 3D camera;
and determining normal vectors of the three reference planes in the camera coordinate system, and determining reference coordinates of the reference corner points in the camera coordinate system according to the normal vectors.
In some exemplary embodiments, the coordinate determination module is specifically configured to:
determining direction vectors of the reference corner points on the three reference planes according to normal vectors of the three reference planes in the camera coordinate system;
for each other corner point except the reference corner point, determining a direction vector corresponding to each other corner point according to the position relation of the other corner points relative to the reference corner point and the direction vectors on the three reference planes;
determining the direction parameters of the other angular points according to the direction vectors corresponding to the other angular points and the edge length of the polyhedron;
and determining the coordinates of each other corner point under the camera coordinate system based on the direction parameters and the coordinates of the reference corner point so as to determine a corner point coordinate set under the camera coordinate system corresponding to the 3D camera.
In some exemplary embodiments, the calibration module is specifically configured to:
for any first 3D camera and any second 3D camera in the 3D cameras, determining a calibration matrix between the first 3D camera and the second 3D camera according to a first center point coordinate of the first 3D camera, a second center point coordinate of the second 3D camera, a corner point coordinate set in a camera coordinate system corresponding to the first 3D camera, and a corner point coordinate set in a camera coordinate system corresponding to the second 3D camera;
determining a calibration matrix between each two 3D cameras;
the first center point coordinate is obtained according to a corner point coordinate set in a camera coordinate system corresponding to the first 3D camera; and the second center point coordinate is determined according to the corner point coordinate set in the camera coordinate system corresponding to the second 3D camera.
In some exemplary embodiments, the calibration module is specifically configured to:
adjusting the corner point coordinate set in the camera coordinate system corresponding to the first 3D camera by using the first center point coordinate to obtain an adjusted first set corresponding to the first 3D camera; and adjusting the corner point coordinate set in the camera coordinate system corresponding to the second 3D camera by using the second center point coordinate to obtain an adjusted second set corresponding to the second 3D camera;
determining a covariance matrix between the first set and the second set;
extracting the U matrix and the V matrix of the covariance matrix by applying singular value decomposition (SVD);
determining a rotation matrix in the calibration matrix according to the U matrix and the V matrix;
and determining a translation matrix in the calibration matrix according to the rotation matrix, the first central point coordinate and the second central point coordinate.
In some exemplary embodiments, if the number of effective corner points of the calibration object is less than the number of the 3D cameras, the calibration module is further configured to:
rotating the calibration object to enable the target 3D camera to shoot any first effective angular point, and enabling any associated 3D camera except the target 3D camera to shoot any second effective angular point; the target 3D camera is a 3D camera which does not shoot any effective corner point before the calibration object is rotated, and the associated 3D camera is a 3D camera which shoots any effective corner point before the calibration object is rotated;
and determining a calibration matrix between the target 3D camera and the associated 3D camera according to the first effective corner point and the second effective corner point.
In some exemplary embodiments, an imaging area of the calibration object in any one of the 3D cameras occupies at least a preset proportion of the imaging field of view of that 3D camera.
According to a fourth aspect of the exemplary embodiments, there is provided a computer storage medium having stored therein computer program instructions, which when run on a computer, cause the computer to perform a method of multi-3D camera calibration based on polyhedral geometric constraints as described in the first aspect.
The embodiment of the application has the following beneficial effects:
RGB images and depth images of a polyhedral calibration object acquired by a plurality of 3D cameras at different viewing angles are acquired, and the point cloud data sets corresponding to the depth images are extracted. In this way, for any 3D camera, the corner point coordinate set of the calibration object in that camera's coordinate system can be determined based on the RGB image and the point cloud data set. Because the calibration object carries markers, the reference corner point included in the RGB image can be determined from at least one identified marker; the reference coordinate of the reference corner point in the camera coordinate system is then determined from the point cloud data set and the reference corner point. Because the other corner points have known positional relations to the reference corner point, their coordinates can be determined from the reference coordinate and the edge length of the calibration object, yielding the corner point coordinate set of the 3D camera in its camera coordinate system. A corner point coordinate set is thus obtained for each 3D camera in its corresponding camera coordinate system, and the calibration matrix between every two 3D cameras is then determined from these sets to complete the joint calibration. Because the polyhedral calibration object has a spatial structure, common views among the 3D cameras are easy to form, and its position need not be changed for each pair of 3D cameras being calibrated; the transformation relations among the 3D cameras are conveniently obtained through the geometric constraints provided by the polyhedron, the operation is simple, and the layout restrictions are few. In addition, the calibration process uses the coordinates of every corner point of the polyhedral calibration object, so the calibration accuracy is high.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic diagram illustrating an application scenario of a calibration method for multiple 3D cameras based on polyhedral geometric constraints according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating an exemplary cubic calibration object provided by an embodiment of the invention;
FIG. 3 is a schematic diagram illustrating another cube calibration object provided by an embodiment of the invention;
FIG. 4 is a schematic diagram illustrating the marking of each corner of a cube according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for calibrating a plurality of 3D cameras based on polyhedral geometric constraints according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a positional relationship between a multi-3D camera and a calibration object provided by an embodiment of the invention;
FIG. 7 is a schematic diagram illustrating a process for determining a set of corner coordinates provided by an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating a normal vector when the corner 0 is a reference corner according to an embodiment of the present invention;
fig. 9 is a schematic diagram illustrating a normal vector when the corner 1 is a reference corner according to an embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating a process for determining a calibration matrix according to an embodiment of the invention;
FIG. 11 is a schematic diagram illustrating a positional relationship of a calibration object before rotation according to an embodiment of the present invention;
FIG. 12 is a schematic diagram illustrating a position relationship of a calibration object after rotation according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram illustrating a plurality of 3D camera calibration apparatuses based on polyhedral geometric constraint according to an embodiment of the present invention;
fig. 14 schematically illustrates a structural diagram of multiple 3D camera calibration devices based on polyhedral geometric constraints, according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise specified, "/" indicates "or"; for example, A/B may indicate A or B. "And/or" herein merely describes an association relation between associated objects, indicating that three relations may exist; for example, "A and/or B" may indicate the three cases of A alone, both A and B, and B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
For the purpose of clearly describing the embodiments of the present application, explanations are given below to terms in the present application.
(1) 3D camera: comprises a color camera and a depth camera. The color camera can acquire RGB images, and the depth camera can recover, from infrared images, a spatial three-dimensional point cloud whose coordinate origin is the center of the depth camera.
(2) Camera intrinsic parameters: determined by the camera's hardware, independent of where the camera is placed. They mainly comprise (f, s_x, s_y, u_0, v_0, k_1, k_2, k_3, p_1, p_2), where f denotes the lens focal length; (u_0, v_0) denotes the projection position of the optical axis on the imaging chip, i.e., the coordinates of the optical axis in the pixel coordinate system; (s_x, s_y) denotes the physical size of a single pixel of the camera chip (pixels/mm); (k_1, k_2, k_3) denote radial distortion, the radial imaging error caused by lens machining and mounting; and (p_1, p_2) denote tangential distortion, the tangential imaging error caused by lens machining and mounting. Calibrating the camera intrinsic parameters mainly accomplishes distortion correction, ensuring that the images shot by the camera are not deformed.
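As a concrete illustration of how these parameters act, the following is a minimal NumPy sketch of the standard radial-tangential (Brown-Conrady) projection model that the parameters above describe; the patent itself does not spell the model out, and the function name and the assumption that (s_x, s_y) is the pixel pitch in mm are illustrative.

```python
import numpy as np

def project_with_distortion(P_cam, f, sx, sy, u0, v0, k1, k2, k3, p1, p2):
    """Project a 3D point (camera coordinates) to pixel coordinates using
    the radial-tangential distortion model described by the intrinsics
    above. Assumes (sx, sy) is the physical pixel size in mm; if they are
    given as pixels/mm, replace f/sx with f*sx (and likewise for sy)."""
    x, y = P_cam[0] / P_cam[2], P_cam[1] / P_cam[2]  # normalized image coords
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3   # radial distortion term
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = (f / sx) * xd + u0                           # to pixel coordinates
    v = (f / sy) * yd + v0
    return u, v
```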
(3) Camera extrinsic parameters: comprise a rotation matrix and a translation vector. The rotation matrix describes the directions of the coordinate axes of the world coordinate system relative to the camera coordinate system, and the translation vector describes the position of the world coordinate system's origin in the camera coordinate system. Together, the rotation matrix and the translation vector describe the transformation relationship between the world coordinate system and the camera coordinate system.
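In matrix form, the extrinsic transformation maps a world point into the camera coordinate system. A minimal sketch (names illustrative), also showing the homogeneous 4x4 form in which the calibration matrices discussed below are conveniently handled:

```python
import numpy as np

def world_to_camera(P_world, R, t):
    """P_cam = R @ P_world + t, with R the rotation matrix and t the
    translation vector of the extrinsic parameters."""
    return R @ np.asarray(P_world) + np.asarray(t)

def to_homogeneous(R, t):
    """Pack (R, t) into a 4x4 transform T so that chaining and inverting
    camera-to-camera calibrations reduces to matrix products."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```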
The idea of the embodiments of the present application is summarized below.
Based on the imaging condition of the 3D camera, a multi-camera system can be formed by a plurality of 3D cameras and is used for three-dimensional reconstruction, motion capture and other applications. The plurality of 3D cameras in such a system are generally uniformly distributed in the environment, enclosing a working space in which the object is photographed, and in order to be able to sufficiently cover the scene around the object, the number of cameras is generally 4 or more, and there is a part of common view between any adjacent 3D cameras for redundancy.
The imaging of a camera is based on the pinhole imaging principle and a lens distortion model: an object in real space is projected onto the imaging chip, yielding the pixel coordinates, in the image coordinate system, of a point given in the world coordinate system. A camera's imaging system mainly involves the following coordinate systems: the world coordinate system, the camera coordinate system, and the image coordinate system. The world coordinate system describes the position of the target object in the real world; it is defined by the user and may coincide with one of the camera coordinate systems in the multi-camera system. The camera coordinate system is the coordinate system generated in the pinhole imaging model, defined approximately at the focal point of the lens, and is the reference that describes the position of an object with respect to the camera. The image coordinate system describes the position coordinates of imaging points on the imaging plane; combined with the size information of the imaging chip, the coordinates corresponding to the physical color values of the digital image can be obtained.
According to the imaging principle, the image or point cloud collected by each camera takes that camera's center as the coordinate origin, and the data of different cameras are unrelated to one another. To unify multiple cameras into one system working together, the coordinate transformation relationships between the cameras must be known. The process of obtaining these coordinate transformation relationships is called extrinsic calibration of the multiple cameras, or simply multi-camera calibration; that is, the data collected by the cameras are aligned together.
The calibration-plate-based approach is mainly aimed at the case where the intrinsic parameters of the multiple cameras are unknown. If the camera intrinsic parameters are missing, then under the influence of distortion and other factors the images shot by the camera are obviously deformed, which in turn degrades the accuracy of multi-camera extrinsic calibration. When a calibration plate is used, the plate must be moved between the different cameras to change the viewing angle, which makes the calibration process cumbersome. In addition, the detection of the plate's feature corners depends strongly on the illumination and on the plate's position in the field of view, leading to low corner-detection accuracy or outright detection failure, which affects the calibration result. Some solutions mount a turntable under the calibration plate to switch automatically between the cameras, but this increases the calibration cost.
Given these limitations of calibration-plate calibration, the embodiments of the present application provide a method for calibrating multiple 3D cameras based on polyhedral geometric constraints. A machined polyhedral tool is placed as the calibration object in the scene enclosed by the multiple 3D cameras; the faces of the polyhedron are distinguished by color blocks, two-dimensional codes, or the like; the orientation of the polyhedron is adjusted so that, as far as possible, each 3D camera shoots a corner point where any three faces intersect; and the RGB camera in each 3D camera distinguishes the shot corner point through the marks identifying the faces. For example, the coordinate system of one of the cameras may be used as the world coordinate system of the whole multi-camera system, the poses of the other cameras in that coordinate system determined, and finally the point clouds of all cameras converge together into a complete spatial polyhedron. Alternatively, the transformation matrix between every two 3D cameras can be calculated and finally transformed into one common coordinate system by matrix transformation, for example into the coordinate system of one of the cameras. Thus, on the premise that the dimensions of the polyhedron are known in advance, the transformation matrices between the cameras can be obtained by reverse calculation. Moreover, because the polyhedral calibration object provides spatial geometric constraints, common views between cameras are easy to form, and the layout restrictions are few.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 schematically illustrates an application scenario of the multi-camera calibration method based on polyhedral geometric constraints. The multi-camera calibration system is composed of a plurality of 3D cameras (including but not limited to 101, 102, 103 and 104) deployed at multiple viewing angles around a calibration object 20. The calibration object 20 should occupy more than 1/5 of the field of view of each 3D camera, so as to ensure that the depth image of the calibration object 20 acquired by the 3D camera at each viewing angle contains sufficient point cloud data.
Based on the system architecture shown in fig. 1, the coordinate systems of the plurality of 3D cameras may be set to be in the coordinate system of any one of the plurality of 3D cameras, that is, the coordinate system of any one of the 3D cameras may be taken as the world coordinate system.
It should be noted that fig. 1 is only an example, and the number and type of the 3D cameras are not limited in the embodiments of the present application, and may be two binocular cameras, one structured light camera, and one TOF camera, for example.
The calibration object used in the embodiment of the application may be a standard polyhedron tool, such as a pentagonal prism, a hexagonal prism, or other standard geometric bodies, or a non-standard geometric body with a known dimensional relationship, and may provide a constraint relationship during calibration, so as to achieve the purpose of calibration. The calibration object has at least the following characteristics:
(1) The calibration object can be cut directly from a single piece of material, or assembled from several separately machined faces; whichever method is used, the rigidity of the polyhedron must be guaranteed so that its overall dimensions do not change.
(2) The faces of the calibration object must be flat, without protrusions, depressions or foreign matter; there is no requirement on the interior, which may be solid or hollow.
(3) The calibration object must be made of a matte material that does not absorb infrared light, so that no camera is prevented from acquiring a valid point cloud by specular reflection or light absorption.
(4) The size of the calibration object is determined by the spatial region in which the system works; the image of the tool in each camera should occupy more than 1/5 of the field of view, since sufficient point cloud data ensures the stability of algorithm identification.
Taking a cube calibration object as an example: when it is placed in a scene, five faces are visible. The five visible faces can be distinguished by pasting color blocks or two-dimensional codes on each plane; the pasting position may be one color block at the center of each plane, or 3 color blocks near each corner; refer to fig. 2 and fig. 3.
Referring to fig. 4, each face is distinguished by a color block or two-dimensional code according to the requirements on the calibration object. In a specific example, the faces and corner points of the cube are defined as follows:
The front, back, left and right faces of the cube are A, B, C and D in sequence, and the top face is E; the bottom face is invisible and need not be defined. The intersection point of faces A, D and E is defined as corner 0, the intersection point of faces A, B and E as corner 1, the intersection point of faces B, C and E as corner 2, and the intersection point of faces C, D and E as corner 3; the four points on the lower face of the cube corresponding to 0, 1, 2 and 3 in sequence are 4, 5, 6 and 7.
Consider a system consisting of 4 3D cameras, assumed to be uniformly distributed as shown in fig. 6, with serial numbers Cam0, Cam1, Cam2 and Cam3 in sequence. In a specific example, the camera coordinate system of Cam0 may be used as the camera coordinate system of the whole system, and the calibration matrices solved sequentially: for example, the calibration matrices T01, T02 and T03 of Cam1, Cam2 and Cam3 relative to Cam0. Alternatively, the calibration matrix between Cam0 and Cam1, between Cam1 and Cam2, and between Cam2 and Cam3 can be solved. Whichever form of calibration matrix is adopted, the joint calibration of the multiple 3D cameras can be achieved.
Based on the calibration object, fig. 5 exemplarily shows a method for calibrating a plurality of 3D cameras based on polyhedral geometric constraints, which is provided by the embodiment of the present application, and as shown in fig. 5, the process is executed by a calibration apparatus, and the method mainly includes the following steps:
s501, acquiring RGB (red, green and blue) images and depth images of calibration objects acquired by a plurality of 3D cameras under different viewing angles respectively, and extracting point cloud data sets corresponding to the depth images respectively; the calibration object is in the shape of a polyhedron, the number of effective angular points of the calibration object is greater than or equal to the number of the 3D cameras, and the effective angular points are angular points formed by at least three visible surfaces of the calibration object.
S502, aiming at any one 3D camera, identifying at least one marker in an RGB image acquired by the 3D camera, and determining a reference corner point included in the RGB image according to the at least one marker; determining reference coordinates of the reference corner points under a camera coordinate system of the 3D camera based on a point cloud data set acquired by the 3D camera and the reference corner points; and determining a corner point coordinate set of the 3D camera under a camera coordinate system according to the reference coordinates and the edge length of the calibration object.
S503, determining a calibration matrix between every two 3D cameras based on the determined corner point coordinate set under the camera coordinate system corresponding to each 3D camera, so as to complete the combined calibration between the 3D cameras.
In the embodiment of the application, RGB images and depth images of a polyhedral calibration object acquired by a plurality of 3D cameras at different viewing angles are acquired, and the point cloud data sets corresponding to the depth images are extracted. In this way, for any 3D camera, the corner point coordinate set of the calibration object in that camera's coordinate system can be determined based on the RGB image and the point cloud data set. Because the calibration object carries markers, the reference corner point included in the RGB image can be determined from at least one identified marker, and the reference coordinate of the reference corner point in the camera coordinate system is then determined from the point cloud data set and the reference corner point; because the other corner points have known positional relations to the reference corner point, their coordinates can be determined from the reference coordinate and the edge length of the calibration object, yielding the corner point coordinate set of the 3D camera in its camera coordinate system. A corner point coordinate set is thus obtained for each 3D camera in its corresponding camera coordinate system, and the calibration matrix between every two 3D cameras is then determined from these sets to complete the joint calibration. Because the polyhedral calibration object has a spatial structure, common views among the 3D cameras are easy to form, and its position need not be changed for each pair of 3D cameras being calibrated; the transformation relations among the 3D cameras are conveniently obtained through the geometric constraints provided by the polyhedron, the operation is simple, and the layout restrictions are few. In addition, the calibration process uses the coordinates of every corner point of the polyhedral calibration object, so the calibration accuracy is high.
Regarding S501: taking a cube as the calibration object, when the cube is placed in a scene one face is invisible, so the number of visible faces is 5, and a corner point formed by any three visible faces is an effective corner point; in this example the number of effective corner points is 4. Therefore, when the number of 3D cameras is 1 to 4, the number of effective corner points of the calibration object is greater than or equal to the number of 3D cameras. If the calibration object is another polyhedron, a corner point may be formed by 4 or more faces; that case is not discussed here, and the description proceeds with a cube calibration object and four 3D cameras.
The orientation of the cube is adjusted so that one corner point of the cube faces Cam0 and Cam0 can photograph the three planes forming that corner point, and the position of the cube in the space is adjusted appropriately. Considering that the 4 cameras are uniformly distributed, it is easy to find a position such that the other three cameras can photograph the remaining 3 corner points in turn. Referring to fig. 6, Cam0 can capture corner 0, Cam1 corner 1, Cam2 corner 2, and Cam3 corner 3; this placement is only an example and does not constitute a specific limitation.
The 4 3D cameras are triggered to shoot simultaneously, 4 RGB images and 4 depth images of the calibration object are acquired, and the point cloud data set corresponding to each depth image is extracted, yielding 4 groups of point cloud data, each group being one point cloud data set. The calibration process uses the point cloud data of the 3D cameras, so it is little affected by illumination, simple to operate, and stable in identification.
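For reference, extracting a point cloud from a depth image is the standard pinhole back-projection using the intrinsics defined earlier. A minimal sketch, with illustrative parameter names (fx = f/s_x, fy = f/s_y in pixels):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, u0, v0):
    """Back-project a depth image (depth in meters per pixel) into a point
    cloud expressed in the camera coordinate system."""
    v, u = np.indices(depth.shape)           # pixel row/column grids
    z = depth
    x = (u - u0) * z / fx                    # X = (u - u0) * Z / fx
    y = (v - v0) * z / fy                    # Y = (v - v0) * Z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                # drop invalid zero-depth pixels
```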
Regarding S502: after the RGB images and point cloud data sets of the calibration object acquired by each 3D camera at its viewing angle are obtained, the corner point coordinate set in the camera coordinate system corresponding to each 3D camera is determined. The process of determining the corner point coordinate set of the calibration object is the same for every 3D camera, so Cam0 is taken as the example here. The process mainly includes three steps, S5021 to S5023; see fig. 7.
S5021, at least one marker in the RGB image collected by the 3D camera is identified, and a reference corner point included in the RGB image is determined according to the at least one marker.
This process can be implemented as follows: determining at least one marking plane of the calibration object on which the at least one marker is located; determining the corner point identifier included in the RGB image according to the preset relationship, in the calibration object, between corner point identifiers and marking plane identifiers; and determining the reference corner point included in the RGB image according to the corner point identifier.
Specifically, during the manufacture of the calibration object, color blocks or two-dimensional codes that can distinguish the planes are pasted on the different planes in order to tell the marking planes apart. For example, referring to fig. 3, since corner 0 is formed by the marking planes A, D and E, the relative positional relationship of the markers 31, 32 and 33 is combined to determine that the corner identifier included in the RGB image acquired by Cam0 is 0; it can thus be determined that the reference corner point included in the RGB image acquired by Cam0 is corner No. 0 of the cube. That corner is then used as the reference corner point, and the coordinates of the reference corner point in the camera coordinate system and the coordinates of the other corner points in the camera coordinate system are calculated in turn, as sketched below.
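Under the face/corner definitions of fig. 4, the preset relationship between marking planes and corner identifiers can be as simple as a lookup table. A hypothetical sketch (the patent does not fix a data structure):

```python
# Map from the set of visible marked faces meeting at a corner to the
# corner identifier, per the definitions of fig. 4 (A, D, E meet at 0, etc.).
CORNER_OF_FACES = {
    frozenset("ADE"): 0,
    frozenset("ABE"): 1,
    frozenset("BCE"): 2,
    frozenset("CDE"): 3,
}

def reference_corner(visible_faces):
    """visible_faces: face labels decoded from the markers in the RGB
    image, e.g. ['A', 'D', 'E'] -> reference corner 0."""
    return CORNER_OF_FACES[frozenset(visible_faces)]
```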
S5022, determining reference coordinates of the reference corner points under a camera coordinate system of the 3D camera based on the point cloud data set acquired by the 3D camera and the reference corner points.
This process can be implemented as follows:
constructing three reference planes of a calibration object with a reference corner as a center based on a point cloud data set acquired by a 3D camera; and determining normal vectors of the three reference planes in a camera coordinate system, and determining reference coordinates of the reference corner points in the camera coordinate system according to the normal vectors.
Specifically, three planes in space are fitted by applying a point cloud processing algorithm to the point cloud data set acquired by the 3D camera; since the point cloud data set is obtained by shooting the calibration object, the fitted planes are reference planes of the calibration object, and the fitting process takes the reference corner point as the center, as can be seen in fig. 8. Since the reference point is the corner labeled 0, the three reference planes are A, D and E, respectively. Since the camera coordinate system of Cam0 is taken as the world coordinate system in the embodiment of the present application, the normal vectors of the three reference planes in the camera coordinate system can be determined; the normal direction of reference plane A is defined as the X-axis, the normal direction of plane D as the Y-axis, and the normal direction of plane E as the Z-axis.
If a determined normal vector points to the interior of the cube, it is inverted, so that the normal vectors of the three reference planes all point outward from the cube. Any plane can be expressed by a plane equation ax + by + cz = d with a^2 + b^2 + c^2 = 1, so that the vector (a, b, c) represents the unit normal vector of the plane. Thus, the equation of reference plane A is a_1 x + b_1 y + c_1 z = d_1, the equation of reference plane D is a_2 x + b_2 y + c_2 z = d_2, and the equation of reference plane E is a_3 x + b_3 y + c_3 z = d_3.
The spatial coordinates of the corner point shared by the three reference planes, P_0(x_0, y_0, z_0), are solved by constructing the linear equation system:

    a_1 x + b_1 y + c_1 z = d_1
    a_2 x + b_2 y + c_2 z = d_2
    a_3 x + b_3 y + c_3 z = d_3

The solution is the reference coordinate (x_0, y_0, z_0) of the reference corner point P_0 shot by Cam0.
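A minimal sketch of this step, assuming Open3D's RANSAC plane segmentation and NumPy; the distance threshold is illustrative. Open3D returns each plane as ax + by + cz + d = 0, so the sign of d is flipped to match the form ax + by + cz = d used above:

```python
import numpy as np
import open3d as o3d

def corner_from_point_cloud(points):
    """Fit the three reference planes around the corner with RANSAC and
    solve the 3x3 linear system for the corner coordinates P0."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    normals, rhs = [], []
    for _ in range(3):
        (a, b, c, d), inliers = pcd.segment_plane(
            distance_threshold=0.005, ransac_n=3, num_iterations=1000)
        normals.append([a, b, c])
        rhs.append(-d)                                   # ax + by + cz = -d
        pcd = pcd.select_by_index(inliers, invert=True)  # peel this plane off
    # Solve [n1; n2; n3] @ P0 = [d1; d2; d3] for P0 = (x0, y0, z0)
    return np.linalg.solve(np.array(normals), np.array(rhs))
```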
And S5023, determining a corner point coordinate set of the 3D camera in a camera coordinate system according to the reference coordinates and the edge length of the calibration object.
This process can be implemented as follows:
determining direction vectors of the reference corner points on the three reference planes according to normal vectors of the three reference planes in a camera coordinate system; for each other corner point except the reference corner point, determining a direction vector corresponding to each other corner point according to the position relation of the other corner points relative to the reference corner point and the direction vectors on the three reference planes; determining direction parameters of other corner points according to the direction vectors corresponding to the other corner points and the edge length of the polyhedron; and determining coordinates of other corner points in a camera coordinate system based on the direction parameters and the coordinates of the reference corner points to determine a corner point coordinate set in the camera coordinate system corresponding to the 3D camera.
Specifically, a cube includes 8 corner points, and in this example corner point 0 is the reference corner point. In order to calculate the coordinates of the other corner points, a coordinate system attached to the reference corner point is defined: the coordinate axis X is obtained as the cross product of the normal of plane D and the normal of plane E, the coordinate axis Y as the cross product of the normal of plane E and the normal of plane A, and the coordinate axis Z as the cross product of axis X and axis Y. Cross products are used because the fitted normals are generally not exactly orthogonal; the resulting coordinate system satisfies the definition of a right-handed system.
The edge length of the cube is known and is denoted as L. The coordinates of each of the other corner points are obtained by starting from P0 and stepping along the axes X, Y, Z of the corner-attached coordinate system by either 0 or L, that is:

Pi = P0 + L·(αi·X + βi·Y + γi·Z), with αi, βi, γi ∈ {0, 1}

Taking the calculation of the coordinates of point P1 as an example: P1 is adjacent to P0 along one of the axes (taken here as the X axis), so L·X is the direction parameter of point P1, and the coordinates of P1 can be calculated from the coordinates of P0 as P1 = P0 + L·X. The coordinates of P2, P5, P7 and P6 are calculated mainly by means of the coordinates of the adjacent corner points, which are in turn calculated from the coordinates of P0.
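The corner generation can be sketched as follows. The index-to-offset mapping OFFSETS below is a hypothetical assumption, since the corner numbering of fig. 8 is not reproduced here; only the pattern (each corner is P0 plus a 0/1 combination of the three axes scaled by L) follows from the cube geometry described above.

```python
import numpy as np

def corner_frame(n_a, n_d, n_e):
    """Right-handed orthonormal frame at the reference corner, built from the
    outward unit normals of planes A (~X), D (~Y) and E (~Z)."""
    x = np.cross(n_d, n_e); x /= np.linalg.norm(x)   # Y x Z -> X
    y = np.cross(n_e, n_a); y /= np.linalg.norm(y)   # Z x X -> Y
    z = np.cross(x, y)                               # exactly orthogonal
    return x, y, z

# Hypothetical corner-index -> (alpha, beta, gamma) offsets in {0, 1}^3:
OFFSETS = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1)]

def cube_corners(p0, x, y, z, L):
    """P_i = P_0 + L * (alpha_i*X + beta_i*Y + gamma_i*Z) for each corner i."""
    return np.array([p0 + L * (a * x + b * y + c * z) for a, b, c in OFFSETS])
```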
To this end, when the reference corner point identified by Cam0 is corner point No. 0, the coordinate set of the 8 corner points (vertices) of the cube in the Cam0 coordinate system is obtained, which can be expressed as:

QCam0 = {Pi(0) | i = 0, …, 7}
If instead the reference corner point identified by Cam0 is corner point No. 1, the coordinates of each of the other corner points can similarly be solved according to the geometric constraint relationship of the cube, and the coordinates of the other corner points can be determined with reference to fig. 9. In this case P1 is equal to the reference coordinates, and the coordinates of the remaining corner points follow from P1 by stepping along the corner-attached axes by the edge length L in the same manner as above.
A similar situation occurs when Cam0 identifies any other corner point, which is not described in detail here.
Thus, the coordinates of each corner point of the calibration object, as determined by each 3D camera in its own camera coordinate system, are obtained. They are expressed as follows:

QCam0 = {Pi(0) | i = 0, …, 7}
QCam1 = {Pi(1) | i = 0, …, 7}
QCam2 = {Pi(2) | i = 0, …, 7}
QCam3 = {Pi(3) | i = 0, …, 7}

where Pi(k) denotes the coordinates of corner point i in the camera coordinate system of Cam k.
and S503, determining a calibration matrix between each two 3D cameras based on the determined corner point coordinate set under the camera coordinate system corresponding to each 3D camera, so as to complete the joint calibration between each two 3D cameras.
Since the calibration process aligns the independent camera coordinate systems to a common coordinate system, the common coordinate system may be any coordinate system, for example the coordinate system of any one of the cameras, such as the camera coordinate system of Cam0; this can also be understood as taking the camera coordinate system of Cam0 as the world coordinate system.

Therefore, after the corner point coordinate set in the camera coordinate system corresponding to each 3D camera has been obtained, a calibration matrix T01 of Cam1 relative to Cam0, a calibration matrix T02 of Cam2 relative to Cam0, and a calibration matrix T03 of Cam3 relative to Cam0 can be determined. Alternatively, the camera coordinate system of Cam3 can be used as the common coordinate system or world coordinate system, in which case the calibration matrices of Cam0, Cam1 and Cam2 relative to Cam3 are determined respectively. Which camera's coordinate system is used as the common coordinate system is not limited here.

In the embodiment of the present application, the camera coordinate system of Cam0 is taken as the world coordinate system; that is, T01, T02 and T03 are determined in the following.
Since the process of determining the calibration matrix between any two 3D cameras is the same, the following description will be made with respect to the process of determining the calibration matrix between two 3D cameras.
When the correspondence between point sets in two coordinate systems is known, the calibration matrix (a rotation matrix and a translation matrix) between the two coordinate systems can be solved by using the theory of rigid-body spatial transformation. Assume there are two point sets A and B with the same number of elements in one-to-one correspondence. The process of determining the calibration matrix is then the process of determining the rotation matrix and translation matrix between the two point sets.
Illustratively, the determination process of the calibration matrix between any two 3D cameras is implemented as follows: for any first 3D camera and any second 3D camera in each 3D camera, determining a calibration matrix between the first 3D camera and the second 3D camera according to a first center point coordinate of the first 3D camera, a second center point coordinate of the second 3D camera, a corner point coordinate set under a camera coordinate system corresponding to the first 3D camera and a corner point coordinate set under a camera coordinate system corresponding to the second 3D camera; the first center point coordinate is obtained according to a corner point coordinate set under a camera coordinate system corresponding to the first 3D camera; the second center point coordinates are determined from a set of corner point coordinates in the camera coordinate system corresponding to the second 3D camera.
Specifically, the determination process of the calibration matrix may be implemented in the following manner, with reference to fig. 10:
S1001, adjusting the corner point coordinate set under the camera coordinate system corresponding to the first 3D camera by using the first center coordinate to obtain an adjusted first set corresponding to the first 3D camera; and adjusting the corner point coordinate set under the camera coordinate system corresponding to the second 3D camera by using the second center coordinate to obtain an adjusted second set corresponding to the second 3D camera.
S1002, determining a covariance matrix between the first set and the second set.
S1003, extracting a U matrix and a V matrix in the covariance matrix by applying a matrix singular value decomposition mode.
And S1004, determining a rotation matrix in the calibration matrix according to the U matrix and the V matrix.
S1005, determining a translation matrix in the calibration matrix according to the rotation matrix, the first central point coordinate and the second central point coordinate.
Specifically, taking Cam0 as an example of the first 3D camera and Cam1 as an example of the second 3D camera, the first center point coordinate of the calibration object in the Cam0 camera coordinate system is determined according to the corner point coordinate set corresponding to Cam0, and the second center point coordinate of the calibration object in the Cam1 camera coordinate system is determined according to the corner point coordinate set corresponding to Cam1.
In a specific example, the corner point coordinate set QCam0 corresponding to Cam0 is denoted as point set A, and the corner point coordinate set QCam1 corresponding to Cam1 is denoted as point set B. The first center point coordinate and the second center point coordinate are then calculated, respectively:
μA = (1/N)·Σi Ai,  μB = (1/N)·Σi Bi,  i = 0, …, N−1
where μA is the first center point coordinate, μB is the second center point coordinate, N is the number of points in point set A and point set B (the premise of calculating the rotation matrix is that the two point sets have the same number of elements in one-to-one correspondence), and i is the index of a point in point set A or point set B. In this example, N = 8 and i ranges from 0 to 7.
In S1001, the corner point coordinate set under the camera coordinate system corresponding to the first 3D camera is adjusted by using the first center coordinate to obtain the adjusted first set corresponding to the first 3D camera, and the corner point coordinate set under the camera coordinate system corresponding to the second 3D camera is adjusted by using the second center coordinate to obtain the adjusted second set corresponding to the second 3D camera.

The adjustment process may, for example, subtract the corresponding center coordinates from every point in each point set, which moves each point set to the origin and generates the first set and the second set:

A'i = Ai − μA,  B'i = Bi − μB

where A'i constitutes the adjusted first set corresponding to Cam0, and B'i constitutes the adjusted second set corresponding to Cam1.
In S1002, a covariance matrix between the first set and the second set is determined. For example, the covariance matrix is determined as follows:

H = Σi A'i·(B'i)ᵀ,  i = 0, …, N−1

where H is the 3×3 covariance matrix between the first set and the second set (with A'i and B'i treated as column vectors).
In S1003, the unitary matrices U and V of the covariance matrix are extracted by applying matrix Singular Value Decomposition (SVD); in the extraction process, the singular value matrix S is also obtained, where:
SVD(H)=[U,S,V]
In S1004, the rotation matrix is determined as R = V·Uᵀ.
In S1005, the translation matrix t can be obtained from R, the first center point coordinate and the second center point coordinate:

t = μB − R·μA
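As a compact illustration of S1001 to S1005, the following is a minimal numpy sketch of this rigid alignment (the Kabsch method). The determinant check is a standard refinement to exclude reflections and is an addition not spelled out in the steps above.

```python
import numpy as np

def rigid_transform(A, B):
    """Find R, t such that B_i ~ R @ A_i + t, given (N, 3) point sets A and B
    in one-to-one correspondence (here N = 8 cube corners)."""
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)   # S1001: center coordinates
    Ac, Bc = A - mu_a, B - mu_b                   # move both sets to the origin
    H = Ac.T @ Bc                                 # S1002: 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)                   # S1003: SVD(H) = U S V^T
    R = Vt.T @ U.T                                # S1004: rotation R = V U^T
    if np.linalg.det(R) < 0:                      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_b - R @ mu_a                           # S1005: translation
    return R, t

# Following the point-set convention above (A from Cam0, B from Cam1):
# R01, t01 = rigid_transform(Q_cam0, Q_cam1) gives the rotation and
# translation of the calibration matrix T01.
```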
The calculation process of the calibration matrix has been described above taking Cam0 and Cam1 as an example, yielding T01. Substituting the corner point set pairs (QCam0, QCam2), (QCam0, QCam3) and, when a fifth camera Cam4 is present, (QCam0, QCam4) into the same procedure yields T02, T03 and T04, respectively.
In order to improve the calibration precision, the cube can be placed at different positions and heights in the space, provided each camera can still shoot the cube calibration object; the correspondence of the 8 vertices of the cube under the different cameras is obtained multiple times, and the calibration matrix is then optimized by a least squares method over the multiple shots.
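Assuming the per-placement correspondences can simply be stacked (the variable names below are hypothetical), the SVD alignment sketched earlier already yields the least-squares optimum over all shots, so one simple form of this refinement is:

```python
import numpy as np

def refine_over_shots(shots_a, shots_b):
    """Least-squares calibration over multiple cube placements: stack the
    per-shot (8, 3) corner arrays from two cameras and reuse the
    rigid_transform() function from the sketch above (stacking makes the
    Kabsch solution the least-squares optimum over all shots)."""
    return rigid_transform(np.vstack(shots_a), np.vstack(shots_b))
```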
The description above covers the situation in which the system has exactly 4 cameras and each camera captures exactly one corner point (a corner region formed by three visible surfaces) in a single shot. If it cannot be guaranteed that every camera captures such a corner region in one shot, calibration can be carried out step by step: after calibration of part of the cameras is completed, the cube is rotated so that a corner point faces a camera that has not yet been calibrated, a transformation relationship between this camera and any other already-calibrated camera is established, and the transformation of this camera into the world coordinate system is obtained by transformation transfer, thereby completing the calibration.

For example, when the number of cameras in the system is not 4, or 4 corner points cannot be captured in one shot, the calibration can be completed step by step in a matrix transfer manner as follows:
in this case, it is necessary to make each camera shoot at least one corner point by rotating the calibration object, and for convenience of description, the following definitions are made: before the calibration object is rotated, a 3D camera which does not shoot any effective angular point is called as a target 3D camera; before the calibration object is rotated, the 3D camera which shoots any effective angle point is called a related 3D camera.
Rotating the calibration object to enable the target 3D camera to shoot any one first effective angular point, and enabling any one associated 3D camera except the target 3D camera to shoot any one second effective angular point; and determining a calibration matrix between the target 3D camera and the associated 3D camera according to the first effective corner point and the second effective corner point.
In a specific example, still taking the cubic calibration object as an example and with five 3D cameras, referring to fig. 11, calibration of the five cameras cannot be completed in one shot, and the target 3D camera is Cam3. First, the calibration object is placed at the position shown in fig. 11; at this time, calibration of Cam0, Cam1, Cam2 and Cam4 is completed, and, taking the camera coordinate system of Cam0 as the common coordinate system, the calibration matrices T01, T02 and T04 are obtained. The calibration object is then rotated and placed a second time, see fig. 12, and Cam3 can now capture a corner point. Following the method of the foregoing embodiment, the first effective corner point is J1; the associated 3D camera may be Cam2 with second effective corner point J2, in which case T23 is obtained and T03 can be obtained according to T03 = T02 × T23; or the associated 3D camera may be Cam4 with second effective corner point J3, in which case T43 is obtained and T03 is obtained according to T03 = T04 × T43. It should be noted that the first effective corner point and the second effective corner point may be the same corner point or different corner points.
In other cases, the calibration can likewise be completed by this matrix transfer method with a reasonable placement of the calibration object. The associated camera may be a camera at a position adjacent to the target camera, or any other camera capable of capturing a corner point, which is not limited here.
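A small sketch of the matrix transfer, under the assumption that each calibration matrix Tij is represented as a 4×4 homogeneous transform taking Cam j coordinates into Cam i coordinates:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation R (3x3) and translation t (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# If T02 maps Cam2 coordinates into Cam0 coordinates and T23 maps Cam3
# coordinates into Cam2 coordinates, their composition maps Cam3 into Cam0:
# T03 = T02 @ T23
```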
The foregoing takes a cube as an example of the calibration object; in addition, other standard geometric bodies such as a pentagonal prism or a hexagonal prism, or non-standard geometric bodies or polyhedra with known dimensional relationships, can likewise provide the constraint relationship during calibration and thus realize the calibration.
Therefore, even if the calibration of all 3D cameras cannot be completed in a single shot, the calibration object only needs to be placed a few times in the imaging space; compared with the checkerboard or calibration plate of the prior art, it does not need to be moved repeatedly during the calibration between any two cameras, which greatly reduces the number of operations.
As shown in fig. 13, based on the same inventive concept, an embodiment of the present invention provides a calibration apparatus for a plurality of 3D cameras, which includes a data acquisition module 131, a coordinate determination module 132 and a calibration module 133.
The data acquisition module 131 is configured to acquire RGB images and depth images of calibration objects acquired by the plurality of 3D cameras at different viewing angles, and extract point cloud data sets corresponding to the depth images respectively; the calibration object is in the shape of a polyhedron, the number of effective corner points of the calibration object is greater than or equal to the number of the 3D cameras, and the effective corner points are corner points formed by at least three visible surfaces of the calibration object;
a coordinate determination module 132, configured to identify, for any one of the 3D cameras, at least one marker in the RGB image acquired by the 3D camera, and determine a reference corner point included in the RGB image according to the at least one marker; determining a reference coordinate of a reference corner point under a camera coordinate system of the 3D camera based on a point cloud data set acquired by the 3D camera and the reference corner point; determining a corner point coordinate set of the 3D camera under a camera coordinate system according to the reference coordinates and the edge length of the calibration object;
the calibration module 133 is configured to determine a calibration matrix between every two 3D cameras based on the determined corner point coordinate set in the camera coordinate system corresponding to each 3D camera, so as to complete joint calibration between the 3D cameras.
In some exemplary embodiments, the coordinate determination module 132 is specifically configured to:
determining at least one marking plane of a calibration object where at least one marker is located;
determining the corner point identification included in the RGB image according to a preset relationship between corner point identifications and marking plane identifications of the markers in the calibration object;
and determining reference corner points included in the RGB image according to the corner point identification.
In some exemplary embodiments, the polyhedron is a regular polyhedron, and the coordinate determination module 132 is specifically configured to:
constructing three reference planes of a calibration object with a reference corner as a center based on a point cloud data set acquired by a 3D camera;
and determining normal vectors of the three reference planes in a camera coordinate system, and determining reference coordinates of the reference corner points in the camera coordinate system according to the normal vectors.
In some exemplary embodiments, the coordinate determination module 132 is specifically configured to:
determining direction vectors of the reference corner points on the three reference planes according to normal vectors of the three reference planes in a camera coordinate system;
for each other corner point except the reference corner point, determining a direction vector corresponding to each other corner point according to the position relation of the other corner points relative to the reference corner point and the direction vectors on the three reference planes;
determining direction parameters of other angular points according to the direction vectors corresponding to the other angular points and the edge length of the polyhedron;
and determining the coordinates of other corner points in a camera coordinate system based on the direction parameters and the coordinates of the reference corner points so as to determine a corner point coordinate set in the camera coordinate system corresponding to the 3D camera.
In some exemplary embodiments, the calibration module 133 is specifically configured to:
for any first 3D camera and any second 3D camera in each 3D camera, determining a calibration matrix between the first 3D camera and the second 3D camera according to a first center point coordinate of the first 3D camera, a second center point coordinate of the second 3D camera, a corner point coordinate set under a camera coordinate system corresponding to the first 3D camera and a corner point coordinate set under a camera coordinate system corresponding to the second 3D camera;
determining a calibration matrix between each two 3D cameras;
the first central point coordinate is obtained according to a corner point coordinate set under a camera coordinate system corresponding to the first 3D camera; the second center point coordinate is determined according to the angle point coordinate set in the camera coordinate system corresponding to the second 3D camera.
In some exemplary embodiments, the calibration module 133 is specifically configured to:
adjusting a corner point coordinate set under a camera coordinate system corresponding to the first 3D camera by using the first center coordinate to obtain an adjusted first set corresponding to the first 3D camera; adjusting the corner point coordinate set under the camera coordinate system corresponding to the second 3D camera by applying a second center coordinate to obtain an adjusted second set corresponding to the second 3D camera;
determining a covariance matrix between the first set and the second set;
extracting a U matrix and a V matrix in the covariance matrix by applying a matrix singular value decomposition mode;
determining a rotation matrix in the calibration matrix according to the U matrix and the V matrix;
and determining a translation matrix in the calibration matrix according to the rotation matrix, the first central point coordinate and the second central point coordinate.
In some exemplary embodiments, if the number of effective angle points of the calibration object is less than the number of 3D cameras, the calibration module 133 is further configured to:
rotating the calibration object to enable the target 3D camera to shoot any one first effective angular point, and enabling any one associated 3D camera except the target 3D camera to shoot any one second effective angular point; the target 3D camera is a 3D camera which does not shoot any effective corner point before the calibration object is rotated, and the associated 3D camera is a 3D camera which shoots any effective corner point before the calibration object is rotated;
and determining a calibration matrix between the target 3D camera and the associated 3D camera according to the first effective corner point and the second effective corner point.
In some exemplary embodiments, an imaging area of the calibration object in any one of the 3D cameras occupies a preset proportion of or above an imaging field of view of any one of the 3D cameras.
Since this device corresponds to the method in the embodiment of the present invention, and the principle by which the device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method, and the repeated points are not described again.
As shown in fig. 14, based on the same inventive concept, an embodiment of the present invention provides a calibration apparatus, which includes a processor 141, a memory 142, and at least one external communication interface 143, where the processor 141, the memory 142, and the external communication interface 143 are all connected by a bus 144;
an external communication interface 143 configured to receive RGB images and depth images of the calibration object respectively acquired at different viewing angles;
the memory 142 has stored therein a computer program, and the processor 141 is configured to perform the following operations based on the computer program:
acquiring RGB (red, green and blue) images and depth images of calibration objects acquired by a plurality of 3D cameras under different viewing angles respectively, and extracting point cloud data sets corresponding to the depth images respectively; the calibration object is in the shape of a polyhedron, the number of effective angular points of the calibration object is greater than or equal to that of the 3D cameras, and the effective angular points are angular points formed by at least three visible surfaces of the calibration object;
for any one 3D camera, identifying at least one marker in the RGB image acquired by the 3D camera, and determining a reference corner point included in the RGB image according to the at least one marker; determining reference coordinates of the reference corner point under the camera coordinate system of the 3D camera based on the point cloud data set acquired by the 3D camera and the reference corner point; determining a corner point coordinate set of the 3D camera in the camera coordinate system according to the reference coordinates and the edge length of the calibration object;
and determining a calibration matrix between every two 3D cameras based on the determined corner point coordinate set under the camera coordinate system corresponding to each 3D camera so as to complete the combined calibration between the 3D cameras.
In some exemplary embodiments, the processor 141 is configured to perform:
determining at least one marking plane of a calibration object where at least one marker is located;
determining the corner point identification included in the RGB image according to a preset relationship between corner point identifications and marking plane identifications of the markers in the calibration object;
and determining reference corner points included in the RGB image according to the corner point identification.
In some exemplary embodiments, the polyhedron is a regular polyhedron, and processor 141 is configured to perform:
constructing three reference planes of a calibration object with a reference corner as a center based on a point cloud data set acquired by a 3D camera;
and determining normal vectors of the three reference planes in a camera coordinate system, and determining reference coordinates of the reference corner points in the camera coordinate system according to the normal vectors.
In some exemplary embodiments, the processor 141 is configured to perform:
determining direction vectors of the reference corner points on the three reference planes according to normal vectors of the three reference planes in a camera coordinate system;
for each other corner point except the reference corner point, determining a direction vector corresponding to each other corner point according to the position relation of the other corner points relative to the reference corner point and the direction vectors on the three reference planes;
determining direction parameters of other angular points according to the direction vectors corresponding to the other angular points and the edge length of the polyhedron;
and determining the coordinates of other corner points in a camera coordinate system based on the direction parameters and the coordinates of the reference corner points so as to determine a corner point coordinate set in the camera coordinate system corresponding to the 3D camera.
In some exemplary embodiments, the processor 141 is configured to perform:
for any first 3D camera and any second 3D camera in each 3D camera, determining a calibration matrix between the first 3D camera and the second 3D camera according to a first center point coordinate of the first 3D camera, a second center point coordinate of the second 3D camera, a corner point coordinate set under a camera coordinate system corresponding to the first 3D camera and a corner point coordinate set under a camera coordinate system corresponding to the second 3D camera;
determining a calibration matrix between each two 3D cameras;
the first center point coordinate is obtained according to a corner point coordinate set under a camera coordinate system corresponding to the first 3D camera; the second center point coordinate is determined according to the angle point coordinate set in the camera coordinate system corresponding to the second 3D camera.
In some exemplary embodiments, the processor 141 is configured to perform:
adjusting a corner point coordinate set under a camera coordinate system corresponding to the first 3D camera by using the first center coordinate to obtain an adjusted first set corresponding to the first 3D camera; adjusting the corner point coordinate set under the camera coordinate system corresponding to the second 3D camera by applying a second center coordinate to obtain an adjusted second set corresponding to the second 3D camera;
determining a covariance matrix between the first set and the second set;
extracting a U matrix and a V matrix in the covariance matrix by applying a matrix singular value decomposition mode;
determining a rotation matrix in the calibration matrix according to the U matrix and the V matrix;
and determining a translation matrix in the calibration matrix according to the rotation matrix, the first central point coordinate and the second central point coordinate.
In some exemplary embodiments, if the number of effective angle points of the calibration object is less than the number of 3D cameras, the processor 141 is further configured to perform:
rotating the calibration object to enable the target 3D camera to shoot any one first effective angular point, and enabling any one associated 3D camera except the target 3D camera to shoot any one second effective angular point; the target 3D camera is a 3D camera which does not shoot any effective corner point before the calibration object is rotated, and the associated 3D camera is a 3D camera which shoots any effective corner point before the calibration object is rotated;
and determining a calibration matrix between the target 3D camera and the associated 3D camera according to the first effective corner point and the second effective corner point.
In some exemplary embodiments, an imaging area of the calibration object in any one of the 3D cameras occupies a preset proportion of or above an imaging field of view of any one of the 3D cameras.
The embodiment of the present invention further provides a computer storage medium, where computer program instructions are stored in the computer storage medium, and when the instructions are run on a computer, the computer is enabled to execute the steps of the above method for calibrating multiple 3D cameras based on polyhedral geometric constraints.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A calibration method for a plurality of 3D cameras based on polyhedral geometric constraint is characterized by comprising the following steps:
acquiring RGB (red, green and blue) images and depth images of calibration objects acquired by a plurality of 3D cameras respectively under different visual angles, and extracting point cloud data sets corresponding to the depth images respectively; the calibration object is a polyhedron, the number of effective corner points of the calibration object is greater than or equal to the number of the 3D cameras, and the effective corner points are corner points formed by at least three visible surfaces of the calibration object;
for any one 3D camera, identifying at least one marker in an RGB image acquired by the 3D camera, and determining a reference corner point included in the RGB image according to the at least one marker; determining reference coordinates of the reference corner point under a camera coordinate system of the 3D camera based on the point cloud data set acquired by the 3D camera and the reference corner point; determining a corner point coordinate set of the 3D camera under the camera coordinate system according to the reference coordinates and the edge length of the calibration object;
and determining a calibration matrix between every two 3D cameras based on the determined corner point coordinate set under the camera coordinate system corresponding to each 3D camera so as to complete the combined calibration between the 3D cameras.
2. A method as claimed in claim 1, wherein said determining reference corner points comprised in said RGB image from said at least one marker comprises:
determining at least one marking plane of the calibration object where the at least one marker is located;
determining the corner point identification included in the RGB image according to a preset relationship between corner point identifications and marking plane identifications of the markers in the calibration object;
and determining reference corner points included in the RGB image according to the corner point identification.
3. The method of claim 1, wherein the polyhedron is a regular polyhedron, and the determining of the reference coordinates of the reference corner point under the camera coordinate system of the 3D camera based on the set of point cloud data acquired by the 3D camera and the reference corner point comprises:
constructing three reference planes of the calibration object with the reference corner as the center based on the point cloud data set acquired by the 3D camera;
and determining normal vectors of the three reference planes in the camera coordinate system, and determining reference coordinates of the reference corner points in the camera coordinate system according to the normal vectors.
4. The method according to claim 3, wherein determining the set of corner point coordinates of the 3D camera in the camera coordinate system according to the reference coordinates and the edge length of the calibration object comprises:
determining direction vectors of the reference corner points on the three reference planes according to the normal vectors of the three reference planes;
for each other corner point except the reference corner point, determining a direction vector corresponding to each other corner point according to the position relation of the other corner points relative to the reference corner point and the direction vectors on the three reference planes;
determining direction parameters of the other angular points according to the direction vectors corresponding to the other angular points and the edge length of the polyhedron;
and determining the coordinates of each other corner point in the camera coordinate system based on the coordinates of each direction parameter and the reference corner point so as to determine a corner point coordinate set of the 3D camera in the camera coordinate system.
5. The method according to claim 1, wherein the determining a calibration matrix between each two 3D cameras based on the determined set of corner point coordinates in the camera coordinate system corresponding to each 3D camera comprises:
for any first 3D camera and any second 3D camera in the 3D cameras, determining a calibration matrix between the first 3D camera and the second 3D camera according to a first center point coordinate of the first 3D camera, a second center point coordinate of the second 3D camera, a corner point coordinate set in a camera coordinate system corresponding to the first 3D camera, and a corner point coordinate set in a camera coordinate system corresponding to the second 3D camera;
determining a calibration matrix between each two 3D cameras;
the first center point coordinate is obtained according to a corner point coordinate set in a camera coordinate system corresponding to the first 3D camera; and the second center point coordinate is determined according to the corner point coordinate set in the camera coordinate system corresponding to the second 3D camera.
6. The method according to claim 5, wherein determining the calibration matrix between the first 3D camera and the second 3D camera according to a first center point coordinate of the first 3D camera, a second center point coordinate of the second 3D camera, a set of corner point coordinates in a camera coordinate system corresponding to the first 3D camera, and a set of corner point coordinates in a camera coordinate system corresponding to the second 3D camera comprises:
adjusting a corner point coordinate set under a camera coordinate system corresponding to the first 3D camera by using the first center coordinate to obtain an adjusted first set corresponding to the first 3D camera; adjusting a corner point coordinate set under a camera coordinate system corresponding to the second 3D camera by using the second center coordinate to obtain an adjusted second set corresponding to the second 3D camera;
determining a covariance matrix between the first set and the second set;
extracting a U matrix and a V matrix in the covariance matrix by applying a matrix singular value decomposition mode;
determining a rotation matrix in the calibration matrix according to the U matrix and the V matrix;
and determining a translation matrix in the calibration matrix according to the rotation matrix, the first central point coordinate and the second central point coordinate.
7. The method of claim 1, wherein if the number of valid corners of the calibration object is less than the number of 3D cameras, the method further comprises:
rotating the calibration object to enable a target 3D camera to shoot any first effective angular point, and enable any associated 3D camera except the target 3D camera to shoot any second effective angular point; the target 3D camera is a 3D camera which does not shoot any effective corner point before the calibration object is rotated, and the associated 3D camera is a 3D camera which shoots any effective corner point before the calibration object is rotated;
and determining a calibration matrix between the target 3D camera and the associated 3D camera according to the first effective corner point and the second effective corner point.
8. The method according to any one of claims 1 to 7, wherein an imaging area of the calibration object in any one of the 3D cameras occupies a preset proportion of or above an imaging field of view of any one of the 3D cameras.
9. A plurality of 3D camera calibration devices based on polyhedral geometric constraint is characterized by comprising a processor, a memory and at least one external communication interface, wherein the processor, the memory and the external communication interface are all connected through a bus;
the external communication interface is configured to receive RGB images and depth images of the calibration object respectively acquired at different viewing angles;
the memory having stored therein a computer program, the processor being configured to perform the following operations based on the computer program:
acquiring RGB (red, green and blue) images and depth images of calibration objects acquired by a plurality of 3D cameras under different viewing angles respectively, and extracting point cloud data sets corresponding to the depth images respectively; the calibration object is a polyhedron, the number of effective corner points of the calibration object is greater than or equal to the number of the 3D cameras, and the effective corner points are corner points formed by at least three visible surfaces of the calibration object;
for any one 3D camera, identifying at least one marker in an RGB image acquired by the 3D camera, and determining a reference corner point included in the RGB image according to the at least one marker; determining reference coordinates of the reference corner point under a camera coordinate system of the 3D camera based on the point cloud data set acquired by the 3D camera and the reference corner point; determining a corner point coordinate set of the 3D camera under the camera coordinate system according to the reference coordinates and the edge length of the calibration object;
and determining a calibration matrix between every two 3D cameras based on the determined corner point coordinate set under the camera coordinate system corresponding to each 3D camera so as to complete the combined calibration between the 3D cameras.
10. The device of claim 9, wherein the processor is configured to perform:
determining at least one marking plane of the calibration object where the at least one marker is located;
determining the corner point identification included in the RGB image according to a preset relationship between corner point identifications and marking plane identifications of the markers in the calibration object;
and determining reference corner points included in the RGB image according to the corner point identification.
CN202111463851.6A 2021-12-03 2021-12-03 Calibration method and device for multiple 3D cameras based on polyhedral geometric constraint Active CN114549651B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111463851.6A CN114549651B (en) 2021-12-03 2021-12-03 Calibration method and device for multiple 3D cameras based on polyhedral geometric constraint

Publications (2)

Publication Number Publication Date
CN114549651A true CN114549651A (en) 2022-05-27
CN114549651B CN114549651B (en) 2024-08-02

Family

ID=81669883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111463851.6A Active CN114549651B (en) 2021-12-03 2021-12-03 Calibration method and device for multiple 3D cameras based on polyhedral geometric constraint

Country Status (1)

Country Link
CN (1) CN114549651B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111486864A (en) * 2019-01-28 2020-08-04 北京工商大学 Joint calibration method of multi-source sensor based on stereo regular octagonal structure
WO2020233443A1 (en) * 2019-05-21 2020-11-26 菜鸟智能物流控股有限公司 Method and device for performing calibration between lidar and camera
CN110823252A (en) * 2019-11-06 2020-02-21 大连理工大学 Automatic calibration method for multi-line laser radar and monocular vision
CN110842901A (en) * 2019-11-26 2020-02-28 广东技术师范大学 Robot hand-eye calibration method and device based on novel three-dimensional calibration block

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN Shijie; SONG Huansheng; ZHANG Chaoyang; ZHANG Wentao; WANG Xuan: "Automatic extrinsic calibration of RGB-D cameras based on ground plane detection in point clouds", Journal of Image and Graphics (中国图象图形学报), no. 06, 16 June 2018 (2018-06-16) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082640A (en) * 2022-08-01 2022-09-20 聚好看科技股份有限公司 Single image-based 3D face model texture reconstruction method and equipment
CN115661269A (en) * 2022-11-18 2023-01-31 深圳市智绘科技有限公司 External parameter calibration method and device for camera and laser radar and storage medium
CN115661269B (en) * 2022-11-18 2023-03-10 深圳市智绘科技有限公司 External parameter calibration method and device for camera and laser radar and storage medium
CN116205994A (en) * 2023-03-10 2023-06-02 深圳扬奇医芯智能科技有限公司 A 3D point cloud camera calibration method applied in radiotherapy room
CN117381798A (en) * 2023-12-11 2024-01-12 法奥意威(苏州)机器人系统有限公司 Hand-eye calibration method and device
CN117381798B (en) * 2023-12-11 2024-04-12 法奥意威(苏州)机器人系统有限公司 Hand-eye calibration method and device

Also Published As

Publication number Publication date
CN114549651B (en) 2024-08-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant