CN115719387A - 3D camera calibration method, point cloud image acquisition method and camera calibration system - Google Patents
- Publication number: CN115719387A (application CN202211483388.6A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/70 — Image analysis; determining position or orientation of objects or cameras
- G06T7/80 — Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The present disclosure provides a 3D camera calibration method, a point cloud image acquisition method, and a camera calibration system, relating to the technical field of industrial cameras. The 3D camera calibration method includes: acquiring measured position coordinates of a plurality of position points of a calibration plate in the camera coordinate system; determining an initial pose of the 3D camera in the displacement table coordinate system; and determining a compensation matrix for the 3D camera according to the measured position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system. With the compensation matrix determined for the camera, the position coordinates of surface position points of a target object in the camera coordinate system can be obtained more accurately, yielding a three-dimensional point cloud image of higher precision.
Description
Technical Field
The disclosure relates to the technical field of industrial cameras, in particular to a 3D camera calibration method, a point cloud image acquisition method and a camera calibration system.
Background
In recent years, machine vision technology has become widely used. For example, machine vision techniques may be used to identify objects to be detected. Before an object to be detected is identified, a three-dimensional (3D) point cloud image of the object is acquired, and the object is then identified according to that 3D point cloud image.
In the related art, the 3D point cloud image of an object to be detected is acquired as follows: the position coordinates of a plurality of position points on the surface of the object are collected in the camera coordinate system by a 3D camera with fixed parameters, and the three-dimensional point cloud image of the object is generated from those coordinates.
However, the position coordinates acquired in this manner are not accurate enough, so the generated three-dimensional point cloud image has low precision.
Disclosure of Invention
The present disclosure provides a 3D camera calibration method, a point cloud image acquisition method, and a camera calibration system to solve the problem in the related art that the position coordinates of a plurality of position points on the surface of an object to be detected, as acquired in the camera coordinate system, are not accurate enough, resulting in a low-precision three-dimensional point cloud image.
In a first aspect, the present disclosure provides a 3D camera calibration method, including:
acquiring measurement position coordinates of a plurality of position points of the calibration plate in a camera coordinate system;
determining an initial pose of the 3D camera under a displacement table coordinate system;
and determining a compensation matrix for the 3D camera according to the measurement position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
In a second aspect, the present disclosure provides a point cloud image obtaining method, including:
acquiring position coordinates of a surface position point of a target object under a camera coordinate system;
compensating the position coordinates according to a compensation matrix to obtain compensated position coordinates, wherein the compensation matrix is determined according to the measured position coordinates of a plurality of position points of the calibration plate in a camera coordinate system and the initial pose of the 3D camera in a displacement table coordinate system;
and generating a point cloud image corresponding to the target object according to the position coordinates after the compensation processing.
In a third aspect, the present disclosure provides a camera calibration system, including a 3D camera and a displacement stage;
the 3D camera is used for acquiring the measurement position coordinates of a plurality of position points of the calibration plate in a camera coordinate system; determining an initial pose of the 3D camera under a displacement table coordinate system; determining a compensation matrix for the 3D camera according to the measurement position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system;
and the displacement table is used for driving the calibration plate to move through the base station.
In a fourth aspect, the present disclosure provides a 3D camera calibration apparatus, comprising:
the acquisition module is used for acquiring the measurement position coordinates of a plurality of position points of the calibration plate in a camera coordinate system;
the first determination module is used for determining the initial pose of the 3D camera under a displacement table coordinate system;
and the second determination module is used for determining a compensation matrix for the 3D camera according to the measurement position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
In a fifth aspect, the present disclosure provides a point cloud image acquiring apparatus, including:
the acquisition module is used for acquiring the position coordinates of the surface position point of the target object in a camera coordinate system;
the processing module is used for performing compensation processing on the position coordinates according to a compensation matrix to obtain compensated position coordinates, and the compensation matrix is determined according to the measured position coordinates of a plurality of position points of the calibration plate in a camera coordinate system and the initial pose of the 3D camera in a displacement table coordinate system;
and the generating module is used for generating a point cloud image corresponding to the target object according to the position coordinates after the compensation processing.
In a sixth aspect, the present disclosure provides an electronic device comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer execution instructions;
the processor executes computer-executable instructions stored in the memory to implement the 3D camera calibration method according to the first aspect of the present disclosure or the point cloud image acquisition method according to the second aspect of the present disclosure.
In a seventh aspect, the present disclosure provides a computer-readable storage medium, in which computer program instructions are stored, and when the computer program instructions are executed by a processor, the 3D camera calibration method according to the first aspect of the present disclosure or the point cloud image acquisition method according to the second aspect of the present disclosure is implemented.
In an eighth aspect, the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements the 3D camera calibration method according to the first aspect of the present disclosure or the point cloud image acquisition method according to the second aspect of the present disclosure.
According to the 3D camera calibration method, the point cloud image acquisition method, and the camera calibration system, measured position coordinates of a plurality of position points of the calibration plate in the camera coordinate system are acquired; an initial pose of the 3D camera in the displacement table coordinate system is determined; and a compensation matrix for the 3D camera is determined from the measured position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system. Because the displacement table used in determining the compensation matrix is high-precision equipment, errors introduced by the displacement table itself need not be considered, so the compensation matrix captures the error of the 3D camera itself. Applying the compensation matrix improves the effective precision of the 3D camera: the position coordinates of surface position points of a target object in the camera coordinate system can be obtained more accurately, and a three-dimensional point cloud image of higher precision can be obtained. In addition, since those position coordinates can be corrected by the compensation matrix, the accuracy required of the initial position coordinates captured by the 3D camera is reduced, and thus the precision requirement for the 3D camera itself is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present disclosure, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a 3D camera calibration method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a 3D camera calibration method according to an embodiment of the disclosure;
fig. 3 is a flowchart of a 3D camera calibration method according to another embodiment of the disclosure;
fig. 4 is a schematic diagram of a spatial region segmentation provided in an embodiment of the present disclosure;
fig. 5 is a flow chart of point cloud image acquisition provided by an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a 3D camera calibration apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a point cloud image obtaining apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of a camera calibration system provided in an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device provided in the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In the technical scheme of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other processing of related information such as financial data or user data all comply with relevant laws and regulations and do not violate public order and good customs.
First, some technical terms related to the present disclosure are explained:
Camera parameters: in image measurement processes and machine vision applications, a geometric model of camera imaging must be established in order to determine the relationship between the three-dimensional position of a point on the surface of an object in space and its corresponding point in the image; the parameters of this geometric model are the camera parameters, which comprise the internal (intrinsic) and external (extrinsic) parameters of the camera.
Camera calibration: the process of determining the geometric model parameters (camera parameters) of camera imaging.
Calibration plate: a plate used to correct lens distortion and the like in machine vision, image measurement, photogrammetry, three-dimensional reconstruction, and other applications.
In some application scenarios, in the process of acquiring the position coordinates of the plurality of position points of the object surface in the camera coordinate system by the 3D camera, the internal parameters and the external parameters of the 3D camera directly influence the accuracy of the position coordinates of the plurality of position points of the object surface in the camera coordinate system.
In the related art, all position points of a 3D point cloud image are obtained with the same camera internal parameters. However, after the 3D camera is calibrated, the internal parameters do not generalize equally well to surface points at different depths from the camera, so a single set of internal parameters is not suitable for all position points in the whole space. If the 3D point cloud image is obtained with the same internal parameters everywhere, its accuracy inside a preset interval differs from its accuracy outside that interval; that is, the accuracy of the obtained 3D point cloud image is low. Therefore, how to determine accurate position coordinates of a plurality of position points on the surface of an object in the camera coordinate system is the technical problem addressed by the present disclosure.
Based on the above problems, the present disclosure provides a 3D camera calibration method, a point cloud image acquisition method, and a camera calibration system, in which a compensation matrix is determined according to measurement position coordinates of a plurality of position points of a calibration plate in a camera coordinate system and an initial pose of a 3D camera in a displacement table coordinate system, and position coordinates of surface position points of a target object in the camera coordinate system are compensated, so that position coordinates of the surface position points of the target object in the camera coordinate system can be more accurately obtained, and a three-dimensional point cloud image with higher precision can be obtained.
Hereinafter, an application scenario of the scheme provided by the present disclosure is first exemplified.
Fig. 1 is a schematic view of an application scenario of a 3D camera calibration method according to an embodiment of the present disclosure. As shown in fig. 1, the calibration plate is fixed at the front end of the displacement table through the base station; the displacement table can drive the calibration plate to move through the base station so as to change its position, and the 3D camera is fixed at a preset position. The position of the 3D camera admits various schemes in implementation: for example, the 3D camera may be fixed on a bracket outside the displacement table, or fixed on the displacement table itself. Specifically, the displacement table drives the calibration plate to move to a plurality of spatial positions through the base station, and the 3D camera acquires a plurality of images of the calibration plate at the different spatial positions, a plurality of first poses of the base station in the displacement table coordinate system, and a plurality of second poses of the calibration plate in the camera coordinate system; the 3D camera then determines a compensation matrix for the 3D camera from the plurality of images, the plurality of first poses, and the plurality of second poses.
It should be noted that, while the displacement table drives the calibration plate through the base station to a plurality of spatial positions, the calibration plate can be moved along the z-axis of the camera coordinate system, moved in the plane formed by the x-axis and y-axis of the camera coordinate system, rotated about the x-axis, rotated about the y-axis, and so on. The origin of the camera coordinate system is the optical center of the 3D camera, the x-axis and y-axis of the camera coordinate system are respectively parallel to the x-axis and y-axis of the calibration plate image, and the z-axis of the camera coordinate system is the optical axis of the 3D camera.
In addition, the embodiment of the disclosure can be applied to a scene of point cloud image acquisition.
It should be noted that fig. 1 is only a schematic diagram of an application scenario provided by the embodiment of the present disclosure, and the embodiment of the present disclosure does not limit the devices included in fig. 1, nor does it limit the positional relationship between the devices in fig. 1.
Hereinafter, the technical solution of the present disclosure will be described in detail by specific examples. It should be noted that the following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a flowchart of a 3D camera calibration method according to an embodiment of the disclosure. As shown in fig. 2, the method of the embodiment of the present disclosure includes:
s201, measuring position coordinates of a plurality of position points of the calibration plate in a camera coordinate system are obtained.
Exemplarily, referring to fig. 1, when the displacement table drives the calibration plate through the base station to a certain spatial position, a calibration plate image can be captured directly by the 3D camera, and the measured position coordinates of a plurality of position points of the calibration plate in the camera coordinate system are then obtained from that image.
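The patent does not specify how camera-frame coordinates are extracted from the calibration plate image. As one common approach (an assumption, not the patent's stated method), detected plate features (u, v) with measured depth z can be back-projected through a pinhole model; all intrinsics below are hypothetical values:

```python
import numpy as np

def back_project(pixels, depths, fx, fy, cx, cy):
    """Back-project pixel detections (u, v) with depth z into the camera
    coordinate system using a pinhole model (illustrative intrinsics only)."""
    u, v = pixels[:, 0], pixels[:, 1]
    x = (u - cx) * depths / fx
    y = (v - cy) * depths / fy
    return np.column_stack([x, y, depths])

# A feature imaged exactly at the principal point lies on the optical axis,
# so its camera-frame coordinates are (0, 0, z).
pts = back_project(np.array([[320.0, 240.0]]), np.array([0.5]),
                   fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Any errors in the assumed intrinsics here propagate directly into these coordinates, which is exactly the error the patent's compensation matrix is meant to absorb.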
S202, determining an initial pose of the 3D camera in a displacement table coordinate system.
In this step, referring to fig. 1, for example, the poses of the base station in the displacement table coordinate system and the poses of the position points on the calibration plate in the camera coordinate system may be collected multiple times, so as to determine the initial pose of the 3D camera in the displacement table coordinate system. How this initial pose is specifically determined is described in subsequent embodiments and is not repeated here.
S203, determining a compensation matrix for the 3D camera according to the measurement position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
In this step, after the measured position coordinates of the plurality of position points of the calibration plate in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system are obtained, a compensation matrix for the 3D camera can be determined from them. It can be understood that the compensation matrix compensates for the internal parameters of the 3D camera: those parameters affect the accuracy of the position coordinates acquired by the camera, and the error caused by deviations in them can be corrected through the compensation matrix. For example, the plurality of position points of the calibration plate may be divided into a plurality of groups according to different spatial regions in the camera coordinate system, and the compensation matrix may be determined from the measured position coordinates of each group of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system. Dividing the position points by spatial region in the camera coordinate system gives different spatial depths and regions their own corresponding compensation matrices, which improves the accuracy of the position point coordinates in the camera coordinate system. How the compensation matrix is specifically determined is described in subsequent embodiments and is not repeated here.
After the compensation matrix for the 3D camera is determined, the position coordinates of the position points in the camera coordinate system can be compensated according to the compensation matrix, so that more accurate position coordinates can be obtained, and further a point cloud image with higher precision can be obtained.
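How the compensation is applied is not spelled out in the patent; a minimal sketch, assuming the compensation matrix takes the common form of a 4x4 homogeneous transform acting on camera-frame points, is:

```python
import numpy as np

def compensate(points, M):
    """Apply a 4x4 compensation matrix M to an (n, 3) array of camera-frame
    points via homogeneous coordinates (the 4x4 form is an assumption,
    not taken from the patent text)."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    out = (M @ homog.T).T
    return out[:, :3] / out[:, 3:4]

# Sanity check: an identity compensation matrix leaves coordinates unchanged.
p = np.array([[0.1, 0.2, 0.3]])
same = compensate(p, np.eye(4))
```

In the partitioned scheme described later, each (layer, partition) group would have its own M, and a point would be corrected by the matrix of the group it falls into.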
According to the 3D camera calibration method provided by the embodiment of the present disclosure, measured position coordinates of a plurality of position points of the calibration plate in the camera coordinate system are obtained; an initial pose of the 3D camera in the displacement table coordinate system is determined; and a compensation matrix for the 3D camera is determined from the measured position coordinates and the initial pose. Because the displacement table is high-precision equipment, the error introduced by the displacement table itself need not be considered, so the compensation matrix reflects the error of the 3D camera itself; applying it improves the precision of the 3D camera, that is, the position coordinates of surface position points of a target object in the camera coordinate system can be obtained more accurately, and a three-dimensional point cloud image of higher precision can then be obtained. In addition, since those position coordinates can be corrected by the compensation matrix, the accuracy required of the initial position coordinates captured by the 3D camera is reduced, and thus the precision requirement for the 3D camera is reduced.
Fig. 3 is a flowchart of a 3D camera calibration method according to another embodiment of the disclosure. On the basis of the above embodiments, the embodiments of the present disclosure further explain a 3D camera calibration method. As shown in fig. 3, a method of an embodiment of the present disclosure may include:
in the embodiment of the present disclosure, the step S201 in fig. 2 may further include the following two steps S301 and S302:
s301, responding to the fact that the calibration plate moves to a plurality of space positions, and acquiring a plurality of calibration plate images.
Illustratively, the displacement table drives the calibration plate through the base station and adjusts its position so that the calibration plate moves to a plurality of spatial positions. It can be understood that, because the internal parameters of the 3D camera do not generalize equally well across the whole spatial region, the calibration plate should be moved through multiple positions, so that the 3D camera can acquire measured position coordinates of a plurality of position points at different spatial positions in the camera coordinate system. In this step, in response to the calibration plate moving to the plurality of spatial positions, the 3D camera directly captures a plurality of calibration plate images.
S302, obtaining, according to the plurality of calibration plate images, the measured position coordinates of a plurality of position points of the calibration plate in the camera coordinate system.
In this step, after obtaining the plurality of calibration plate images, the measurement position coordinates of the plurality of position points of the calibration plate in the camera coordinate system may be obtained according to the plurality of calibration plate images.
In the embodiment of the present disclosure, the step S202 in fig. 2 may further include the following two steps S303 and S304:
s303, in the process that the displacement table drives the calibration plate to move through the base table, first poses of the base tables under a coordinate system of the displacement table and second poses of the calibration plate under a coordinate system of the camera are obtained.
In this step, the base station is, for example, a shaft of the displacement table. It can be understood that, while the displacement table drives the calibration plate through the base station to a plurality of spatial positions, each time the calibration plate moves to a spatial position, a first pose of the base station in the displacement table coordinate system and a second pose of the calibration plate in the camera coordinate system can be obtained. The first pose of the base station in the displacement table coordinate system is a known quantity and can be measured by sensors on the displacement table; the second pose of the calibration plate in the camera coordinate system can be acquired by the 3D camera.
S304, determining the initial pose of the 3D camera in the displacement table coordinate system according to any one group of a first pose and a second pose.
For example, after obtaining the plurality of first poses of the base station in the displacement table coordinate system and the plurality of second poses of the calibration plate in the camera coordinate system, any one group of a first pose and a second pose can be selected, and the initial pose of the 3D camera in the displacement table coordinate system is obtained according to the following formula one (the original symbols were typeset as images; the notation below is reconstructed from the surrounding definitions):
T_stage_cam = T_stage_base · T_base_plate · (T_cam_plate)^(−1)
where T_stage_cam denotes the initial pose of the 3D camera in the displacement table coordinate system; T_stage_base denotes the pose of the base station in the displacement table coordinate system; T_base_plate denotes the pose of the calibration plate relative to the base station, which is treated as a known quantity and given an initial value when the initial pose of the 3D camera is obtained; and T_cam_plate denotes the pose of the calibration plate relative to the camera coordinate system.
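The relationship described above (recovering the camera pose in the displacement table frame from one group of a base-station pose and a calibration-plate pose) can be checked numerically. The notation and the synthetic poses below are hypothetical, introduced only for illustration:

```python
import numpy as np

def make_pose(rz, t):
    """Build a 4x4 homogeneous pose: rotation rz about the z-axis plus
    translation t (a convenient way to generate test poses)."""
    c, s = np.cos(rz), np.sin(rz)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = t
    return T

# Ground-truth camera pose in the displacement table frame (to be recovered).
T_stage_cam = make_pose(0.3, [1.0, 0.0, 0.5])
# First pose: base station in the displacement table frame (from table sensors).
T_stage_base = make_pose(0.1, [0.2, 0.1, 0.0])
# Plate relative to base station (treated as known with an initial value).
T_base_plate = make_pose(-0.2, [0.0, 0.3, 0.1])
# Second pose: what the camera would observe the plate at, given the above.
T_cam_plate = np.linalg.inv(T_stage_cam) @ T_stage_base @ T_base_plate
# Formula one: camera pose from one (first pose, second pose) group.
T_recovered = T_stage_base @ T_base_plate @ np.linalg.inv(T_cam_plate)
```

The recovered matrix matches the ground truth exactly here because the synthetic data is noise-free; with real measurements, different groups would give slightly different estimates.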
In the embodiment of the present disclosure, the step S203 in fig. 2 may further include the following two steps S305 and S306:
and S305, dividing a plurality of position points of the calibration plate into a plurality of groups of position points according to different space areas in a camera coordinate system.
It can be understood that dividing the position points by spatial region in the camera coordinate system gives different spatial depths and regions their own corresponding compensation matrices, which improves the accuracy of the position point coordinates in the camera coordinate system. For example, the plurality of position points may be divided according to different segmented regions of space in the camera coordinate system: specifically, the space may be divided into a plurality of layers by height, and each layer then divided into a plurality of partitions by set regions, so as to obtain the plurality of groups of position points.
Further, optionally, dividing the plurality of position points into a plurality of groups of position points according to the spatial regions in the camera coordinate system includes: dividing the space into a plurality of layers by height in the camera coordinate system, each layer being divided into a plurality of partitions; and, according to the position coordinates of the plurality of position points in the camera coordinate system, grouping the position points that fall within the same partition of the same layer into one group.
For example, fig. 4 is a schematic diagram of the spatial region partitioning principle provided by an embodiment of the present disclosure. As shown in fig. 4, the plurality of position points are divided according to different partitioned regions of the spatial region in the camera coordinate system (i.e., the region indicated by 401 in fig. 4): the points are divided into a plurality of layers by height, and each layer is then divided into a plurality of partitions according to a set region, yielding a plurality of groups of position points.
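The layer-and-partition grouping can be sketched as follows. This is a minimal sketch, not the patent's code: the layer height and x-y cell size are assumed values, and the points are assumed to be expressed in the camera frame:

```python
import numpy as np
from collections import defaultdict

def group_points(points, layer_height=50.0, cell=100.0):
    """Group camera-frame 3D points by (layer, partition).

    The layer index comes from the z coordinate; the partition index is a
    cell of a regular x-y grid. Both step sizes are assumed values.
    """
    groups = defaultdict(list)
    for p in points:
        layer = int(p[2] // layer_height)
        part = (int(p[0] // cell), int(p[1] // cell))
        groups[(layer, part)].append(p)
    return groups

pts = np.array([[10.0, 20.0, 30.0],     # layer 0, cell (0, 0)
                [15.0, 25.0, 40.0],     # layer 0, cell (0, 0)
                [210.0, 20.0, 120.0]])  # layer 2, cell (2, 0)
groups = group_points(pts)
print(sorted(groups.keys()))  # -> [(0, (0, 0)), (2, (2, 0))]
```

Each `(layer, partition)` key then receives its own compensation matrix, as described above.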
Optionally, the pose of the calibration plate relative to the base station may be determined from the initial pose of the 3D camera in the displacement table coordinate system, the measured position coordinates of the position points in the camera coordinate system, and the pose of the base station in the displacement table coordinate system. Further, the theoretical position coordinates of the plurality of position points on the calibration plate in the camera coordinate system may be determined from the position coordinates of the position points in the calibration plate coordinate system, the pose of the calibration plate relative to the base station, and the pose of the base station in the displacement table coordinate system. The matrix formed by the theoretical position coordinates of the plurality of position points on the calibration plate in the camera coordinate system, i.e., the theoretical coordinates corresponding to the n position points, is denoted P_theory below.
s306, determining a compensation matrix for the 3D camera according to the measurement position coordinates of each group of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
In this step, after the plurality of position points of the calibration plate are divided into the plurality of groups of position points, a compensation matrix for the 3D camera can be determined according to the measurement position coordinates of each group of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
Further, optionally, determining the compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose includes: determining initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose of the 3D camera in the displacement table coordinate system; fitting the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine an initial compensation matrix; acquiring a plurality of adjusted poses of the 3D camera in the displacement table coordinate system, and determining the theoretical position coordinates of the plurality of groups of position points after each adjustment of the 3D camera; and adjusting the initial compensation matrix according to the current pose returned after each adjustment of the 3D camera in the displacement table coordinate system, the measured position coordinates of the plurality of groups of position points in the camera coordinate system, and the current theoretical position coordinates after the adjustment, until the Euclidean distance of the error between the measured position coordinates and the current theoretical position coordinates of each group of position points in the camera coordinate system is smaller than a preset threshold and/or the number of adjustments reaches a preset number, and taking the current compensation matrix as the compensation matrix.
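The adjust-and-refit loop described above can be sketched as below. This is a minimal sketch with hypothetical stand-in helpers (`theory_fn`, `fit_fn`, `adjust_fn` are not names from the patent); the stopping rule follows the text: error below a preset threshold and/or a preset number of adjustments.

```python
import numpy as np

def refine_compensation(meas, theory_fn, fit_fn, adjust_fn, pose0,
                        threshold=0.05, max_adjust=10):
    """Iteratively adjust the camera pose and refit the compensation.

    meas      : (n, 3) measured coordinates in the camera frame
    theory_fn : pose -> (n, 3) theoretical coordinates for that pose
    fit_fn    : (meas, theory) -> compensation estimate
    adjust_fn : pose -> next camera pose in the stage frame
    Stops when the per-point Euclidean error is below `threshold`
    and/or after `max_adjust` adjustments.
    """
    pose, comp = pose0, None
    for _ in range(max_adjust):
        theory = theory_fn(pose)
        comp = fit_fn(meas, theory)
        err = np.linalg.norm(meas - theory, axis=-1).max()
        if err < threshold:
            break
        pose = adjust_fn(pose)
    return comp

# Toy stand-ins: the "pose" is a scalar offset that adjust_fn shrinks.
meas = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
comp = refine_compensation(
    meas,
    theory_fn=lambda pose: meas + pose,
    fit_fn=lambda m, t: t - m,          # stand-in "compensation"
    adjust_fn=lambda pose: pose / 2.0,
    pose0=1.0)
```

In the patent, `fit_fn` would be the least-squares fit of formula two and `adjust_fn` the physical re-positioning of the camera on the displacement table.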
Illustratively, building on the above examples, there are various ways to determine the initial theoretical position coordinates of each group of position points in the camera coordinate system from the initial pose of the 3D camera in the displacement table coordinate system. For example, an intermediate coordinate frame may be used to carry the position points on the calibration plate into the camera coordinate system. Determining the initial theoretical position coordinates of each group of position points in the camera coordinate system from the initial pose of the 3D camera in the displacement table coordinate system may further include: acquiring the pose of the base station in the displacement table coordinate system and the pose of the calibration plate in the camera coordinate system corresponding to each group of position points; determining the pose of the calibration plate relative to the base station from the initial pose of the 3D camera in the displacement table coordinate system, the pose of the base station in the displacement table coordinate system, and the pose of the calibration plate in the camera coordinate system; determining the position coordinates of each group of position points in the displacement table coordinate system from the position coordinates of each group of position points in the calibration plate coordinate system, the pose of the calibration plate relative to the base station, and the pose of the base station in the displacement table coordinate system; and determining the initial theoretical position coordinates of each group of position points in the camera coordinate system from the position coordinates of each group of position points in the displacement table coordinate system and the initial pose.
In this embodiment, after the initial theoretical position coordinates of each group of position points are determined, the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system may be fitted to determine the initial compensation matrix for the 3D camera. Various implementations are possible; for example, the fitting may be performed by the least squares method. Specifically, the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system may be fitted according to the following formula two, determining the initial compensation matrix for the 3D camera:

e = P_theory − M · P_meas    (formula two)
wherein e represents the error of the position coordinates; P_theory represents the theoretical position coordinates; P_meas represents the measured position coordinates, which can be captured directly by the 3D camera; and M represents the compensation matrix.
In a specific implementation, the pose of the 3D camera in the displacement table coordinate system is adjusted multiple times, and the compensation matrix M for the 3D camera is obtained from formula two and the measured and theoretical position coordinates corresponding to each of the plurality of groups of position points, iterating until the Euclidean distance of the error between the measured position coordinates and the current theoretical position coordinates of each group of position points in the camera coordinate system is smaller than a preset threshold and/or the number of adjustments reaches a preset number. The compensation matrix includes a compensation amount for each partition in each layer.
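The least-squares fitting step can be sketched as below. This is a minimal sketch under assumptions the patent does not state: the compensation matrix is taken to be a 4×4 transform acting on homogeneous coordinates (consistent with formula three), solved with an ordinary least-squares fit; the simulated error model is made up.

```python
import numpy as np

def fit_compensation(meas, theory):
    """Least-squares M such that M @ meas_h ≈ theory_h (homogeneous coords).

    meas, theory: (n, 3) measured and theoretical points in the camera frame.
    """
    n = meas.shape[0]
    meas_h = np.hstack([meas, np.ones((n, 1))])      # (n, 4)
    theory_h = np.hstack([theory, np.ones((n, 1))])  # (n, 4)
    # lstsq solves meas_h @ X = theory_h, so M = X.T satisfies M @ p ≈ t.
    X, *_ = np.linalg.lstsq(meas_h, theory_h, rcond=None)
    return X.T

rng = np.random.default_rng(0)
theory = rng.uniform(0, 100, (20, 3))
meas = theory * 1.01 + np.array([0.5, -0.3, 0.2])  # simulated affine error
M = fit_compensation(meas, theory)
```

Because the simulated error is affine, the fitted M recovers it almost exactly; real depth-dependent errors are what motivates fitting a separate M per layer and partition.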
According to the 3D camera calibration method provided by the embodiment of the present disclosure: a plurality of calibration plate images are acquired in response to the calibration plate moving to a plurality of spatial positions; measured position coordinates of a plurality of position points of the calibration plate in the camera coordinate system are obtained from the plurality of calibration plate images; in the process in which the displacement table drives the calibration plate to move through the base station, first poses of the base station in the displacement table coordinate system and second poses of the calibration plate in the camera coordinate system are acquired; the initial pose of the 3D camera in the displacement table coordinate system is determined from any one group of the first and second poses; the plurality of position points of the calibration plate are divided into a plurality of groups of position points according to different spatial regions in the camera coordinate system; and the compensation matrix for the 3D camera is determined from the measured position coordinates of each group of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
In the embodiment of the present disclosure, the displacement table is used when determining the compensation matrix for the 3D camera. Because the displacement table is a high-precision device, errors introduced by the displacement table itself need not be considered, so the determined compensation matrix reflects the 3D camera itself, and the precision of the 3D camera can be improved according to the compensation matrix. Moreover, by dividing the plurality of position points of the calibration plate according to spatial regions in the camera coordinate system, different spatial depths and regions each have a corresponding compensation matrix, which effectively improves the accuracy of the position point coordinates in the camera coordinate system; the position coordinates of the surface position points of a target object in the camera coordinate system can thus be obtained more accurately according to the compensation matrix, and a higher-precision three-dimensional point cloud image can then be obtained.
On the basis of the above embodiments, fig. 5 is a flowchart of point cloud image acquisition according to an embodiment of the present disclosure. As shown in fig. 5, the method of the embodiment of the present disclosure includes:
s501, acquiring position coordinates of the surface position point of the target object in a camera coordinate system.
For example, an image of the target object may be directly captured by a 3D camera, and then position coordinates of the surface position point of the target object in the camera coordinate system may be obtained according to the image of the target object.
S502, compensating the position coordinates according to a compensation matrix to obtain the position coordinates after compensation, wherein the compensation matrix is determined according to the measurement position coordinates of a plurality of position points of the calibration plate in a camera coordinate system and the initial pose of the 3D camera in a displacement table coordinate system.
Illustratively, the compensation process may be performed on the position coordinates according to the following formula three:
M · A = A′    (formula three)
Wherein M represents a compensation matrix; a represents the position coordinates of the surface position point of the target object in the camera coordinate system; a' represents the position coordinates of the surface position point of the target object after the compensation processing in the camera coordinate system. Further, a' may be normalized to obtain the position coordinates after the compensation process.
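Applying formula three and the subsequent normalization can be sketched as below; a minimal sketch assuming homogeneous coordinates, where normalization divides by the fourth component of A′ (the example matrix is made up):

```python
import numpy as np

def compensate(M, point):
    """Apply compensation matrix M to a 3D point and normalize.

    point: (x, y, z) in the camera frame; M: 4x4 compensation matrix.
    """
    a = np.append(np.asarray(point, dtype=float), 1.0)  # homogeneous A
    a_prime = M @ a                                     # A' = M . A
    return a_prime[:3] / a_prime[3]                     # normalize

# Assumed example: a small shift-and-scale compensation.
M = np.eye(4)
M[0, 3] = 0.5    # shift x by 0.5
M[2, 2] = 1.01   # scale z by 1%
print(compensate(M, [10.0, 20.0, 100.0]))  # -> [10.5 20. 101.]
```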
Further, optionally, performing compensation processing on the position coordinate according to the compensation matrix to obtain the position coordinate after the compensation processing, including: determining the layering and the partition of the surface position point according to the position coordinate of the surface position point of the target object in a camera coordinate system and the internal parameters of the 3D camera; and according to the compensation matrix corresponding to the layering and the partition where the surface position point is located, performing compensation processing on the position coordinate of the surface position point in the camera coordinate system to obtain the position coordinate after the compensation processing.
For example, the z-axis, x-axis, and y-axis coordinates of the position coordinates of the surface position point of the target object in the camera coordinate system may be extracted respectively to determine the layer and the partition in which the surface position point is located. Further, optionally, determining the layer and the partition in which the surface position point is located according to the position coordinates of the surface position point of the target object in the camera coordinate system and the internal parameters of the 3D camera may include: determining the layer of the surface position point according to its z-axis coordinate in the camera coordinate system; determining the pixel coordinates corresponding to the surface position point according to its x-axis and y-axis coordinates in the camera coordinate system and the internal parameters of the 3D camera; and determining the partition in which the surface position point is located according to the pixel coordinates.
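These steps can be sketched as below. This is a minimal sketch under assumptions: a standard pinhole projection (u = fx·x/z + cx, v = fy·y/z + cy) for the intrinsics, a fixed layer height, and partitions as square pixel tiles; all numeric values are made up.

```python
import numpy as np

def locate(point, fx, fy, cx, cy, layer_height=50.0, tile=160):
    """Find the (layer, partition) of a camera-frame point.

    The layer comes from the z coordinate; the partition comes from the
    pixel the point projects to under a pinhole model with intrinsics
    (fx, fy, cx, cy). Layer height and tile size are assumed values.
    """
    x, y, z = point
    layer = int(z // layer_height)
    u = fx * x / z + cx            # pixel coordinates from intrinsics
    v = fy * y / z + cy
    partition = (int(u // tile), int(v // tile))
    return layer, partition

layer, part = locate((0.1, 0.05, 100.0),
                     fx=1000.0, fy=1000.0, cx=640.0, cy=480.0)
print(layer, part)  # -> 2 (4, 3)
```

The returned `(layer, partition)` pair selects which compensation matrix formula three is applied with.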
Because different layers and partitions correspond to different compensation matrices, the position coordinates of the surface position point in the camera coordinate system can be compensated via formula three using the compensation matrix corresponding to the layer and partition in which the surface position point is located, obtaining the compensated position coordinates.
In practice, a surface position point of the target object may lie between layers or partitions rather than belonging entirely to any single layer or partition. To improve the compensation accuracy for such cross-layer or cross-partition surface position points, compensating the position coordinates of the surface position point of the target object in the camera coordinate system may further include the following steps: when the surface position point spans layers and/or partitions, determining a weight for each spanned layer and/or partition according to the distance between the surface position point and that layer and/or partition; compensating the position coordinates of the surface position point in the camera coordinate system according to the compensation matrix of each spanned layer and/or partition respectively, to obtain a plurality of compensated position coordinates of the surface position point; and performing a weighted summation of the plurality of compensated position coordinates according to the weights, taking the weighted sum as the compensated position coordinates.
Exemplarily, when the surface position point belongs to different partitions of the same hierarchy, the compensation results of the surface position point are weighted and summed according to the distance between the surface position point and the different partitions and the weight corresponding to the distance between the surface position point and the different partitions, so as to obtain the compensation position coordinate after weighted summation corresponding to the surface position point. And under the condition that the surface position points belong to different layers, carrying out weighted summation on the compensation results of the surface position points according to the distances between the surface position points and the different layers and the weights corresponding to the distances between the surface position points and the different layers to obtain the compensation position coordinates corresponding to the surface position points after weighted summation. And under the condition that the surface position point belongs to different layers and different partitions, carrying out weighted summation on the compensation result of the surface position point according to the distance between the surface position point and the different layers and the distance between the surface position point and the different partitions to obtain a compensation position coordinate corresponding to the surface position point after weighted summation. Therefore, by performing weighted summation of the compensation results of the surface position points located in the boundary region, the accuracy of the obtained coordinate positions of the boundary surface position points can be improved.
And S503, generating a point cloud image corresponding to the target object according to the position coordinates after the compensation processing.
In this step, after the position coordinates after the compensation processing are obtained, a point cloud image corresponding to the target object with higher precision can be generated according to the position coordinates after the compensation processing.
According to the point cloud image acquisition method provided by the embodiment of the disclosure, the position coordinates of the surface position point of the target object under a camera coordinate system are acquired; compensating the position coordinates according to a compensation matrix to obtain compensated position coordinates, wherein the compensation matrix is determined according to the measured position coordinates of a plurality of position points of the calibration plate in a camera coordinate system and the initial pose of the 3D camera in a displacement table coordinate system; and generating a point cloud image corresponding to the target object according to the position coordinates after the compensation processing. In the embodiment of the disclosure, the position coordinates are compensated according to the compensation matrix, wherein the compensation matrix is determined according to the measurement position coordinates of the plurality of position points of the calibration plate in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system, and the displacement table is a device with higher precision, and an error caused by the displacement table itself does not need to be considered, so that the compensation matrix for the 3D camera itself can be determined, the precision of the 3D camera is improved according to the compensation matrix, that is, the position coordinates of the surface position point of the target object in the camera coordinate system can be more accurately obtained according to the compensation matrix, and further, a three-dimensional point cloud image with higher precision can be obtained.
On the basis of the above embodiment, optionally, the compensation matrix in the point cloud image obtaining method is obtained by the 3D camera calibration method in any one of the above method embodiments.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 6 is a schematic structural diagram of a 3D camera calibration apparatus according to an embodiment of the disclosure, and as shown in fig. 6, the 3D camera calibration apparatus 600 according to the embodiment of the disclosure includes: an acquisition module 601, a first determination module 602, and a second determination module 603. Wherein:
the obtaining module 601 is configured to obtain measurement position coordinates of a plurality of position points of the calibration board in a camera coordinate system.
A first determining module 602, configured to determine an initial pose of the 3D camera in a displacement table coordinate system.
A second determining module 603, configured to determine a compensation matrix for the 3D camera according to the measured position coordinates of the multiple position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
In some embodiments, the second determining module 603 may be specifically configured to: dividing a plurality of position points into a plurality of groups of position points according to the difference of space areas under a camera coordinate system; and determining a compensation matrix for the 3D camera according to the measurement position coordinates and the initial pose of each group of position points in the camera coordinate system.
Optionally, when the second determining module 603 is configured to divide the plurality of position points into a plurality of groups of position points according to different spatial regions in the camera coordinate system, the second determining module may be specifically configured to: dividing a space into a plurality of layers according to different heights under a camera coordinate system, wherein a plurality of partitions are divided in each layer; and dividing the position points in the same partition in the same layer into a group of position points according to the position coordinates of the plurality of position points in the camera coordinate system.
Optionally, when the second determining module 603 is configured to determine the compensation matrix for the 3D camera according to the measured position coordinates and the initial pose of each group of position points in the camera coordinate system, it may specifically be configured to: determine initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose; fit the measured position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine an initial compensation matrix; acquire a plurality of adjusted poses of the 3D camera in the displacement table coordinate system, and determine the theoretical position coordinates of the plurality of groups of position points after each adjustment of the 3D camera; and adjust the initial compensation matrix according to the current pose returned after each adjustment of the 3D camera in the displacement table coordinate system, the measured position coordinates of the plurality of groups of position points in the camera coordinate system, and the current theoretical position coordinates after the adjustment, until the Euclidean distance of the error between the measured position coordinates and the current theoretical position coordinates of each group of position points in the camera coordinate system is smaller than a preset threshold and/or the number of adjustments reaches a preset number, and take the current compensation matrix as the compensation matrix.
Optionally, when the second determining module 603 is configured to determine the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose, the second determining module may be specifically configured to: acquiring the poses of the base station corresponding to the position points in each group in a displacement table coordinate system and the poses of the calibration plate in a camera coordinate system; determining the pose of the calibration plate relative to the base station according to the initial pose of the 3D camera in the coordinate system of the displacement table, the pose of the base station in the coordinate system of the displacement table and the pose of the calibration plate in the coordinate system of the camera; determining the position coordinates of each group of position points under the coordinate system of the displacement table according to the position coordinates of each group of position points under the coordinate system of the calibration plate, the pose of the calibration plate relative to the base station and the pose of the base station under the coordinate system of the displacement table; and determining the initial theoretical position coordinates of the position points of each group in the camera coordinate system according to the position coordinates and the initial pose of the position points of each group in the displacement table coordinate system.
In some embodiments, the obtaining module 601 may be specifically configured to: acquiring a plurality of calibration plate images in response to the calibration plate moving to a plurality of spatial positions; and acquiring the measurement position coordinates of the plurality of position points in the camera coordinate system according to the plurality of calibration plate images.
In some embodiments, the first determining module 602 may be specifically configured to: in the process in which the displacement table drives the calibration plate to move through the base station, acquire first poses of the base station in the displacement table coordinate system and second poses of the calibration plate in the camera coordinate system; and determine the initial pose of the 3D camera in the displacement table coordinate system according to any one group of the first and second poses.
The apparatus of this embodiment may be configured to execute the technical solution of the 3D camera calibration method in any of the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 7 is a schematic structural diagram of a point cloud image obtaining apparatus according to an embodiment of the present disclosure, and as shown in fig. 7, a point cloud image obtaining apparatus 700 according to an embodiment of the present disclosure includes: an obtaining module 701, a processing module 702 and a generating module 703. Wherein:
an obtaining module 701, configured to obtain position coordinates of a surface position point of the target object in a camera coordinate system.
The processing module 702 is configured to perform compensation processing on the position coordinates according to a compensation matrix to obtain position coordinates after the compensation processing, where the compensation matrix is determined according to the measured position coordinates of the plurality of position points of the calibration plate in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
The generating module 703 is configured to generate a point cloud image corresponding to the target object according to the compensated position coordinate.
Optionally, the compensation matrix is obtained by the 3D camera calibration method in any one of the above method embodiments.
In some embodiments, the processing module 702 may be specifically configured to: determining the layering and the partition of the surface position point according to the position coordinate of the surface position point of the target object in a camera coordinate system and the internal parameters of the 3D camera; and according to the compensation matrix corresponding to the layering and the partition where the surface position point is located, performing compensation processing on the position coordinate of the surface position point in the camera coordinate system to obtain the position coordinate after the compensation processing.
Optionally, when the processing module 702 is configured to determine the layer and the partition where the surface position point is located according to the position coordinate of the surface position point of the target object in the camera coordinate system and the internal reference of the 3D camera, the processing module may be specifically configured to: determining the layering of the surface position points according to the z-axis coordinates of the surface position points in a camera coordinate system; determining pixel coordinates corresponding to the surface position points according to the x-axis coordinates and the y-axis coordinates of the surface position points in a camera coordinate system and internal parameters of the 3D camera; and determining the partition where the surface position point is located according to the pixel coordinates.
Optionally, when the processing module 702 is configured to perform compensation processing on the position coordinates of the surface position point in the camera coordinate system according to the compensation matrix corresponding to the layering and the partitioning where the surface position point is located, to obtain the position coordinates after the compensation processing, the processing module may specifically be configured to: determining a weight for each cross-hierarchy and/or partition based on a distance between the surface location point and the cross-hierarchy and/or partition when the surface location point is in the cross-hierarchy and/or cross-partition; respectively compensating the position coordinates of the surface position points under the camera coordinate system according to the compensation matrixes of the cross-layers and/or the partitions to obtain a plurality of compensation position coordinates of the surface position points; and carrying out weighted summation on a plurality of compensation position coordinates of the surface position point according to the weight of each cross-layering and/or partitioning, and obtaining the position coordinate after the compensation processing as the compensation position coordinate after the weighted summation processing.
The apparatus of this embodiment may be configured to implement the technical solution of the point cloud image obtaining method in any one of the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, fig. 8 is a schematic diagram of a camera calibration system according to an embodiment of the present disclosure. As shown in fig. 8, a camera calibration system 800 according to an embodiment of the present disclosure includes: a 3D camera 801 and a displacement table 802.
The 3D camera 801 is used for acquiring measurement position coordinates of a plurality of position points of the calibration plate in a camera coordinate system; determining an initial pose of the 3D camera under a displacement table coordinate system; and determining a compensation matrix for the 3D camera according to the measurement position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
And the displacement table 802 is used for driving the calibration plate to move through the base station.
Optionally, the 3D camera 801 may be configured to execute a scheme of the 3D camera calibration method in any one of the above method embodiments, and correspondingly, a structure of the apparatus embodiment in fig. 6 may be adopted, which has similar implementation principles and technical effects, and is not described herein again.
Fig. 9 is a schematic structural diagram of an electronic device according to the present disclosure. As shown in fig. 9, the electronic device 900 may include: at least one processor 901 and memory 902.
The memory 902 is configured to store a program; specifically, the program may include program code comprising computer-executable instructions.
The memory 902 may include a random access memory (RAM), and may further include a non-volatile memory, such as at least one magnetic disk memory.
The processor 901 is configured to execute the computer-executable instructions stored in the memory 902 to implement the 3D camera calibration method or the point cloud image acquisition method described in the foregoing method embodiments. The processor 901 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present disclosure. Specifically, when the 3D camera calibration method or the point cloud image acquisition method described in the foregoing method embodiments is implemented, the electronic device may be a device with a processing function, such as a 3D camera.
Optionally, the electronic device 900 may further include a communication interface 903. In a specific implementation, if the communication interface 903, the memory 902, and the processor 901 are implemented independently, they may be connected to one another through a bus and communicate with one another. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on; however, this does not mean that there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the communication interface 903, the memory 902, and the processor 901 are integrated into a chip, the communication interface 903, the memory 902, and the processor 901 may complete communication through an internal interface.
The present disclosure also provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the scheme of the 3D camera calibration method or the scheme of the point cloud image acquisition method is implemented.
The present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the scheme of the 3D camera calibration method or the scheme of the point cloud image acquisition method as above.
The computer-readable storage medium may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, or a magnetic or optical disk. A readable storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in a 3D camera calibration apparatus or in a point cloud image acquisition apparatus.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.
Claims (18)
1. A 3D camera calibration method, characterized by comprising the following steps:
acquiring measurement position coordinates of a plurality of position points of the calibration plate in a camera coordinate system;
determining an initial pose of the 3D camera under a displacement table coordinate system;
and determining a compensation matrix for the 3D camera according to the measurement position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
2. The 3D camera calibration method according to claim 1, wherein the determining a compensation matrix for the 3D camera according to the measured position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the translation stage coordinate system comprises:
dividing the plurality of position points into a plurality of groups of position points according to different space areas under the camera coordinate system;
and determining a compensation matrix for the 3D camera according to the measurement position coordinates of each group of position points in the camera coordinate system and the initial pose.
3. The 3D camera calibration method according to claim 2, wherein the dividing the plurality of position points into a plurality of groups of position points according to the difference of the spatial regions in the camera coordinate system comprises:
dividing the space into a plurality of layers according to different heights under the camera coordinate system, wherein a plurality of partitions are divided in each layer;
and dividing the position points in the same partition in the same layer into a group of position points according to the position coordinates of the position points in the camera coordinate system.
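The layering-and-partitioning grouping of claim 3 can be sketched as follows. The uniform x/y grid within each height layer, the bin edges, and all names are illustrative assumptions; the claim does not fix a particular partitioning scheme.

```python
import numpy as np

def group_points(points, z_edges, n_cells=4, xy_range=(-100.0, 100.0)):
    """Group camera-frame 3D points (N x 3) by height layer and, within each
    layer, by a uniform x/y grid partition. Points in the same partition of
    the same layer land in the same group."""
    lo, hi = xy_range
    groups = {}
    for p in points:
        layer = int(np.searchsorted(z_edges, p[2]))          # layer index from the z coordinate
        cx = min(int((p[0] - lo) / (hi - lo) * n_cells), n_cells - 1)
        cy = min(int((p[1] - lo) / (hi - lo) * n_cells), n_cells - 1)
        groups.setdefault((layer, cx, cy), []).append(p)     # same layer + cell -> same group
    return groups
```

Each group can then be fitted with its own compensation matrix, which is what makes the compensation spatially varying.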
4. The 3D camera calibration method according to claim 2, wherein the determining a compensation matrix for the 3D camera according to the measured position coordinates of each group of position points in the camera coordinate system and the initial pose comprises:
determining initial theoretical position coordinates of each group of position points under the camera coordinate system according to the initial pose;
fitting the measurement position coordinates and the initial theoretical position coordinates of each group of position points in the camera coordinate system to determine an initial compensation matrix;
obtaining a plurality of adjusted poses of the 3D camera in the displacement table coordinate system, and determining theoretical position coordinates of the plurality of groups of position points after each adjustment of the 3D camera;
and adjusting the initial compensation matrix according to the plurality of adjusted poses of the 3D camera in the displacement table coordinate system, the measurement position coordinates of the plurality of groups of position points in the camera coordinate system, and the theoretical position coordinates after each adjustment of the 3D camera, until the Euclidean distance of the error between the measurement position coordinates of each group of position points in the camera coordinate system and the current theoretical position coordinates is smaller than a preset threshold and/or a preset number of adjustments is reached, and taking the current compensation matrix as the compensation matrix.
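The per-group fitting of measurement against theoretical coordinates in claim 4 can be sketched as a least-squares fit of a 4x4 matrix. The affine least-squares form and all names here are assumptions; the claim does not fix a particular fitting method.

```python
import numpy as np

def fit_compensation(measured, theoretical):
    """Least-squares fit of a 4x4 matrix M mapping measured camera-frame
    coordinates onto their theoretical positions.
    measured, theoretical: N x 3 arrays of corresponding points (N >= 4,
    non-degenerate)."""
    N = measured.shape[0]
    A = np.hstack([measured, np.ones((N, 1))])       # N x 4 homogeneous measured points
    # solve A @ X = theoretical in the least-squares sense, column by column
    X, *_ = np.linalg.lstsq(A, theoretical, rcond=None)
    M = np.eye(4)
    M[:3, :] = X.T                                   # top 3 rows hold the fitted affine map
    return M

def apply_compensation(M, pts):
    """Apply a 4x4 compensation matrix to N x 3 points."""
    h = np.hstack([pts, np.ones((len(pts), 1))])
    return (h @ M.T)[:, :3]
```

Iterating this fit against the theoretical coordinates recomputed after each camera-pose adjustment, until the residual Euclidean error falls below a threshold, mirrors the stopping condition of the claim.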
5. The 3D camera calibration method according to claim 4, wherein the determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the initial pose comprises:
acquiring, for each group of position points, the pose of the base station in the displacement table coordinate system and the pose of the calibration plate in the camera coordinate system;
determining the pose of the calibration plate relative to the base station according to the initial pose of the 3D camera in the displacement table coordinate system, the pose of the base station in the displacement table coordinate system and the pose of the calibration plate in the camera coordinate system;
determining the position coordinates of each group of position points under the coordinate system of the displacement table according to the position coordinates of each group of position points under the coordinate system of the calibration plate, the pose of the calibration plate relative to the base station and the pose of the base station under the coordinate system of the displacement table;
and determining the initial theoretical position coordinates of each group of position points in the camera coordinate system according to the position coordinates of each group of position points in the displacement table coordinate system and the initial pose.
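The transform chain of claim 5 can be sketched with homogeneous 4x4 poses. The frame-naming convention used here (`T_a_b` maps b-frame coordinates into frame a) is an assumption.

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def theoretical_camera_coords(p_plate, T_stage_cam, T_stage_base, T_cam_plate):
    """Claim-5 chain: derive the plate pose relative to the base station from
    the three known poses, carry a plate-frame point into the displacement-table
    frame, then back into the camera frame."""
    # pose of the calibration plate relative to the base station
    T_base_plate = np.linalg.inv(T_stage_base) @ T_stage_cam @ T_cam_plate
    p_h = np.append(np.asarray(p_plate, dtype=float), 1.0)
    # position of the point in the displacement-table frame
    p_stage = T_stage_base @ T_base_plate @ p_h
    # initial theoretical position in the camera frame
    return (np.linalg.inv(T_stage_cam) @ p_stage)[:3]
```

At the initial pose the chain collapses algebraically to `T_cam_plate` applied to the point, which is the self-consistency one expects before any adjustment; the chain only produces non-trivial theoretical coordinates once the poses come from different measurements.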
6. The 3D camera calibration method according to any one of claims 1 to 5, wherein the obtaining of the measured position coordinates of the plurality of position points of the calibration plate in the camera coordinate system comprises:
obtaining a plurality of calibration plate images in response to the calibration plate moving to a plurality of spatial positions;
and acquiring the measurement position coordinates of the plurality of position points in the camera coordinate system according to the plurality of calibration plate images.
7. The 3D camera calibration method according to any one of claims 1 to 5, wherein the determining of the initial pose of the 3D camera in the displacement table coordinate system comprises:
in the process in which the displacement table drives the calibration plate to move through the base station, acquiring a plurality of first poses of the base station in the displacement table coordinate system and a plurality of second poses of the calibration plate in the camera coordinate system;
according to any one set of the first pose and the second pose, determining an initial pose of the 3D camera in the displacement table coordinate system.
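One plausible closed form for recovering the camera pose from a single (first pose, second pose) pair is sketched below. Claim 7 leaves the algebra implicit, so the formula, the assumption that the plate's mounting pose on the base station is known (identity by default), and all names are assumptions.

```python
import numpy as np

def initial_camera_pose(T_stage_base, T_cam_plate, T_base_plate=None):
    """Initial pose of the 3D camera in the displacement-table frame from one
    first pose (base station in the stage frame) and one second pose
    (calibration plate in the camera frame), assuming the plate's pose on
    the base station is known:
        T_stage_cam = T_stage_base @ T_base_plate @ inv(T_cam_plate)
    """
    if T_base_plate is None:
        T_base_plate = np.eye(4)   # assume the plate sits at the base-station origin
    return T_stage_base @ T_base_plate @ np.linalg.inv(T_cam_plate)
```

Because any one pair suffices, averaging the estimates from several pairs is a natural refinement when the poses are noisy.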
8. A point cloud image acquisition method is characterized by comprising the following steps:
acquiring position coordinates of a surface position point of a target object under a camera coordinate system;
compensating the position coordinates according to a compensation matrix to obtain compensated position coordinates, wherein the compensation matrix is determined according to the measured position coordinates of a plurality of position points of the calibration plate in a camera coordinate system and the initial pose of the 3D camera in a displacement table coordinate system;
and generating a point cloud image corresponding to the target object according to the position coordinate after the compensation processing.
9. The point cloud image acquisition method of claim 8, wherein the compensation matrix is obtained by the 3D camera calibration method of any one of claims 1 to 7.
10. The point cloud image obtaining method according to claim 8 or 9, wherein the performing compensation processing on the position coordinates according to a compensation matrix to obtain compensated position coordinates includes:
determining the layering and the partition of the surface position point according to the position coordinate of the surface position point of the target object in the camera coordinate system and the internal parameters of the 3D camera;
and according to the compensation matrix corresponding to the layering and the partition of the surface position point, performing compensation processing on the position coordinate of the surface position point in the camera coordinate system to obtain a position coordinate after the compensation processing.
11. The point cloud image acquisition method according to claim 10, wherein the determining the hierarchy and the partition of the surface position point of the target object according to the position coordinates of the surface position point in the camera coordinate system and the internal parameters of the 3D camera comprises:
determining the layering of the surface position points according to the z-axis coordinates of the surface position points in the camera coordinate system;
determining pixel coordinates corresponding to the surface position points according to the x-axis coordinates and the y-axis coordinates of the surface position points in the camera coordinate system and the internal parameters of the 3D camera;
and determining the partition where the surface position point is located according to the pixel coordinates.
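The layer-and-partition lookup of claim 11 can be sketched as follows. The pinhole projection model and the fixed-size square pixel cells are assumptions; the claim only says "internal parameters" of the 3D camera.

```python
import numpy as np

def locate_region(p_cam, fx, fy, cx, cy, z_edges, cell=64):
    """Locate the layer and partition of a camera-frame surface point: the
    layer follows from the z-axis coordinate, the partition from the pixel
    the point projects to under a pinhole model with intrinsics fx, fy, cx, cy."""
    x, y, z = p_cam
    layer = int(np.searchsorted(z_edges, z))   # height layer from the z-axis coordinate
    u = fx * x / z + cx                        # pixel coordinates from x, y and the intrinsics
    v = fy * y / z + cy
    partition = (int(u) // cell, int(v) // cell)
    return layer, partition
```

The returned (layer, partition) pair then selects which precomputed compensation matrix is applied to the point.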
12. The point cloud image obtaining method according to claim 10, wherein the obtaining the position coordinates of the surface position points after the compensation processing by performing the compensation processing on the position coordinates of the surface position points in the camera coordinate system according to the compensation matrix corresponding to the layering and the partitioning where the surface position points are located includes:
when the surface position point spans multiple layers and/or partitions, determining a weight for each spanned layer and/or partition according to the distance between the surface position point and that layer and/or partition;
compensating the position coordinates of the surface position point in the camera coordinate system according to the compensation matrix of each spanned layer and/or partition, respectively, to obtain a plurality of compensated position coordinates of the surface position point;
and performing a weighted summation of the plurality of compensated position coordinates of the surface position point according to the weight of each spanned layer and/or partition, and taking the weighted sum as the position coordinates after the compensation processing.
13. A camera calibration system, characterized by comprising a 3D camera and a displacement table;
the 3D camera is used for acquiring the measurement position coordinates of a plurality of position points of the calibration plate under a camera coordinate system; determining an initial pose of the 3D camera under a displacement table coordinate system; determining a compensation matrix for the 3D camera according to the measurement position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system;
and the displacement table is used for driving the calibration plate to move through the base station.
14. The camera calibration system according to claim 13, wherein the 3D camera is configured to perform the 3D camera calibration method according to any one of claims 1 to 7.
15. A 3D camera calibration device, comprising:
the acquisition module is used for acquiring the measurement position coordinates of a plurality of position points of the calibration plate in a camera coordinate system;
the first determination module is used for determining the initial pose of the 3D camera under a displacement table coordinate system;
and the second determination module is used for determining a compensation matrix for the 3D camera according to the measurement position coordinates of the plurality of position points in the camera coordinate system and the initial pose of the 3D camera in the displacement table coordinate system.
16. A point cloud image acquisition apparatus, comprising:
the acquisition module is used for acquiring the position coordinates of the surface position point of the target object in a camera coordinate system;
the processing module is used for performing compensation processing on the position coordinates according to a compensation matrix to obtain position coordinates after compensation processing, wherein the compensation matrix is determined according to the measurement position coordinates of a plurality of position points of the calibration plate in a camera coordinate system and the initial pose of the 3D camera in a displacement table coordinate system;
and the generating module is used for generating a point cloud image corresponding to the target object according to the position coordinates after the compensation processing.
17. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer execution instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of claims 1 to 12.
18. A computer-readable storage medium, in which computer program instructions are stored which, when executed by a processor, implement the method of any one of claims 1 to 12.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211483388.6A CN115719387A (en) | 2022-11-24 | 2022-11-24 | 3D camera calibration method, point cloud image acquisition method and camera calibration system |
PCT/CN2023/125286 WO2024109403A1 (en) | 2022-11-24 | 2023-10-18 | 3d camera calibration method, point cloud image acquisition method, and camera calibration system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211483388.6A CN115719387A (en) | 2022-11-24 | 2022-11-24 | 3D camera calibration method, point cloud image acquisition method and camera calibration system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115719387A true CN115719387A (en) | 2023-02-28 |
Family
ID=85256350
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211483388.6A Pending CN115719387A (en) | 2022-11-24 | 2022-11-24 | 3D camera calibration method, point cloud image acquisition method and camera calibration system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115719387A (en) |
WO (1) | WO2024109403A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024109403A1 (en) * | 2022-11-24 | 2024-05-30 | 梅卡曼德(北京)机器人科技有限公司 | 3d camera calibration method, point cloud image acquisition method, and camera calibration system |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119205935B (en) * | 2024-11-26 | 2025-02-21 | 浙江华锐捷技术有限公司 | Calibration method, electronic device, computer-readable storage medium, and calibration system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112132891A (en) * | 2020-11-26 | 2020-12-25 | 三代光学科技(天津)有限公司 | Method for enlarging calibration space |
US11967111B2 (en) * | 2020-12-15 | 2024-04-23 | Kwangwoon University Industry-Academic Collaboration Foundation | Multi-view camera-based iterative calibration method for generation of 3D volume model |
CN112223302B (en) * | 2020-12-17 | 2021-02-26 | 国网瑞嘉(天津)智能机器人有限公司 | Rapid calibration method and device of live working robot based on multiple sensors |
CN115810052A (en) * | 2021-09-16 | 2023-03-17 | 梅卡曼德(北京)机器人科技有限公司 | Camera calibration method, device, electronic equipment and storage medium |
CN114371472B (en) * | 2021-12-15 | 2024-07-12 | 中电海康集团有限公司 | Automatic combined calibration device and method for laser radar and camera |
CN114519738A (en) * | 2022-01-24 | 2022-05-20 | 西北工业大学宁波研究院 | Hand-eye calibration error correction method based on ICP algorithm |
CN115719387A (en) * | 2022-11-24 | 2023-02-28 | 梅卡曼德(北京)机器人科技有限公司 | 3D camera calibration method, point cloud image acquisition method and camera calibration system |
Also Published As
Publication number | Publication date |
---|---|
WO2024109403A1 (en) | 2024-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111354042B (en) | Feature extraction method and device of robot visual image, robot and medium | |
CN111179358B (en) | Calibration method, device, equipment and storage medium | |
CN112444242B (en) | Pose optimization method and device | |
CN109405765B (en) | A high-precision depth calculation method and system based on speckle structured light | |
CN106408609B (en) | A kind of parallel institution end movement position and posture detection method based on binocular vision | |
CN113920206B (en) | Calibration method of perspective tilt-shift camera | |
CN112184811B (en) | Monocular space structured light system structure calibration method and device | |
WO2024109403A1 (en) | 3d camera calibration method, point cloud image acquisition method, and camera calibration system | |
CN112991453A (en) | Calibration parameter calibration method and device for binocular camera and electronic equipment | |
CN113822920B (en) | Method for acquiring depth information by structured light camera, electronic equipment and storage medium | |
CN111383264B (en) | Positioning method, positioning device, terminal and computer storage medium | |
CN112184809B (en) | Relative posture estimation method, device, electronic equipment and medium | |
CN108182708B (en) | Calibration method and calibration device of binocular camera and terminal equipment | |
CN114387347B (en) | Method, device, electronic equipment and medium for determining external parameter calibration | |
Ding et al. | A robust detection method of control points for calibration and measurement with defocused images | |
CN117173254A (en) | Camera calibration method, system, device and electronic equipment | |
CN111311671B (en) | Workpiece measuring method and device, electronic equipment and storage medium | |
CN116051634A (en) | Visual positioning method, terminal and storage medium | |
CN115810052A (en) | Camera calibration method, device, electronic equipment and storage medium | |
CN115018922A (en) | Distortion parameter calibration method, electronic device and computer readable storage medium | |
CN109829950B (en) | Method and device for detecting calibration parameters of binocular camera and automatic driving system | |
CN112631200A (en) | Machine tool axis measuring method and device | |
CN115984389B (en) | Calibration method, system calibration method, device and electronic equipment | |
CN117541661A (en) | Binocular camera external parameter automatic correction method, system, device and storage medium | |
CN115205356B (en) | Binocular stereo vision-based quick debugging method for practical training platform |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||