
CN117830435A - Multi-camera system calibration method based on 3D reconstruction - Google Patents

Multi-camera system calibration method based on 3D reconstruction

Info

Publication number
CN117830435A
CN117830435A
Authority
CN
China
Prior art keywords
camera
calibration
image
calibration object
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410046673.4A
Other languages
Chinese (zh)
Inventor
姜光
高静源
苏正阳
贾静
高常泰
李鹏程
陈浩
魏诗瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN202410046673.4A
Publication of CN117830435A
Legal status: Pending


Classifications

    • G06T7/80 (Physics; Computing; Image data processing): Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V10/74 (Image or video recognition using pattern recognition or machine learning): Image or video pattern matching; proximity measures in feature spaces
    • G06T2200/08 (Indexing scheme for image data processing or generation): involving all processing steps from image acquisition to 3D model generation
    • G06T2207/10028 (Image acquisition modality): Range image; depth image; 3D point clouds
    • G06T2207/30244 (Subject of image; context of image processing): Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a multi-camera system calibration method based on three-dimensional reconstruction, implemented in the following steps: constructing a calibration scene for the multi-camera system to be calibrated; acquiring image pairs of the scene images; performing three-dimensional reconstruction of the calibration object; recovering the scale of the calibration object's sparse 3D point cloud; building a descriptor standard library; acquiring the extrinsic and intrinsic calibration images of the multi-camera system; and obtaining the calibration results for the intrinsic and extrinsic parameters of the multi-camera system. By reconstructing the calibration object in 3D from the image pairs of all scene images, the method obtains a sparse 3D point cloud of the calibration object at true scale, so that the feature points in an image can be accurately matched to the 3D points of the calibration object no matter from which angle a camera to be calibrated photographs the calibration object, which effectively improves calibration accuracy. At the same time, calibrating the extrinsic and intrinsic camera parameters through the descriptor standard library effectively improves calibration efficiency.

Description

Multi-camera system calibration method based on 3D reconstruction

Technical Field

The present invention belongs to the field of computer technology and relates to a multi-camera system calibration method, and in particular to a multi-camera system calibration method based on three-dimensional reconstruction.

Background Art

A multi-camera system is a system consisting of multiple cameras that work together to capture images of the same scene from different viewpoints or positions. The parameters of a multi-camera system include the intrinsic and extrinsic parameters of the cameras. The intrinsic parameters describe the imaging characteristics of a camera itself, including the focal length and the principal point coordinates; the extrinsic parameters describe the position and orientation of the camera coordinate system relative to some reference coordinate system, such as a calibration-object or world coordinate system.

Accurately calibrating the intrinsic and extrinsic parameters of a multi-camera system is the foundation of multi-camera collaboration. Multi-camera calibration typically involves four key steps: selecting a calibration object, capturing calibration images, detecting feature points, and estimating camera parameters. The difficulty of multi-camera calibration in a space-constrained environment lies in ensuring that every camera clearly captures an image of the calibration object and that the feature points detected in each image can be matched to the actual 3D points of the calibration object.

Reprojection error is a key indicator of multi-camera calibration accuracy. It is defined as the difference between the projections of the 3D points, reprojected onto the image plane using the calibration results (intrinsic and extrinsic parameters), and the actually observed image points. It is generally accepted in the industry that a reprojection error within 0.1 pixels indicates an accurate calibration result.
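The definition above can be sketched in a few lines of numpy. This is an illustrative sketch, not code from the patent: all names, the pinhole model, and the use of the mean pixel distance are assumptions of the example.

```python
import numpy as np

def reprojection_error(K, R, t, points_3d, points_2d):
    """Mean pixel distance between the observed image points and the
    3D points reprojected with the calibration result (K, R, t)."""
    # Transform world points into the camera frame, then project.
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # (N, 3) camera-frame points
    proj = (K @ cam.T).T                          # homogeneous pixel coords
    proj = proj[:, :2] / proj[:, 2:3]             # perspective divide
    return np.linalg.norm(proj - points_2d, axis=1).mean()

# Synthetic check: points projected with the true parameters must have
# (near-)zero reprojection error.
K = np.array([[1373.77, 0.0, 968.45],
              [0.0, 1373.5, 542.036],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
pts3d = np.array([[0.1, -0.2, 1.0], [-0.3, 0.1, 1.5], [0.2, 0.3, 0.8]])
obs = (K @ (R @ pts3d.T + t.reshape(3, 1))).T
obs = obs[:, :2] / obs[:, 2:3]
err = reprojection_error(K, R, t, pts3d, obs)
```

Any miscalibration of K, R or t would show up directly as a larger mean distance, which is why this quantity is the standard acceptance test.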

The patent application with publication number CN116704045A, titled "Multi-camera system calibration method for monitoring a starry-sky background simulation system", discloses a multi-camera calibration method that: determines the 3D representation, in the world coordinate system, of the marker points on a calibration plate; sets each three-dimensional turntable to its initial zero position; uses the calibration plate to calibrate the high-precision camera in each camera-device group, determining each camera's intrinsic and extrinsic parameters; places marker points on the rotatable axis frame of the turntable in each group; selects two groups of camera devices to calibrate the turntables of the remaining groups; then selects another two groups to calibrate the turntables of the first two groups; and finally selects a reference coordinate system and determines the pose of each remaining group's camera coordinate system in that reference frame, yielding the extrinsic parameters of the multi-camera system. That invention can quickly and accurately calibrate the cameras and turntables used for monitoring a starry-sky system. However, obtaining the intrinsic and extrinsic calibration results requires precise coordination between multiple cameras and the three-dimensional turntables, so its calibration accuracy and efficiency remain low.

Summary of the Invention

The purpose of the present invention is to overcome the above-mentioned defects of the prior art by proposing a multi-camera system calibration method based on three-dimensional reconstruction, which solves the technical problems of low calibration accuracy and efficiency in the prior art.

To achieve the above object, the technical solution adopted by the present invention comprises the following steps:

(1) Construct a calibration scene for the multi-camera system to be calibrated:

Construct a calibration scene comprising a calibration object shaped as a convex polyhedron, a checkerboard placed beside it, and a multi-camera system of S cameras to be calibrated, S ≥ 2;

(2) Acquire image pairs of the scene images:

Photograph the calibration scene from N angles with a camera of known intrinsics C, obtaining N scene images of which L fully show the checkerboard and L' do not. Perform feature detection on each scene image to obtain the feature point set of the N scene images, the Z feature points contained in the n-th scene image together with their Z descriptors, and the mapping g(q_nz) = d_nz between the z-th feature point q_nz and its descriptor d_nz. After matching q_nz against every feature point of the other N-1 scene images, pair the n-th scene image with each scene image that shares more than β matched feature points, obtaining multiple image pairs for the n-th scene image, where N = L + L', N ≥ 20, L ≥ 2;

(3) Perform three-dimensional reconstruction of the calibration object:

Reconstruct the calibration object in three dimensions from the image pairs of the N scene images, obtaining the calibration object's sparse 3D point cloud X composed of I 3D points, the correspondence F between X and the feature point set, and, for the L scene images that fully show the checkerboard, the camera poses formed by the rotation-matrix set r = {r_l | 1 ≤ l ≤ L} and the translation-vector set t = {t_l | 1 ≤ l ≤ L} of the calibration-object coordinate system relative to the camera coordinate system, where r_l and t_l denote the rotation matrix and translation vector for the l-th such image, and X_i denotes the i-th 3D point of X;

(4) Recover the scale of the calibration object's sparse 3D point cloud:

Recover the scale of the calibration object's sparse 3D point cloud X using the camera intrinsics C and the L scene images that fully show the checkerboard, obtaining the sparse 3D point cloud X' of the calibration object at true scale;

(5) Build the descriptor standard library:

Using the correspondence F between X and the feature point set, search the feature point set for one feature point corresponding to each 3D point X_i, and via the mapping g(q_nz) = d_nz assemble the descriptors of the I feature points into the descriptor standard library J;

(6) Acquire the extrinsic and intrinsic calibration images of the multi-camera system:

The S cameras to be calibrated in the multi-camera system photograph the calibration object within their common field of view, obtaining S extrinsic calibration images B, and photograph the calibration object after it is moved into each camera's field of view, obtaining S intrinsic calibration images E;

(7) Obtain the calibration results for the intrinsic and extrinsic parameters of the multi-camera system:

Using the true-scale sparse 3D point cloud X' of the calibration object, the descriptor standard library J and the S intrinsic calibration images E, calibrate each camera's focal lengths a_x and a_y along the x- and y-axes of the image coordinate system and its principal point coordinates x_0 and y_0, obtaining the intrinsic matrices K_S of the multi-camera system; then, using K_S, X', J and the S extrinsic calibration images B, calibrate the rotation matrix R_S and translation vector t_S of the calibration-object coordinate system relative to each camera's coordinate system, obtaining the extrinsic parameters of the multi-camera system.

Compared with the prior art, the present invention has the following advantages:

1. The present invention photographs the calibration scene from different angles with a camera of known intrinsics and, after reconstructing the calibration object in 3D from the image pairs of all scene images, obtains a sparse 3D point cloud of the calibration object at true scale. This ensures that no matter from which angle a camera to be calibrated photographs the calibration object, the feature points in the image can be accurately matched to the 3D points of the calibration object, finally achieving calibration of the intrinsic and extrinsic parameters of multiple cameras. It avoids the prior art's need for precise coordination between multiple cameras and a three-dimensional turntable, effectively improving calibration accuracy.

2. The present invention obtains a descriptor standard library, containing the descriptors of multiple feature points, from the feature point corresponding to each 3D point in the feature point set and from the mapping between each feature point and its descriptor, and calibrates the extrinsic and intrinsic camera parameters through it. This allows the feature points in the intrinsic and extrinsic calibration images to be matched quickly to the 3D points of the calibration object, effectively improving calibration efficiency.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow chart of the implementation of the present invention.

DETAILED DESCRIPTION

The present invention is described in further detail below with reference to the accompanying drawing and specific embodiments.

Referring to FIG. 1, the present invention comprises the following steps:

Step 1) Construct the calibration scene of the multi-camera system to be calibrated:

Construct a calibration scene comprising a calibration object shaped as a convex polyhedron, a checkerboard placed beside it, and a multi-camera system of S cameras to be calibrated, S ≥ 2;

Because a convex polyhedron has multiple faces in space, cameras at different angles and positions can each capture some features of the calibration object. In this embodiment S = 4, and a sphere with a diameter of 1 meter, whose surface is rich in feature points, is used as the calibration object. A checkerboard is placed beside it so that feature points on both the sphere and the checkerboard can be captured simultaneously when photographing the scene. The checkerboard is designed with squares of 30 mm side length; this known size is crucial for recovering the scale of the sparse 3D point cloud in later steps.

Step 2) Acquire image pairs of the scene images:

Photograph the calibration scene from N angles with a camera of known intrinsics C, obtaining N scene images of which L fully show the checkerboard and L' do not. Perform feature detection on each scene image to obtain the feature point set of the N scene images, the Z feature points contained in the n-th scene image together with their Z descriptors, and the mapping g(q_nz) = d_nz between the z-th feature point q_nz and its descriptor d_nz. After matching q_nz against every feature point of the other N-1 scene images, pair the n-th scene image with each scene image that shares more than β matched feature points, obtaining multiple image pairs for the n-th scene image, where N = L + L', N ≥ 20, L ≥ 2;

In this embodiment, N = 30, L = 5 and β = 100; the camera's focal lengths along the x- and y-axes of the image coordinate system are 1373.77 and 1373.5 pixels, and its principal point coordinates are 968.45 and 542.036 pixels.

When photographing the calibration scene from N angles with the camera of known intrinsics C, the overlap between images taken from adjacent angles exceeds 60%, and the camera resolution is 1920×1080 pixels.
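The pairing rule of step 2 can be sketched as follows, assuming descriptors are stored as one array per image. This is a simplified stand-in for the patent's matcher: real pipelines match SIFT-style descriptors with ratio tests or approximate nearest-neighbour search, whereas this sketch uses a plain distance threshold for brevity; the threshold `tau`, the toy descriptors, and all names are assumptions of the example.

```python
import numpy as np

def match_count(desc_a, desc_b, tau=1.0):
    """Number of descriptors in A whose nearest neighbour in B lies
    closer than tau (a simple distance-threshold matcher)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return int(np.sum(d.min(axis=1) < tau))

def build_image_pairs(descriptors, beta):
    """descriptors: one (Z_n x D) array per scene image.  Pair image n
    with every other image sharing more than beta matched features."""
    pairs = []
    for n in range(len(descriptors)):
        for m in range(n + 1, len(descriptors)):
            if match_count(descriptors[n], descriptors[m]) > beta:
                pairs.append((n, m))
    return pairs

# Toy scene: images 0 and 1 observe the same 8 features; image 2 does not.
rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 16))
descs = [np.vstack([shared, rng.normal(size=(4, 16))]),
         np.vstack([shared, rng.normal(size=(4, 16))]),
         rng.normal(size=(12, 16))]
pairs = build_image_pairs(descs, beta=5)
```

With β = 5 only images 0 and 1 share enough matches to form a pair; in the patent's setting β = 100 plays the same gatekeeping role over the N = 30 scene images.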

Step 3) Perform 3D reconstruction of the calibration object:

The SfM (structure-from-motion) algorithm reconstructs the calibration object in three dimensions from the image pairs of the N scene images, yielding the calibration object's sparse 3D point cloud X composed of I 3D points and the correspondence F between X and the feature point set, where F satisfies:

(X_i, q_nz) ∈ F

where the Cartesian product of the sparse 3D point cloud X and the feature point set of the N scene images is the set of all possible ordered pairs of a 3D point X_i and an image feature point q_nz, and (X_i, q_nz) ∈ F indicates that, in the SfM algorithm, the 3D point X_i has a corresponding 2D feature point q_nz in the n-th image;

The SfM process also yields, for the L scene images that fully show the checkerboard, the camera poses formed by the rotation-matrix set r = {r_l | 1 ≤ l ≤ L} and the translation-vector set t = {t_l | 1 ≤ l ≤ L} of the calibration-object coordinate system relative to the camera coordinate system, where r_l and t_l denote the rotation matrix and translation vector for the l-th such image. These camera poses prepare for the scale recovery of the calibration object's sparse 3D point cloud in the following step. Here r_l is a 3×3 matrix of three orthogonal unit column vectors, each representing the direction of an axis of the camera coordinate system in the calibration-object coordinate system, and t_l is a 3×1 vector giving the position of the camera-frame origin relative to the calibration-object-frame origin. In this embodiment, I = 47402.

Step 4) Recover the scale of the calibration object's sparse 3D point cloud:

Recover the scale of the sparse 3D point cloud X using the camera intrinsics C and the L scene images that fully show the checkerboard, obtaining the true-scale sparse 3D point cloud X':

X'_i = sc · X_i

t_{l,l+1} = t_{l+1} - r_{l,l+1} t_l

r_{l,l+1} = r_{l+1} r_l^T

where sc denotes the scale factor, ‖·‖_2 denotes the 2-norm, r_l and t_l denote the rotation matrix and translation vector of the calibration-object coordinate system relative to the camera coordinate system for the l-th image fully showing the checkerboard, r_{l,l+1} and t_{l,l+1} denote the rotation matrix and translation vector of the camera coordinate system of the l-th such shot relative to the (l+1)-th, and t′_{l,l+1} denotes t_{l,l+1} at true scale;

t′_{l,l+1} is obtained as follows: extract the checkerboard corners from each of the L scene images containing the complete checkerboard, estimate the camera pose with the EPnP algorithm using the camera intrinsics C, and refine the pose with Gauss-Newton optimization, obtaining, for each shot of the calibration scene, the true-scale rotation matrix and translation vector of the calibration-object coordinate system relative to the camera coordinate system; finally, from these rotation matrices and translation vectors, compute the true-scale relative translation t′_{l,l+1} of the camera coordinate system between the l-th and (l+1)-th shots. In this embodiment, sc = 0.53754128.
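The relative-motion equations above can be sketched in numpy. The patent's display equation for sc is not reproduced in this text, so the sketch uses a plausible reading consistent with the 2-norm mentioned alongside it: the ratio of the true-scale to the reconstructed inter-view translation norms, averaged over adjacent view pairs. That averaging, and all names, are assumptions of this sketch.

```python
import numpy as np

def relative_translation(r_l, t_l, r_next, t_next):
    """t_{l,l+1} = t_{l+1} - r_{l,l+1} t_l, with r_{l,l+1} = r_{l+1} r_l^T."""
    r_rel = r_next @ r_l.T
    return t_next - r_rel @ t_l

def scale_factor(poses_sfm, poses_metric):
    """poses_*: list of (r, t) pairs for the L checkerboard views, at
    reconstruction scale and at true (EPnP-derived) scale respectively."""
    ratios = []
    for l in range(len(poses_sfm) - 1):
        t_rec = relative_translation(*poses_sfm[l], *poses_sfm[l + 1])
        t_met = relative_translation(*poses_metric[l], *poses_metric[l + 1])
        ratios.append(np.linalg.norm(t_met) / np.linalg.norm(t_rec))
    return float(np.mean(ratios))

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Synthetic check: an SfM reconstruction whose translations are the
# metric ones divided by sc_true must yield sc_true back exactly.
sc_true = 0.53754128
metric = [(rot_z(0.1 * l), np.array([0.3 * l, 0.1, 1.0 + 0.2 * l]))
          for l in range(3)]
sfm = [(r, t / sc_true) for r, t in metric]
sc = scale_factor(sfm, metric)
X_prime = sc * np.array([1.0, 2.0, 3.0])   # X'_i = sc * X_i
```

Because an SfM reconstruction is defined only up to a global similarity, multiplying every reconstructed point by sc is exactly what restores metric units here.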

Step 5) Build the descriptor standard library:

Using the correspondence F between X and the feature point set, search the feature point set for one feature point corresponding to each 3D point X_i, and via the mapping g(q_nz) = d_nz assemble the descriptors of the I feature points into the descriptor standard library J. The specific steps are as follows:

Search the feature point set for the feature points {q_nz | (X_i, q_nz) ∈ F} corresponding to each 3D point X_i and randomly select one of them; for the selected feature point q_nz, obtain its descriptor d_nz through the mapping g(q_nz) = d_nz and add it to the set J, i.e. J = J ∪ {d_nz};
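A minimal sketch of this library construction, assuming F is stored as (point index, feature id) pairs and the mapping g as a dictionary; representing J as a mapping from point index to descriptor, and all names, are choices of this sketch rather than the patent's.

```python
import random

def build_descriptor_library(F, g, seed=0):
    """F: iterable of (i, feature_id) pairs linking 3D point X_i to an
    observing image feature; g: dict feature_id -> descriptor.  For
    each 3D point, one observing feature is chosen at random and its
    descriptor enters the library J (point index -> descriptor)."""
    rng = random.Random(seed)
    observations = {}
    for i, q in F:                      # group observations by 3D point
        observations.setdefault(i, []).append(q)
    return {i: g[rng.choice(qs)] for i, qs in observations.items()}

# Toy example: point 0 is observed by two features, point 1 by one.
F = [(0, "q_11"), (0, "q_23"), (1, "q_17")]
g = {"q_11": (0.1, 0.9), "q_23": (0.2, 0.8), "q_17": (0.7, 0.3)}
J = build_descriptor_library(F, g)
```

Keeping exactly one descriptor per 3D point is what later lets step 7 match any new image against the model with a single nearest-neighbour query per feature.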

Step 6) Acquire the extrinsic and intrinsic calibration images of the multi-camera system:

Move the calibration object into the common field of view of the S cameras and photograph it simultaneously with every camera of the multi-camera system, obtaining S extrinsic calibration images B; then move the calibration object into each camera's field of view in turn, so that it covers that field of view as fully as possible, and photograph it, obtaining S intrinsic calibration images E. In this embodiment S = 4.

Step 7) Obtain the calibration results for the intrinsic and extrinsic parameters of the multi-camera system:

Using the true-scale sparse 3D point cloud X' of the calibration object, the descriptor standard library J and the S intrinsic calibration images E, calibrate each camera's focal lengths a_x and a_y along the x- and y-axes of the image coordinate system and its principal point coordinates x_0 and y_0, obtaining the intrinsic matrices K_S of the multi-camera system; then, using K_S, X', J and the S extrinsic calibration images B, calibrate the rotation matrix R_S and translation vector t_S of the calibration-object coordinate system relative to each camera's coordinate system, obtaining the extrinsic parameters of the multi-camera system. The specific steps are as follows:

(7a) Extract features from each of the intrinsic calibration images E with the SIFT algorithm, obtaining the H feature points of the s-th image and their H corresponding descriptors; match the h-th descriptor v_sh against the descriptor standard library J with the ANN (approximate nearest neighbour) algorithm; then pair the 3D point X'_i corresponding to the closest matching descriptor with the h-th feature point u_sh, obtaining the W matched point pairs of the s-th image, where W ≤ H;

(7b) Compute the camera projection matrix P_s of the multi-camera system from the W matched point pairs, and perform QR decomposition of the matrix M_S formed by the first three columns of P_s; the upper-triangular matrix K_s obtained from the decomposition is the intrinsic matrix of the multi-camera system;
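The decomposition in (7b) can be sketched in numpy. One caveat: factoring M_S into an upper-triangular K times a rotation is, strictly speaking, an RQ decomposition; since numpy only ships QR, the sketch builds RQ from `numpy.linalg.qr` with a row-reversal trick. The names and the sign-fixing convention (positive diagonal of K) are choices of this sketch.

```python
import numpy as np

def rq(M):
    """RQ decomposition M = R @ Q (R upper-triangular, Q orthonormal),
    built from numpy's QR via a reversal permutation."""
    P = np.flipud(np.eye(3))          # permutation that reverses order
    q, r = np.linalg.qr((P @ M).T)    # QR of the flipped, transposed matrix
    return P @ r.T @ P, P @ q.T       # (upper-triangular, orthonormal)

def intrinsics_from_projection(Pmat):
    """Extract K from a 3x4 projection matrix: decompose M = P[:, :3]
    as K @ R, fix signs so diag(K) > 0, normalise so K[2, 2] = 1."""
    K, R = rq(Pmat[:, :3])
    S = np.diag(np.sign(np.diag(K)))  # sign correction, S @ S = I
    K, R = K @ S, S @ R
    return K / K[2, 2], R

# Check on a synthetic projection matrix built from known K, R, t.
K_true = np.array([[1373.77, 0.0, 968.45],
                   [0.0, 1373.5, 542.036],
                   [0.0, 0.0, 1.0]])
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a), np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.1], [0.2], [2.0]])
P = K_true @ np.hstack([R_true, t_true])
K_est, R_est = intrinsics_from_projection(P)
```

Because the upper-triangular-times-rotation factorisation is unique once the diagonal of K is forced positive, the recovered K_est and R_est must equal the matrices the projection was built from.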

(7c) Extract features from each of the extrinsic calibration images B with the SIFT algorithm, obtaining each image's feature points and their corresponding descriptors; match each descriptor against the descriptor standard library J with the ANN algorithm; then pair the 3D point X'_i corresponding to the closest matching descriptor with the feature point to form matched point pairs; using the intrinsic matrix K_s and the matched point pairs, estimate the pose of the multi-camera system with the EPnP algorithm and refine it with the Gauss-Newton method, obtaining the rotation matrix R_S and translation vector t_S of the calibration-object coordinate system relative to each camera's coordinate system in the multi-camera system.
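The refinement stage of (7c) can be sketched without EPnP (which only supplies the initial pose): a Gauss-Newton loop on the reprojection residuals, parameterizing rotation as an axis-angle vector and using a numerical Jacobian for brevity. All names are illustrative; a production implementation would use analytic Jacobians and a genuine EPnP initial estimate.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * Kx + (1.0 - np.cos(th)) * (Kx @ Kx)

def residuals(p, K, X, x):
    """Stacked 2D reprojection residuals for pose p = (axis-angle, t)."""
    R, t = rodrigues(p[:3]), p[3:]
    cam = (R @ X.T).T + t
    proj = (K @ cam.T).T
    return (proj[:, :2] / proj[:, 2:3] - x).ravel()

def gauss_newton_pose(K, X, x, p0, iters=10, eps=1e-6):
    """Refine an initial pose (e.g. from EPnP) by Gauss-Newton on the
    reprojection error; the Jacobian is numerical for brevity."""
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(iters):
        r = residuals(p, K, X, x)
        J = np.empty((r.size, 6))
        for j in range(6):              # forward-difference Jacobian
            dp = np.zeros(6)
            dp[j] = eps
            J[:, j] = (residuals(p + dp, K, X, x) - r) / eps
        p = p - np.linalg.solve(J.T @ J, J.T @ r)  # normal equations step
    return p

# Synthetic check: observations generated from a known pose, refinement
# started from a perturbed pose.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
X = np.array([[0.0, 0.0, 4.0], [1.0, 0.0, 5.0], [0.0, 1.0, 5.0],
              [-1.0, 0.5, 6.0], [0.5, -1.0, 4.5], [-0.5, -0.5, 5.5]])
p_true = np.array([0.1, -0.2, 0.05, 0.2, -0.1, 0.3])
x_obs = residuals(p_true, K, X, np.zeros((len(X), 2))).reshape(-1, 2)
p_refined = gauss_newton_pose(K, X, x_obs, p_true + 0.02)
```

Since the synthetic observations are noise-free, the refined pose should drive the reprojection residuals to effectively zero, which is exactly the acceptance criterion the reprojection-error table below measures for the real system.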

The experimental data in Table 1 are the reprojection errors verifying the accuracy of the intrinsic and extrinsic calibration results of this embodiment. As Table 1 shows, the minimum and maximum reprojection errors over the four cameras calibrated in this embodiment are 0.0237078 and 0.0541603 pixels respectively, a significant reduction compared with the industry benchmark of 0.1 pixels, demonstrating that the calibration results of this embodiment are accurate.

Table 1.

Camera No.    Reprojection error (pixels)
Camera 1      0.050013
Camera 2      0.0237078
Camera 3      0.0541603
Camera 4      0.0480274

Claims (9)

1. A multi-camera system calibration method based on three-dimensional reconstruction, characterized in that it comprises the following steps:

(1) Construct the calibration scene of the multi-camera system to be calibrated:

Construct a calibration scene consisting of a calibration object shaped as a convex polyhedron, a chessboard placed side by side with it, and the multi-camera system of S cameras to be calibrated, S ≥ 2;

(2) Acquire image pairs of the scene images:

Photograph the calibration scene from N angles with a camera whose intrinsic parameters are C, obtaining N scene images in total, of which L fully show the chessboard and L' show it incompletely; perform feature detection on each scene image to obtain the feature point sets of the N scene images, the Z feature points contained in the n-th scene image together with their Z corresponding descriptors, and the mapping g(qnz) = dnz between the z-th feature point qnz and its descriptor dnz; after matching qnz against every feature point of the other N−1 scene images, pair the n-th scene image with each scene image whose number of matched feature points exceeds β, obtaining the groups of image pairs for the n-th scene image, where N = L + L', N ≥ 20, L ≥ 2;

(3) Perform three-dimensional reconstruction of the calibration object:

Reconstruct the calibration object in three dimensions from the image pairs of the N scene images, obtaining the sparse 3D point cloud X of the calibration object composed of I 3D points, the i-th of which is Xi; the correspondence F between X and the feature point sets; and, for the L shots that fully show the chessboard, the camera poses formed by the rotation matrix set r = {rl | 1 ≤ l ≤ L} and the translation vector set t = {tl | 1 ≤ l ≤ L} of the calibration object coordinate system relative to the camera coordinate system, where rl and tl denote the rotation matrix and translation vector for the l-th such shot;

(4) Restore the scale of the sparse 3D point cloud of the calibration object:

Restore the scale of the sparse 3D point cloud X using the camera intrinsics C and the L scene images that fully show the chessboard, obtaining the real-scale sparse 3D point cloud X' of the calibration object;

(5) Build the descriptor standard library:

Using the correspondence F between X and the feature point sets, search the feature point sets for one feature point corresponding to each 3D point Xi, and, through the mapping g(qnz) = dnz between qnz and dnz, collect the descriptors of the I feature points into the descriptor standard library J;

(6) Acquire the extrinsic and intrinsic calibration images of the multi-camera system:

The S cameras to be calibrated photograph the calibration object within their common field of view, yielding S extrinsic calibration images B, and photograph the calibration object after it is moved into each camera's field of view, yielding S intrinsic calibration images E;
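The scale recovery of step (4) hinges on relative poses between consecutive chessboard-complete shots; claim 7 states the relations rl,l+1 = rl+1 rlT and tl,l+1 = tl+1 − rl,l+1 tl. A numpy sketch of those relations, with an assumed per-pair scale-factor form (the patent text here does not reproduce the exact formula for sc, so the ratio of baseline two-norms below is an assumption):

```python
import numpy as np

def relative_pose(r_l, t_l, r_next, t_next):
    """Relative pose between the l-th and (l+1)-th chessboard-complete shots.

    Implements r_{l,l+1} = r_{l+1} r_l^T and
    t_{l,l+1} = t_{l+1} - r_{l,l+1} t_l (claim 7).
    """
    r_rel = r_next @ r_l.T
    t_rel = t_next - r_rel @ t_l
    return r_rel, t_rel

def scale_factor(t_rel_real, t_rel_sfm):
    # Assumed form: ratio of real-scale baseline length to SfM baseline length.
    return np.linalg.norm(t_rel_real) / np.linalg.norm(t_rel_sfm)
```

Multiplying every reconstructed point Xi by sc (claim 7, X'i = sc·Xi) then puts the sparse point cloud in metric units.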
(7) Obtain the intrinsic and extrinsic calibration results of the multi-camera system:

Using the real-scale sparse 3D point cloud X' of the calibration object, the descriptor standard library J and the S intrinsic calibration images E, calibrate each camera's focal lengths ax and ay along the x- and y-axes of the image coordinate system and its principal point coordinates x0 and y0, obtaining the intrinsic matrix KS of the multi-camera system; then, using KS, X', J and the S extrinsic calibration images B, calibrate the rotation matrix RS and translation vector tS of the calibration object coordinate system relative to each camera's coordinate system, obtaining the extrinsics of the multi-camera system.

2. The method according to claim 1, characterized in that the camera intrinsics C in step (2) form a matrix composed of the camera focal lengths ax and ay along the x- and y-axes of the image coordinate system, expressed in pixels, and the principal point coordinates x0 and y0, whose expression is:

        [ ax   0   x0 ]
    C = [  0   ay  y0 ]
        [  0   0    1 ]

3. The method according to claim 1, characterized in that the feature detection performed on each scene image in step (2) uses the SIFT algorithm.

4. The method according to claim 1, characterized in that the three-dimensional reconstruction of the calibration object in step (3) uses the SfM algorithm.

5. The method according to claim 1, characterized in that the correspondence F between X and the feature point sets in step (3) is expressed as:

    (Xi, qnz) ∈ F

where F is contained in the Cartesian product of the sparse 3D point cloud X and the feature point sets of the N scene images, i.e. the set of all possible ordered pairs of a 3D point Xi and an image feature point qnz.

6. The method according to claim 1, characterized in that, for the l-th shot in step (3) that fully shows the chessboard, rl and tl are the rotation matrix and translation vector of the calibration object coordinate system relative to the camera coordinate system, where rl is a 3×3 matrix of three orthogonal unit column vectors, each column vector representing the direction of one axis of the camera coordinate system in the calibration object coordinate system, and tl is a 3×1 vector representing the position of the origin of the camera coordinate system relative to the origin of the calibration object coordinate system.

7. The method according to claim 1, characterized in that the real-scale sparse 3D point cloud X' of the calibration object in step (4) is computed by:

    X'i = sc · Xi
    tl,l+1 = tl+1 − rl,l+1 tl
    rl,l+1 = rl+1 rlT

where sc denotes the scale factor, ‖·‖2 denotes the two-norm operation, rl and tl denote the rotation matrix and translation vector of the calibration object coordinate system relative to the camera coordinate system for the l-th shot that fully shows the chessboard, rl,l+1 and tl,l+1 denote the rotation matrix and translation vector of the camera coordinate system between the l-th and (l+1)-th such shots, and t′l,l+1 denotes the real-scale counterpart of tl,l+1.

8. The method according to claim 1, characterized in that the search in step (5) for one feature point corresponding to each 3D point Xi is performed as follows: search the feature point sets for the feature points {qnz | (Xi, qnz) ∈ F} corresponding to each 3D point Xi, and randomly select one feature point from {qnz | (Xi, qnz) ∈ F}.

9. The method according to claim 1, characterized in that the calibration in step (7) of each camera's focal lengths ax and ay along the x- and y-axes of the image coordinate system and its principal point coordinates x0 and y0 is implemented as follows:

(7a) Use the SIFT algorithm to extract features from each image in the intrinsic calibration images E, obtaining the H feature points of the s-th image and their H corresponding descriptors; use the ANN algorithm to match the h-th descriptor vsh against the descriptor standard library J; then pair the 3D point X'i corresponding to the closest descriptor in the matching result with the h-th feature point ush, obtaining the W matching point pairs of the s-th image, where W ≤ H;

(7b) Compute the camera projection matrix PS of the multi-camera system from the W matching point pairs, and perform QR decomposition on the matrix MS formed by the first three columns of PS; the upper triangular matrix KS obtained from the decomposition is the intrinsic matrix of the multi-camera system.
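Claim 9, step (7b), recovers the intrinsic matrix from the first three columns of the projection matrix. Strictly, the factorization that yields an upper-triangular K times a rotation is an RQ decomposition; numpy ships only QR, but RQ can be built from it with row and column reversals. A sketch under that reading (function names are illustrative, not from the patent):

```python
import numpy as np

def rq(M):
    """RQ decomposition: M = R @ Q, R upper triangular, Q orthogonal."""
    P = np.flipud(np.eye(3))               # row-reversal permutation
    q, r = np.linalg.qr((P @ M).T)         # QR of the flipped, transposed matrix
    R = P @ r.T @ P                        # upper triangular factor
    Q = P @ q.T                            # orthogonal factor
    D = np.diag(np.sign(np.diag(R)))       # force a positive diagonal on R
    return R @ D, D @ Q                    # (R D)(D Q) = R Q = M since D D = I

def intrinsics_from_projection(Pmat):
    """Recover K from the first three columns of a 3x4 projection matrix."""
    M = Pmat[:, :3]
    K, _ = rq(M)
    return K / K[2, 2]                     # fix the projective scale: K[2,2] = 1
```

Normalizing by K[2,2] is needed because the projection matrix estimated from point correspondences is only defined up to scale.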
CN202410046673.4A 2024-01-12 2024-01-12 Multi-camera system calibration method based on 3D reconstruction Pending CN117830435A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410046673.4A CN117830435A (en) 2024-01-12 2024-01-12 Multi-camera system calibration method based on 3D reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410046673.4A CN117830435A (en) 2024-01-12 2024-01-12 Multi-camera system calibration method based on 3D reconstruction

Publications (1)

Publication Number Publication Date
CN117830435A true CN117830435A (en) 2024-04-05

Family

ID=90515297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410046673.4A Pending CN117830435A (en) 2024-01-12 2024-01-12 Multi-camera system calibration method based on 3D reconstruction

Country Status (1)

Country Link
CN (1) CN117830435A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118823140A (en) * 2024-09-19 2024-10-22 中国水利水电科学研究院 A camera automatic calibration method and device for field flow measurement


Similar Documents

Publication Publication Date Title
CN107833181B (en) Three-dimensional panoramic image generation method based on zoom stereo vision
CN112132906B (en) External parameter calibration method and system between depth camera and visible light camera
CN113393439A (en) Forging defect detection method based on deep learning
CN113091608B (en) A Fast Implementation Method of Digital Speckle Correlation Based on Grid Extraction of Seed Points
WO2018201677A1 (en) Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system
CN114119987B (en) Feature extraction and descriptor generation method and system based on convolutional neural network
CN109961485A (en) A method for target localization based on monocular vision
CN102521816A (en) Real-time wide-scene monitoring synthesis method for cloud data center room
CN104537707A (en) Image space type stereo vision on-line movement real-time measurement system
CN106500625B (en) A kind of telecentricity stereo vision measurement method
CN114998448B (en) A method for multi-constrained binocular fisheye camera calibration and spatial point positioning
CN106969723A (en) High speed dynamic object key point method for three-dimensional measurement based on low speed camera array
CN112164119B (en) Calibration method for multi-camera system placed in surrounding mode and suitable for narrow space
CN112132908A (en) A camera external parameter calibration method and device based on intelligent detection technology
CN110874854A (en) Large-distortion wide-angle camera binocular photogrammetry method based on small baseline condition
CN113706635B (en) Long-focus camera calibration method based on point feature and line feature fusion
CN113808273B (en) A disordered incremental sparse point cloud reconstruction method for numerical simulation of ship traveling waves
CN114897990A (en) Camera distortion calibration method and system based on neural network and storage medium
CN117830435A (en) Multi-camera system calibration method based on 3D reconstruction
He et al. A new camera calibration method from vanishing points in a vision system
CN107240149A (en) Object 3D Model Construction Method Based on Image Processing
KR101673144B1 (en) Stereoscopic image registration method based on a partial linear method
CN118365716A (en) Camera calibration method based on checkerboard cube calibration object
CN108537831B (en) Method and device for CT imaging of additively manufactured workpieces
CN116091610B (en) Combined calibration method of radar and camera based on three-dimensional tower type checkerboard

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination