CN108171787A - A three-dimensional reconstruction method based on ORB feature detection - Google Patents
A three-dimensional reconstruction method based on ORB feature detection
- Publication number
- CN108171787A (publication number); application CN201711366005.6A
- Authority
- CN
- China
- Prior art keywords
- points
- camera
- feature
- orb
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T5/80 — Geometric correction
- G06T7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T7/344 — Image registration using feature-based methods involving models
- G06T7/60 — Analysis of geometric attributes
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/10004 — Still image; Photographic image
Abstract
The invention discloses a three-dimensional reconstruction method based on ORB feature detection, belonging to the technical field of feature detection. Real-time images are acquired and preprocessed by a binocular camera mounted on the gimbal of an unmanned aerial vehicle (UAV); the camera is calibrated to solve for the intrinsic and extrinsic parameters and the distortion coefficients, which are used to correct the pixel coordinates; image feature points are detected with ORB feature detection and matched with FLANN; the spatial coordinates of the feature points are computed; and OpenGL is used to reconstruct the spatially discrete points in three dimensions. The invention speeds up system execution while providing rotation invariance and robustness against noise interference, greatly improving the accuracy of real-time three-dimensional reconstruction.
Description
Technical Field
The invention relates to the technical field of feature detection, and in particular to a three-dimensional reconstruction method based on ORB feature detection.
Background Art
The extraction and detection of image feature points is a key part of three-dimensional reconstruction technology, affecting both the accuracy of feature-point matching and the final reconstruction result. With the popularization and development of unmanned aerial vehicles (UAVs), autonomous obstacle avoidance, path planning, and the use of UAV-collected digital image information for real-time three-dimensional reconstruction have drawn increasing attention. Current feature-point detection and matching algorithms for three-dimensional reconstruction include grayscale-based matching, feature-based matching, and matching based on local invariant descriptors; the latter category includes the SIFT, SURF, and D-Nets feature descriptors. When a UAV captures images, the pictures are easily disturbed by ambient light and by the aircraft itself, producing image noise, which strongly affects the result of the three-dimensional reconstruction.
Summary of the Invention
In view of the deficiencies of the prior art, the problem solved by the invention is the low accuracy of real-time three-dimensional reconstruction.
To solve the above technical problem, the invention adopts a three-dimensional reconstruction method based on ORB feature detection: real-time images are acquired and preprocessed by a binocular camera mounted on the gimbal of a UAV; the camera is calibrated to solve for the intrinsic and extrinsic parameters and the distortion coefficients, which are used to correct the pixel coordinates; image feature points are detected with ORB feature detection and matched with FLANN; the spatial coordinates of the feature points are computed; and OpenGL is used to reconstruct the spatially discrete points in three dimensions. The method comprises the following steps:
Step 1: Mount the binocular camera system on the UAV gimbal, establish the UAV ground base station system, and transmit images in real time.
Step 2: Preprocess the acquired images, including Gaussian filtering and histogram equalization.
Step 3: Calibrate the binocular camera, solving for the intrinsic and extrinsic camera parameters and the camera distortion coefficients.
Step 4: Correct the pixel coordinates using the camera distortion coefficients, in preparation for image feature-point detection and matching.
Step 5: Detect features in the images with the ORB algorithm and match the feature points with FLANN.
Step 6: Eliminate mismatched points using the epipolar geometric constraint and optimize the matching with the RANSAC algorithm.
Step 7: From the matched point coordinates, solve for the spatial coordinates of the image feature points using the spatial three-dimensional coordinate formulas to obtain a point cloud model. Specifically, the spatial coordinates (x1, y1, z1) of a point are
x1 = b·u1/(u1 − u2), y1 = b·v1/(u1 − u2), z1 = b·f/(u1 − u2),
where (u1, v1) and (u2, v2) are the coordinates of the corresponding points in the left and right image coordinate systems of the binocular camera, b is the baseline length, and f is the focal length.
Step 8: Use OpenGL to perform three-dimensional reconstruction of the spatially discrete point cloud of the point cloud model.
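Under the usual parallel (rectified) binocular model, the coordinate formulas of Step 7 reduce to a disparity computation. The sketch below is a minimal illustration under assumptions of our own (not stated in the patent): rectified images, pixel coordinates measured relative to the principal point, baseline b in metres, and focal length f in pixels.

```python
import numpy as np

def triangulate(u1, v1, u2, v2, b, f):
    """Recover (x, y, z) from a matched pair of rectified stereo points.

    (u1, v1), (u2, v2): pixel coordinates in the left/right images,
    measured relative to the principal point; b: baseline; f: focal length.
    """
    d = u1 - u2                # disparity; must be non-zero
    x = b * u1 / d
    y = b * v1 / d
    z = b * f / d
    return np.array([x, y, z])

# A 10 px disparity with a 0.1 m baseline and f = 500 px puts the point 5 m away.
p = triangulate(110.0, 20.0, 100.0, 20.0, b=0.1, f=500.0)
```

The depth z falls off as the inverse of the disparity, which is why a longer baseline b improves depth resolution for distant points.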
Step 3 comprises the following sub-steps:
1) Using Zhang Zhengyou's calibration method, establish the parameter matrices to be calibrated from the correspondences between features, including the intrinsic parameter matrices of the left and right cameras, the distortion coefficient matrices, and the relative-pose matrix between the two cameras.
2) Solve for the homography matrix. For a point (X, Y) on the planar target with image projection (u, v), the homography satisfies
s[u, v, 1]^T = M1[r1 r2 t][X, Y, 1]^T = H[X, Y, 1]^T.
When the planar target provides four or more points, H can be solved, with
λ[h1 h2 h3] = H = M1[r1 r2 t],
where M = M1M2 is the projection matrix, M1 is the intrinsic parameter matrix, and M2 is the extrinsic parameter matrix. From the constraint that the rotation vectors r1 and r2 are orthonormal,
h1^T M1^-T M1^-1 h2 = 0, h1^T M1^-T M1^-1 h1 = h2^T M1^-T M1^-1 h2.
3) Solve for the intrinsic parameters. Let B = (bij) = M1^-T M1^-1, a symmetric matrix, and define the parameter vector b = [b11, b12, b13, b22, b23, b33]^T, so that hi^T B hj = vij^T b, where
vij = [hi1hj1, hi1hj2 + hi2hj1, hi1hj3 + hi3hj1, hi2hj2, hi2hj3 + hi3hj2, hi3hj3]^T.
When n ≥ 3 images are available, the six intrinsic parameters can be determined by solving b* = arg min ||Vb||, after which each parameter of M1 can be recovered.
4) From M1, the extrinsic parameters — the R and T of M2 — can be computed with the scale factor λ = 1/||M1^-1 h1|| = 1/||M1^-1 h2||:
r1 = λM1^-1 h1, r2 = λM1^-1 h2, r3 = r1 × r2, t = λM1^-1 h3.
5) Finally, solve for the radial distortion parameters k1 and k2.
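The extrinsic recovery of sub-step 4) can be checked numerically. The sketch below builds a synthetic homography H = M1[r1 r2 t] from a known pose and recovers the pose with the λ-normalisation; it illustrates only these formulas, not a full Zhang calibration, and the matrix values are made-up example numbers.

```python
import numpy as np

def extrinsics_from_homography(H, M1):
    """Recover r1, r2, r3, t from a plane homography H and intrinsics M1
    via lambda = 1/||M1^-1 h1|| (sub-step 4 of the calibration)."""
    Minv = np.linalg.inv(M1)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(Minv @ h1)
    r1 = lam * Minv @ h1
    r2 = lam * Minv @ h2
    r3 = np.cross(r1, r2)      # rotation columns are orthonormal
    t = lam * Minv @ h3
    return np.column_stack([r1, r2, r3]), t

# Synthetic check: a small rotation about the optical axis plus a translation.
M1 = np.array([[800.0, 0.0, 320.0],
               [0.0, 800.0, 240.0],
               [0.0,   0.0,   1.0]])
a = 0.1
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
t = np.array([0.2, -0.1, 2.0])
H = M1 @ np.column_stack([R[:, 0], R[:, 1], t])
R_rec, t_rec = extrinsics_from_homography(H, M1)
```

In practice H is only known up to scale and noise, so the recovered [r1 r2 r3] is usually re-orthogonalised (e.g. via SVD) before use.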
Step 5 comprises the following sub-steps:
1) The ORB algorithm extracts feature points with the FAST algorithm, ranks them with the Harris corner measure, and keeps the N best corners as feature points. ORB determines the orientation of a feature point by the intensity-centroid method: the neighborhood of the feature point is treated as a patch, the centroid of the patch is computed, the centroid is connected to the feature point, and the angle between this line and the horizontal axis is the orientation of the feature point. The grayscale moments of the patch are
Mpq = Σx Σy x^p y^q I(x, y),
where Mpq is the grayscale moment, I(x, y) is the gray value at point (x, y) of the image, and p, q are the orders of the moment.
2) The centroid is defined as
C = (M10/M00, M01/M00),
where M10 and M01 are first-order moments and M00 is the zero-order moment.
3) Take the direction of the vector OC from the feature point O (the coordinate origin) to the centroid C, keeping x and y within the patch; the orientation angle is
θ = arctan(M01/M10).
4) To provide rotation invariance and noise resistance, ORB takes the n binary-test point pairs (xi, yi) in the patch neighborhood used to generate the feature descriptor and defines a 2×n matrix
S = [x1 x2 … xn; y1 y2 … yn];
using the rotation matrix Rθ of the orientation angle, the rotated (steered) test coordinates are obtained as Sθ = RθS.
5) The ORB algorithm is used for feature-point detection and extraction, and the improved (steered) BRIEF descriptor is used to generate the feature-point descriptors.
6) The FLANN algorithm is used for feature-point matching.
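The intensity-centroid orientation of sub-steps 1)–3) can be sketched in a few lines of NumPy. This is a simplified illustration over a square patch with the keypoint at its center; the function and variable names are ours, not the patent's.

```python
import numpy as np

def orientation(patch):
    """Orientation angle of a square patch by the intensity-centroid method:
    theta = atan2(M01, M10), with moments taken about the patch center."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0        # coordinates relative to the keypoint
    ys -= (h - 1) / 2.0
    m10 = np.sum(xs * patch)   # first-order moments M10, M01
    m01 = np.sum(ys * patch)
    return np.arctan2(m01, m10)

# A patch whose intensity increases along +x has its centroid on the x-axis,
# so the orientation is 0; transposing it (gradient along +y) gives pi/2.
r = 15
xs = np.arange(2 * r + 1, dtype=float)
patch_x = np.tile(xs, (2 * r + 1, 1))      # I(x, y) = x
theta = orientation(patch_x)
```

Using atan2 rather than arctan(M01/M10) keeps the full [−π, π] range of orientations, which is what makes the steered BRIEF tests of sub-step 4) unambiguous.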
Step 6 comprises the following sub-steps:
1) For a distortion-corrected camera, a matching point lies on the epipolar line in the image plane.
2) Compute the difference |yl − yr| between Pl(xl, yl) and Pr(xr, yr) and set a threshold; the smaller the threshold, the higher the demanded precision.
3) Eliminate matching points that exceed the threshold; correct matching points generally lie near the epipolar line.
4) From the remaining matching points, select 4 matching pairs for further screening and optimization, and compute the homography matrix H as the model.
5) Compute the projection error of each point in the data set against the model; if it is below a threshold, add the point to the inlier set I.
6) Judge from the number of inliers whether the current model is the best solution; if so, update the inlier set I and update the iteration count K.
7) If the number of iterations exceeds K, exit; otherwise increment the iteration counter and repeat the above steps. The iteration count is
K = log(1 − p) / log(1 − w^m),
where p is the confidence, generally taken as 0.995; w is the proportion of inliers; and m = 4 is the number of samples required to compute the model.
After the preliminary screening by the epipolar constraint, the inlier proportion w is relatively high, which reduces the number of iterations and the computation time.
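The iteration-count formula of sub-step 7) can be evaluated directly. The sketch below uses the patent's p = 0.995 and m = 4; the inlier ratio w = 0.5 is an assumed example value, and the function name is ours.

```python
import math

def ransac_iterations(p=0.995, w=0.5, m=4):
    """Number of RANSAC iterations K = log(1-p)/log(1-w^m) needed to draw
    at least one all-inlier sample of size m with confidence p."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** m))

k = ransac_iterations()        # w = 0.5 -> w^4 = 0.0625
```

Raising w from 0.5 to 0.8 cuts K by roughly a factor of eight, which is exactly the benefit the epipolar pre-screening provides.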
Step 8 comprises the following sub-steps:
1) Draw the scene on the computer with OpenGL functions to complete the rendering of the three-dimensional graphics.
2) Call the OpenGL drawing functions to finally complete the three-dimensional reconstruction.
The technical solution of the invention acquires and transmits images with a binocular camera mounted on a UAV gimbal and adopts ORB image feature-point detection with FLANN feature-point matching, which speeds up system execution while providing rotation invariance and robustness against noise interference, greatly improving the accuracy of real-time three-dimensional reconstruction.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of the present invention.
Detailed Description
The invention is further described below in conjunction with the accompanying drawing, without thereby limiting the invention.
Fig. 1 shows the three-dimensional reconstruction method based on ORB feature detection: real-time images are acquired and preprocessed by a binocular camera mounted on the UAV gimbal; the camera is calibrated to solve for the intrinsic and extrinsic parameters and the distortion coefficients, which are used to correct the pixel coordinates; image feature points are detected with ORB feature detection and matched with FLANN; the spatial coordinates of the feature points are computed; and OpenGL is used to reconstruct the spatially discrete points in three dimensions. The implementation follows Steps 1–8 and their sub-steps exactly as set out in the Summary of the Invention above. One additional detail concerns the preprocessing of Step 2: a Gaussian filter is very effective at suppressing normally distributed noise, and a two-dimensional Gaussian function is generally used as the smoothing filter:
G(x, y) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)).
The beneficial effect of the technical solution of the invention is that it meets the requirements of three-dimensional reconstruction for a UAV-mounted binocular vision system, ensuring accuracy and stability.
The embodiments of the invention have been described in detail above with reference to the accompanying drawing, but the invention is not limited to the described embodiments. Various changes, modifications, substitutions, and variations made to these embodiments by those skilled in the art without departing from the principle and spirit of the invention still fall within the protection scope of the invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711366005.6A CN108171787A (en) | 2017-12-18 | 2017-12-18 | A kind of three-dimensional rebuilding method based on the detection of ORB features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711366005.6A CN108171787A (en) | 2017-12-18 | 2017-12-18 | A kind of three-dimensional rebuilding method based on the detection of ORB features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108171787A true CN108171787A (en) | 2018-06-15 |
Family
ID=62522364
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711366005.6A Pending CN108171787A (en) | 2017-12-18 | 2017-12-18 | A kind of three-dimensional rebuilding method based on the detection of ORB features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108171787A (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109008909A (en) * | 2018-07-13 | 2018-12-18 | 宜宾学院 | A low-power capsule endoscope image acquisition and 3D reconstruction system |
CN109089100A (en) * | 2018-08-13 | 2018-12-25 | 西安理工大学 | A kind of synthetic method of binocular tri-dimensional video |
CN109086795A (en) * | 2018-06-27 | 2018-12-25 | 上海理工大学 | A kind of accurate elimination method of image mismatch |
CN109191509A (en) * | 2018-07-25 | 2019-01-11 | 广东工业大学 | A kind of virtual binocular three-dimensional reconstruction method based on structure light |
CN109360269A (en) * | 2018-09-30 | 2019-02-19 | 国网黑龙江省电力有限公司电力科学研究院 | Ground 3D Plane Reconstruction Method Based on Computer Vision |
CN109727287A (en) * | 2018-12-27 | 2019-05-07 | 江南大学 | An improved registration method and system for augmented reality |
CN109859314A (en) * | 2019-03-12 | 2019-06-07 | 上海曼恒数字技术股份有限公司 | Three-dimensional rebuilding method, device, electronic equipment and storage medium |
CN109934911A (en) * | 2019-03-15 | 2019-06-25 | 鲁东大学 | OpenGL-based 3D modeling method for high-precision oblique photography on mobile terminals |
CN110009722A (en) * | 2019-04-16 | 2019-07-12 | 成都四方伟业软件股份有限公司 | Three-dimensional rebuilding method and device |
CN110070568A (en) * | 2019-04-28 | 2019-07-30 | 重庆学析优科技有限公司 | A kind of picture antidote and system |
WO2020024211A1 (en) * | 2018-08-02 | 2020-02-06 | 深圳市道通智能航空技术有限公司 | Unmanned aerial vehicle landing method and apparatus, and unmanned aerial vehicle |
CN110909634A (en) * | 2019-11-07 | 2020-03-24 | 深圳市凯迈生物识别技术有限公司 | Visible light and double infrared combined rapid in vivo detection method |
CN111080714A (en) * | 2019-12-13 | 2020-04-28 | 太原理工大学 | Parallel binocular camera calibration method based on three-dimensional reconstruction |
CN111316325A (en) * | 2019-03-08 | 2020-06-19 | 深圳市大疆创新科技有限公司 | Shooting device parameter calibration method, equipment and storage medium |
CN112002016A (en) * | 2020-08-28 | 2020-11-27 | 中国科学院自动化研究所 | Continuous curved surface reconstruction method, system and device based on binocular vision |
CN112288852A (en) * | 2020-10-28 | 2021-01-29 | 华润电力技术研究院有限公司 | Coal yard three-dimensional reconstruction method and system and intelligent control method of thermal power generating unit |
CN112347935A (en) * | 2020-11-07 | 2021-02-09 | 南京天通新创科技有限公司 | Binocular vision SLAM-based automatic driving vehicle positioning method and system |
CN113228104A (en) * | 2018-11-06 | 2021-08-06 | 菲力尔商业系统公司 | Automatic co-registration of thermal and visible image pairs |
WO2021195939A1 (en) * | 2020-03-31 | 2021-10-07 | 深圳市大疆创新科技有限公司 | Calibrating method for external parameters of binocular photographing device, movable platform and system |
CN115359193A (en) * | 2022-10-19 | 2022-11-18 | 南京航空航天大学 | Rapid semi-dense three-dimensional reconstruction method based on binocular fisheye camera |
CN117032276A (en) * | 2023-07-04 | 2023-11-10 | 长沙理工大学 | Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103247075A (en) * | 2013-05-13 | 2013-08-14 | 北京工业大学 | Variational mechanism-based indoor scene three-dimensional reconstruction method |
CN104732518A (en) * | 2015-01-19 | 2015-06-24 | 北京工业大学 | PTAM improvement method based on ground characteristics of intelligent robot |
- 2017-12-18: Application filed in China (CN201711366005.6A); status: Pending
Non-Patent Citations (2)
- Ding Jieqiong, "Research on SLAM Algorithms Based on RGB-D", Wanfang Database.
- Cao Huimin, "Research on Three-Dimensional Reconstruction Technology for Target Recognition", Wanfang Database.
CN112002016B (en) * | 2020-08-28 | 2024-01-26 | 中国科学院自动化研究所 | Continuous curved surface reconstruction method, system and device based on binocular vision |
CN112288852A (en) * | 2020-10-28 | 2021-01-29 | 华润电力技术研究院有限公司 | Coal yard three-dimensional reconstruction method and system and intelligent control method of thermal power generating unit |
CN112347935A (en) * | 2020-11-07 | 2021-02-09 | 南京天通新创科技有限公司 | Binocular vision SLAM-based automatic driving vehicle positioning method and system |
CN112347935B (en) * | 2020-11-07 | 2021-11-02 | 的卢技术有限公司 | Binocular vision SLAM-based automatic driving vehicle positioning method and system |
CN115359193A (en) * | 2022-10-19 | 2022-11-18 | 南京航空航天大学 | Rapid semi-dense three-dimensional reconstruction method based on binocular fisheye camera |
CN115359193B (en) * | 2022-10-19 | 2023-01-31 | 南京航空航天大学 | A fast semi-dense 3D reconstruction method based on binocular fisheye camera |
CN117032276A (en) * | 2023-07-04 | 2023-11-10 | 长沙理工大学 | Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle |
CN117032276B (en) * | 2023-07-04 | 2024-06-25 | 长沙理工大学 | Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108171787A (en) | A kind of three-dimensional rebuilding method based on the detection of ORB features | |
CN107292965B (en) | Virtual and real shielding processing method based on depth image data stream | |
CN102779347B (en) | Method and device for tracking and locating target for aircraft | |
CN106934809B (en) | Unmanned aerial vehicle aerial autonomous refueling rapid docking navigation method based on binocular vision | |
CN110956661B (en) | Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix | |
CN102088569B (en) | Sequence image splicing method and system of low-altitude unmanned vehicle | |
CN110799921A (en) | Filming method, device and drone | |
CN107953329B (en) | Object recognition and attitude estimation method, device and robotic arm grasping system | |
CN109961417B (en) | Image processing method, image processing apparatus, and mobile apparatus control method | |
CN110969667A (en) | Multi-spectrum camera external parameter self-correction algorithm based on edge features | |
CN107063228A (en) | Targeted attitude calculation method based on binocular vision | |
CN113191954B (en) | Panoramic image stitching method based on binocular camera | |
CN111027415B (en) | Vehicle detection method based on polarization image | |
CN106530239B (en) | Low-altitude tracking method for small unmanned rotorcraft moving target based on large field of view bionic fisheye | |
WO2018216341A1 (en) | Information processing device, information processing method, and program | |
WO2020114433A1 (en) | Depth perception method and apparatus, and depth perception device | |
WO2023279584A1 (en) | Target detection method, target detection apparatus, and robot | |
CN109325913A (en) | Unmanned plane image split-joint method and device | |
CN118135526A (en) | Visual target recognition and positioning method for four-rotor unmanned aerial vehicle based on binocular camera | |
CN111383264B (en) | Positioning method, positioning device, terminal and computer storage medium | |
CN116433843A (en) | Three-dimensional model reconstruction method and device based on binocular vision reconstruction route | |
CN116309844A (en) | Three-dimensional measurement method based on single aviation picture of unmanned aerial vehicle | |
CN107743201A (en) | A fast mosaic method and device for power line corridor consumer grade digital camera | |
CN117333686A (en) | Target positioning method, device, equipment and medium | |
CN111047636B (en) | Obstacle avoidance system and obstacle avoidance method based on active infrared binocular vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180615 |