CN108317953A - Binocular-vision target-surface 3D detection method and system based on an unmanned aerial vehicle - Google Patents
Binocular-vision target-surface 3D detection method and system based on an unmanned aerial vehicle
- Publication number
- CN108317953A CN108317953A CN201810081160.1A CN201810081160A CN108317953A CN 108317953 A CN108317953 A CN 108317953A CN 201810081160 A CN201810081160 A CN 201810081160A CN 108317953 A CN108317953 A CN 108317953A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C39/00—Aircraft not otherwise provided for
- B64C39/02—Aircraft not otherwise provided for characterised by special use
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/002—Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
Abstract
The invention discloses a UAV-based binocular-vision 3D detection system for target surfaces, comprising an unmanned aerial vehicle (UAV), a binocular vision system, a computer system, and a remote-control device. The detection method is as follows: the binocular vision system carried by the UAV traverses the target and collects image data of its surface; disparity information between the left and right images is obtained by feature extraction and matching; the 3D coordinates of image points are computed from the camera's intrinsic and extrinsic parameters, yielding a local 3D point cloud of the target; by extracting and matching features between the 3D point clouds of adjacent left-camera images, the transformation between the two point-cloud coordinate systems is obtained, enabling pairwise registration of the point clouds and finally a 3D point cloud of the entire target surface; this is compared against the original 3D surface data of the target to perform the detection. The invention applies to surface inspection of large targets such as large industrial equipment, and is simple to operate and efficient.
Description
Technical Field
The invention relates to the technical field of visual measurement, and in particular to a UAV-based binocular-vision method and system for 3D detection of target surfaces.
Background
For high-precision 3D reconstruction of a large target's surface, a fixed sensor cannot perceive the whole surface. The only option is to scan with a moving sensor, capturing the surface block by block in a dynamic coordinate system and then stitching the blocks into the final model. Image-based 3D reconstruction and the stitching of the reconstructed 3D data therefore play a central role in this application. At present there is little research on 3D visual reconstruction of large target surfaces, and even less on surface inspection of complex large equipment. Results in this area would therefore greatly advance industrial informatization, production efficiency, and production safety.
UAVs fly flexibly, are inexpensive, and offer long endurance, a large control range, and a useful payload, which makes it feasible to mount a binocular camera on a UAV for surface inspection of large targets such as boilers, reaction towers, and distillation columns.
Existing research on binocular-vision 3D detection is as follows:
Application CN105716530A, "A method for measuring the geometric dimensions of vehicles based on binocular stereo vision", captures vehicle-scene images with cameras in five fixed directions, computes the coordinate difference of the same pixel in two scene images to obtain the depth of the scene points, and from the depth derives the vehicle's dimensions. Because the cameras are fixed in five positions, the method can only capture relatively small, movable objects at a fixed site; for a large installation fixed outdoors, a suitable mounting position may not exist, the setup is time-consuming and laborious, and the overall cost rises.
Airborne 3D surveying of large fixed targets currently relies mainly on a mounted 3D lidar that scans local point clouds, combined with the UAV pose to complete the mapping. Publication CN106443705A, "Airborne lidar measurement system and method", performs 3D surveying in this way, but 3D lidar is expensive, heavy, and demanding of UAV payload, so it cannot be widely deployed on small UAVs and raises system cost.
Summary of the Invention
To solve the above problems, the invention provides a UAV-based binocular-vision method and system for 3D detection of target surfaces.
To achieve this objective, the invention adopts the following technical scheme:
A UAV-based binocular-vision 3D detection system for target surfaces, comprising:
a UAV, which uses the binocular vision system mounted on its underside to collect image data of the target surface in sequence, bottom to top and left to right;
a computer system, which processes the images collected by the binocular vision system (feature extraction, feature matching, depth recovery, 3D-coordinate computation, 3D point-cloud stitching, and so on) and outputs the detection result;
a remote-control device, which controls the UAV's flight path and switches the binocular vision system on and off through a wireless transmission module.
The UAV comprises:
a battery pack, which powers the UAV's communication, image-acquisition, and propulsion equipment;
an onboard control unit, which receives commands from the ground control center;
an attitude measurement system, which measures the UAV's pose in real time and records the pose at every instant;
a WiFi/wireless transmission module, which communicates wirelessly with the remote-control device and receives its control commands;
a binocular vision system, fixed to the UAV with its cameras facing the target surface, which captures the target images.
The binocular vision system comprises:
a pose-measurement device, which measures the pose of the binocular vision system;
left and right cameras, which collect image data of the target;
memory cards, which store the image data collected by the left and right cameras respectively.
The two cameras of the binocular vision system are mounted in parallel and have identical parameters.
The invention also provides a UAV-based binocular-vision method for 3D detection of target surfaces, comprising the following steps:
Step 1: control the UAV through the remote-control device to collect images along a predetermined trajectory; after collection, import the images from the memory cards into the computer for processing.
Step 2: number the images collected by the airborne binocular vision system in chronological order, forming a sequence of left-right image pairs.
Step 3: calibrate the cameras to obtain the intrinsic and extrinsic parameters, including the focal length f and the baseline distance T.
Step 4: denoise the collected images and adjust the image pose.
Step 5: extract and match feature points between the left and right images taken at the same instant, derive the disparity, and compute the 3D coordinates of the image points, obtaining a local 3D point cloud.
Step 6: register the 3D point clouds of adjacent left-camera images so that the two clouds share one coordinate system, obtaining a 3D point-cloud model of the entire target surface.
Step 7: compare the resulting 3D point-cloud model with the factory surface data supplied for the large equipment and detect any deviation.
In step 4, the image pose is corrected by an affine transformation using the UAV's recorded pose information.
Step 5 specifically comprises the following steps:
S51: extract SURF feature points. Around each feature point take a 4x4 grid of square sub-regions; in each sub-region accumulate the Haar-wavelet responses of 25 pixels into four values: the sum of horizontal responses, the sum of vertical responses, the sum of absolute horizontal responses, and the sum of absolute vertical responses. These four values form the feature vector of each sub-region. Match quality is measured by the Euclidean distance between two feature descriptors; the shorter the distance, the better the match. The disparity is then obtained from the matched feature points.
S52: from the disparity d = x_r - x_l and the camera's intrinsic and extrinsic parameters, compute the 3D coordinates of the point as

Z = \frac{fT}{d}, \quad X = \frac{x_l Z}{f}, \quad Y = \frac{y_l Z}{f}

where (x_l, y_l) are the point's coordinates in the left image, f is the focal length, and T the baseline distance.
The 3D point-cloud registration of step 6 comprises the following steps:
S1: build a K-neighborhood for every point in the adjacent point-cloud coordinate systems.
S2: extract feature points, build a feature description for each, and match feature points between adjacent point clouds.
S3: from the matched point pairs, compute the transformation between the two point-cloud coordinate systems, completing the initial registration.
S4: starting from the initial pose provided by the initial registration, refine with the iterative closest point (ICP) algorithm to obtain the precise transformation between the adjacent point-cloud coordinate systems, unifying them into one coordinate system.
In step S1, a KD-tree is used to accelerate the K-neighborhood search.
In step S2, the normal vector at each point is estimated by principal component analysis, and feature points are extracted where the normal changes strongly. Based on the geometry between a feature point and its neighbors, several features are chosen to describe it; the features of the feature point and of the points in its K-neighborhood are accumulated into a histogram, giving a feature vector. Matching uses the distance between feature vectors: the pair of points with the smallest feature-vector distance is taken as a matching pair.
In step S3, the quaternion method is used to obtain the transformation matrix between adjacent point-cloud coordinate systems, as follows.
S31: compute the centroids of the source point set P and the target point set Q:

\mu_P = \frac{1}{N_P}\sum_{i=1}^{N_P} P_i, \quad \mu_Q = \frac{1}{N_Q}\sum_{i=1}^{N_Q} Q_i

where \mu_P and \mu_Q are the 3D centroids of P and Q; N_P and N_Q are the numbers of points in P and Q; and P_i, Q_i are the 3D coordinates of the i-th points of P and Q.
S32: construct the cross-covariance matrix of the point sets P and Q:

\Sigma_{P,Q} = \frac{1}{N_P}\sum_{i=1}^{N_P} (P_i - \mu_P)(Q_i - \mu_Q)^T
S33: from the covariance matrix construct the symmetric 4x4 matrix

Q(\Sigma_{P,Q}) = \begin{bmatrix} \mathrm{tr}(\Sigma_{P,Q}) & \Delta^T \\ \Delta & \Sigma_{P,Q} + \Sigma_{P,Q}^T - \mathrm{tr}(\Sigma_{P,Q}) I_3 \end{bmatrix}

where I_3 is the 3x3 identity matrix, \mathrm{tr}(\Sigma_{P,Q}) is the trace of \Sigma_{P,Q}, and

\Delta = [A_{23}\ A_{31}\ A_{12}]^T, \quad A_{i,j} = (\Sigma_{P,Q} - \Sigma_{P,Q}^T)_{ij}
S34: compute the eigenvalues and eigenvectors of Q(\Sigma_{P,Q}); the unit eigenvector corresponding to the largest eigenvalue is the sought rotation quaternion

q_R = [q_0\ q_1\ q_2\ q_3]^T
S35: from the rotation quaternion q_R compute the rotation matrix R, and from it the optimal translation vector

q_T = \mu_Q - R\,\mu_P
In step S4, two adjacent point-cloud coordinate systems X_{i-1} and X_i are unified into the same coordinate system by

X_{i-1} = X_i R + T

where R is the rotation matrix and T is the translation matrix.
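To make the unification step concrete: under the row-vector convention of the formula above, mapping one cloud into the other's frame is a single matrix product plus an offset. A minimal numpy sketch, where R and T are made-up example values rather than outputs of the registration:

```python
import numpy as np

# Example rotation (90 degrees about z) and translation; in the method these
# come from steps S31-S35, here they are illustrative values.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([1.0, 2.0, 3.0])

# Points of cloud X_i, one 3D point per row (row-vector convention, matching
# X_{i-1} = X_i R + T).
X_i = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

# Map cloud X_i into the coordinate system of cloud X_{i-1}.
X_prev = X_i @ R + T
print(X_prev)
```

Note that with points stored as rows, `X_i @ R` applies the transpose of the column-vector rotation; this matches the formula exactly as written.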
The invention has the following beneficial effects:
(1) Using a binocular camera as the 3D detection sensor is lighter than a lidar-based approach, allows a small UAV to serve as the carrier, and lowers the cost of equipment procurement.
(2) Current cameras achieve sufficient precision for the detection task, and existing matching and reconstruction techniques are mature enough to meet the technical requirements.
(3) The airborne binocular camera system is flexible and convenient to operate, simple to use, and suitable for 3D surface inspection under a wide range of conditions.
Brief Description of the Drawings
Fig. 1 is the system structure diagram of the UAV-based binocular-vision 3D surface detection system of an embodiment of the invention.
Fig. 2 is a schematic diagram of the flight route in an embodiment of the invention.
Fig. 3 is the detection flow chart of the UAV-based binocular-vision 3D surface detection system of an embodiment of the invention.
Fig. 4 is the depth-extraction schematic of the binocular camera in an embodiment of the invention.
In the figures: 1 - UAV; 2 - battery pack; 3 - onboard control unit; 4 - WiFi or wireless transmission module; 5 - binocular camera; 6 - fixed connection structure; 7 - attitude detection system.
Detailed Description
To make the objectives and advantages of the invention clearer, the invention is described in further detail below with reference to embodiments. It should be understood that the specific embodiments described here only explain the invention and do not limit it.
As shown in Fig. 1, an embodiment of the invention provides a UAV-based binocular-vision 3D detection system for target surfaces. The 3D surface detection method builds on binocular stereo 3D reconstruction and an estimate of the UAV pose; a software algorithm folds the pose correction into the reconstruction, recovers the depth of the target, and produces the surface detection result. The system comprises the UAV, a battery pack, an onboard control unit, a WiFi or wireless transmission module, a binocular camera with memory cards, and a fixed connection mechanism attaching the camera to the underside of the UAV. In operation, the battery pack powers the UAV's electronics, and the operator sends commands through the remote-control device over the WiFi or wireless link to the onboard control unit, controlling the UAV's climb, turns, and flight path as well as starting and stopping the cameras.
In practice, the camera is positioned a suitable distance from the target and its field of view is measured. Based on the horizontal extent of the field of view, the target is divided into several vertical strips from the bottom up, and the rotorcraft is flown vertically up one strip at a time, with the climb distance between shots set by the vertical extent of the field of view. The flight path, shown in Fig. 2, runs bottom to top, shifts sideways to the next strip, then top to bottom, until the entire target surface is covered. Shooting then stops, the aircraft returns to the ground, the memory cards are removed, and the images are processed and reconstructed in 3D to obtain the surface inspection result.
As shown in Fig. 3, the UAV-based visual 3D surface detection method of this example comprises the following steps.
Step 1: control the UAV through the remote-control device to collect images along the predetermined trajectory. After collection, import the pictures from the left and right cameras into the computer and number them, then associate the recorded UAV pose with the picture captured at each instant. Correct the images by the affine-transformation principle, mapping the left and right epipoles to infinity to achieve row alignment.
Step 2: extract SURF feature points from all images.
Step 3: match the SURF feature points within each left-right image pair, as follows.
In the SURF algorithm, a 4x4 grid of square sub-regions is taken around each feature point, oriented along the feature point's dominant direction. Each sub-region accumulates the Haar-wavelet responses of 25 pixels in the horizontal and vertical directions, both measured relative to the dominant direction. The four accumulated values per sub-region are the sum of horizontal responses, the sum of vertical responses, the sum of absolute horizontal responses, and the sum of absolute vertical responses; with these four values as the feature vector of each sub-region, the descriptor has 4x4x4 = 64 dimensions.
SURF determines match quality by the Euclidean distance between two feature descriptors; the shorter the distance, the better the match. For a keypoint in the left image, find the two nearest keypoints in the right image; if the ratio of the nearest distance to the second-nearest distance is below a threshold, the nearest candidate is accepted as the matching point.
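The distance-ratio test just described can be sketched in a few lines of numpy. The descriptors below are tiny made-up vectors standing in for the 64-dimensional SURF descriptors, and the function name and the 0.8 ratio are illustrative choices, not part of the patent text:

```python
import numpy as np

def ratio_test_match(desc_left, desc_right, ratio=0.8):
    """Match descriptors by Euclidean distance with a nearest/second-nearest
    ratio test. desc_left, desc_right: (N, D) descriptor arrays.
    Returns a list of (i_left, i_right) index pairs."""
    matches = []
    for i, d in enumerate(desc_left):
        dists = np.linalg.norm(desc_right - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # Accept only unambiguous matches: nearest distance clearly smaller
        # than the second-nearest distance.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Toy descriptors (made up for illustration; real ones come from SURF).
left = np.array([[1.0, 0.0], [0.0, 1.0]])
right = np.array([[0.0, 1.1], [0.9, 0.1], [5.0, 5.0]])
print(ratio_test_match(left, right))  # [(0, 1), (1, 0)]
```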
Step 4: depth computation. As shown in Fig. 4, similar triangles give the depth as

Z = \frac{fT}{x_r - x_l}

where x_r - x_l is defined as the disparity. With the cameras placed in parallel, the depth Z is inversely proportional to the disparity x_r - x_l, and fT is a fixed constant.
The 3D coordinates of the point then follow as

X = \frac{x_l Z}{f}, \quad Y = \frac{y_l Z}{f}, \quad Z = \frac{fT}{x_r - x_l}

with (x_l, y_l) the point's coordinates in the left image.
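The triangulation above reduces to a few arithmetic operations. A minimal sketch, keeping the text's disparity convention d = x_r - x_l; the numeric camera parameters are made-up example values:

```python
def triangulate(x_l, y_l, x_r, f, T):
    """Recover (X, Y, Z) for a matched pixel pair under the parallel-camera
    model: Z = f*T/d with disparity d = x_r - x_l, X = x_l*Z/f, Y = y_l*Z/f."""
    d = x_r - x_l
    Z = f * T / d
    return x_l * Z / f, y_l * Z / f, Z

# Example: focal length 1000 px, baseline 0.5 m, disparity 50 px.
X, Y, Z = triangulate(x_l=100.0, y_l=200.0, x_r=150.0, f=1000.0, T=0.5)
print(X, Y, Z)  # 1.0 2.0 10.0
```

Note how the depth scales: halving the disparity doubles Z, which is why disparity errors matter most for distant surface patches.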
Step 5: register the 3D point clouds of adjacent left-camera images so that the two clouds share one coordinate system, yielding a 3D point-cloud model of the entire target surface. The concrete steps are as follows.
(1): search the K-neighborhood of every point in the cloud with the KD-tree method, establishing each point's local neighborhood.
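Step (1) maps directly onto an off-the-shelf KD-tree. A sketch using scipy.spatial.cKDTree on a synthetic cloud (k = 8 is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))  # synthetic 3D point cloud

tree = cKDTree(cloud)
k = 8
# Query k+1 neighbors for every point: the first neighbor of each point is
# the point itself, at distance 0.
dists, idx = tree.query(cloud, k=k + 1)
neighborhoods = idx[:, 1:]  # drop the self-match, keep the K-neighborhood
print(neighborhoods.shape)  # (1000, 8)
```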
(2): estimate each point's normal vector by principal component analysis and extract feature points where the normal changes strongly. Based on the geometry between a feature point and its neighbors, choose several features to describe it; accumulate the features of the feature point and of the points in its K-neighborhood into a histogram, giving a feature vector. Match on the distance between feature vectors: the pair of points with the smallest feature-vector distance is the matching pair.
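The PCA normal estimate in step (2) is the eigenvector of the neighborhood covariance with the smallest eigenvalue. A minimal numpy sketch on a synthetic planar neighborhood whose true normal is the z axis (the function name is an illustrative choice):

```python
import numpy as np

def pca_normal(neighbors):
    """Estimate the surface normal at a point from its K-neighborhood: the
    eigenvector of the neighborhood covariance matrix that belongs to the
    smallest eigenvalue (the direction of least spread)."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # smallest-eigenvalue eigenvector

# Synthetic neighborhood lying in the z = 0 plane; its normal is +/- z.
rng = np.random.default_rng(1)
patch = np.column_stack([rng.random(30), rng.random(30), np.zeros(30)])
n = pca_normal(patch)
print(n)  # parallel to the z axis (the sign is arbitrary)
```

The normal's sign is inherently ambiguous; pipelines usually orient normals consistently afterwards, for example toward the sensor.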
(3): from the matched point pairs, compute the transformation between the two point-cloud coordinate systems, completing the initial registration. The computation proceeds as follows.
S31: compute the centroids of the source point set P and the target point set Q:

\mu_P = \frac{1}{N_P}\sum_{i=1}^{N_P} P_i, \quad \mu_Q = \frac{1}{N_Q}\sum_{i=1}^{N_Q} Q_i

where \mu_P and \mu_Q are the 3D centroids of P and Q; N_P and N_Q are the numbers of points in P and Q; and P_i, Q_i are the 3D coordinates of the i-th points of P and Q.
S32: construct the cross-covariance matrix of the point sets P and Q:

\Sigma_{P,Q} = \frac{1}{N_P}\sum_{i=1}^{N_P} (P_i - \mu_P)(Q_i - \mu_Q)^T
S33: from the covariance matrix construct the symmetric 4x4 matrix

Q(\Sigma_{P,Q}) = \begin{bmatrix} \mathrm{tr}(\Sigma_{P,Q}) & \Delta^T \\ \Delta & \Sigma_{P,Q} + \Sigma_{P,Q}^T - \mathrm{tr}(\Sigma_{P,Q}) I_3 \end{bmatrix}

where I_3 is the 3x3 identity matrix, \mathrm{tr}(\Sigma_{P,Q}) is the trace of \Sigma_{P,Q}, and

\Delta = [A_{23}\ A_{31}\ A_{12}]^T, \quad A_{i,j} = (\Sigma_{P,Q} - \Sigma_{P,Q}^T)_{ij}

S34: compute the eigenvalues and eigenvectors of Q(\Sigma_{P,Q}); the unit eigenvector corresponding to the largest eigenvalue is the sought rotation quaternion q_R = [q_0\ q_1\ q_2\ q_3]^T.
S35: from the rotation quaternion q_R compute the rotation matrix R, and from it the optimal translation vector

q_T = \mu_Q - R\,\mu_P
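Steps S31 to S35 can be written out directly in numpy, following the formulas above: centroids, cross-covariance, the 4x4 symmetric matrix, its largest-eigenvalue eigenvector as a quaternion, and the closed-form translation. The point sets below are synthetic, related by a known rotation and translation; the scalar-first quaternion-to-matrix conversion is the standard one:

```python
import numpy as np

def quaternion_align(P, Q):
    """Closed-form rigid registration (steps S31-S35): returns R and T such
    that Q_i ~ R @ P_i + T for matched point sets P (source) and Q (target)."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)              # S31: centroids
    S = (P - mu_p).T @ (Q - mu_q) / len(P)                   # S32: cross-covariance
    A = S - S.T                                              # S33: build Q(Sigma)
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    Q4 = np.zeros((4, 4))
    Q4[0, 0] = np.trace(S)
    Q4[0, 1:] = Q4[1:, 0] = delta
    Q4[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, v = np.linalg.eigh(Q4)                                # S34: eigen-decomposition
    q0, q1, q2, q3 = v[:, -1]  # unit eigenvector of the largest eigenvalue
    R = np.array([                                           # S35: quaternion -> rotation
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])
    return R, mu_q - R @ mu_p                                # optimal translation

# Synthetic check: rotate a point set by 90 degrees about z and shift it.
P = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true
R_est, T_est = quaternion_align(P, Q)
print(np.allclose(R_est, R_true), np.allclose(T_est, t_true))  # True True
```

Because a quaternion and its negation encode the same rotation, the sign of the eigenvector returned by `eigh` does not affect the recovered R.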
(4): starting from the initial pose provided by the initial registration, refine with the iterative closest point algorithm to obtain the precise transformation between the adjacent point-cloud coordinate systems, unifying them into one coordinate system.
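A compact version of the refinement in (4): repeatedly match each source point to its nearest target point with a KD-tree, solve the closed-form rigid alignment, and apply it. The per-iteration alignment here uses the SVD (Kabsch) form, which is equivalent to the quaternion solution; the regular grid cloud and the small perturbation are illustrative choices standing in for two nearly registered local clouds:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit(P, Q):
    """Closed-form rigid transform (SVD form, equivalent to the quaternion
    method) mapping P onto Q: returns R, t with Q ~ P @ R.T + t."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mu_p).T @ (Q - mu_q)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_q - R @ mu_p

def icp(source, target, iters=10):
    """Point-to-point ICP: nearest-neighbor correspondences, then the
    closed-form alignment, iterated from the initial registration."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)          # nearest target for each source point
        R, t = best_fit(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic target: a regular 5x5x5 grid; source: the same cloud slightly
# rotated about z and shifted, as if the initial registration were almost right.
g = np.linspace(0.0, 0.8, 5)
target = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
c = target.mean(axis=0)
a = 0.02  # radians
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
source = (target - c) @ Rz.T + c + np.array([0.005, 0.005, 0.005])
R_est, t_est = icp(source, target)
residual = np.linalg.norm(source @ R_est.T + t_est - target)
print(residual)  # near machine precision: the clouds align exactly
```

The perturbation is kept well below half the grid spacing so every nearest-neighbor match is already correct; real clouds need the feature-based initial registration above precisely to get into this basin of convergence.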
The above is only a preferred embodiment of the invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principle of the invention, and such improvements and refinements also fall within the scope of protection of the invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810081160.1A CN108317953A (en) | 2018-01-19 | 2018-01-19 | A kind of binocular vision target surface 3D detection methods and system based on unmanned plane |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810081160.1A CN108317953A (en) | 2018-01-19 | 2018-01-19 | A kind of binocular vision target surface 3D detection methods and system based on unmanned plane |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108317953A true CN108317953A (en) | 2018-07-24 |
Family
ID=62887290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810081160.1A Pending CN108317953A (en) | 2018-01-19 | 2018-01-19 | A kind of binocular vision target surface 3D detection methods and system based on unmanned plane |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108317953A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105043350A (en) * | 2015-06-25 | 2015-11-11 | 闽江学院 | Binocular vision measuring method |
CN105184863A (en) * | 2015-07-23 | 2015-12-23 | 同济大学 | Unmanned aerial vehicle aerial photography sequence image-based slope three-dimension reconstruction method |
US20170186164A1 (en) * | 2015-12-29 | 2017-06-29 | Government Of The United States As Represented By The Secretary Of The Air Force | Method for fast camera pose refinement for wide area motion imagery
CN105928493A (en) * | 2016-04-05 | 2016-09-07 | 王建立 | Binocular vision three-dimensional mapping system and method based on UAV |
CN106500669A (en) * | 2016-09-22 | 2017-03-15 | 浙江工业大学 | A kind of Aerial Images antidote based on four rotor IMU parameters |
CN106796728A (en) * | 2016-11-16 | 2017-05-31 | 深圳市大疆创新科技有限公司 | Generate method, device, computer system and the mobile device of three-dimensional point cloud |
CN107316325A (en) * | 2017-06-07 | 2017-11-03 | 华南理工大学 | A kind of airborne laser point cloud based on image registration and Image registration fusion method |
Non-Patent Citations (4)
Title |
---|
HERBERT BAY, ET AL.: "Speeded-Up Robust Features (SURF)", Computer Vision and Image Understanding *
XIANG GAO: "A Mosaic Method on Small Images of Unmanned Aerial Vehicle", 2nd Workshop on Advanced Research and Technology in Industry Applications *
ZHANG MENG: "Point Cloud Registration Technology Based on an Improved ICP Algorithm", China Master's Theses Full-text Database, Engineering Science and Technology II *
TAO HAIJI et al.: "An Automatic Point Cloud Registration Method Based on Normal Vectors", Chinese Journal of Lasers *
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109145905A (en) * | 2018-08-29 | 2019-01-04 | 河海大学常州校区 | A kind of transmission line fitting detection method based on visual saliency |
CN109242898B (en) * | 2018-08-30 | 2022-03-22 | 华强方特(深圳)电影有限公司 | Three-dimensional modeling method and system based on image sequence |
CN109242898A (en) * | 2018-08-30 | 2019-01-18 | 华强方特(深圳)电影有限公司 | A kind of three-dimensional modeling method and system based on image sequence |
CN109341588A (en) * | 2018-10-08 | 2019-02-15 | 西安交通大学 | A three-dimensional profile measurement method with binocular structured light three-system method viewing angle weighting |
CN109597087B (en) * | 2018-11-15 | 2022-07-01 | 天津大学 | A 3D object detection method based on point cloud data |
CN109597087A (en) * | 2018-11-15 | 2019-04-09 | 天津大学 | A kind of 3D object detection method based on point cloud data |
CN110058594A (en) * | 2019-04-28 | 2019-07-26 | 东北大学 | The localization for Mobile Robot navigation system and method for multisensor based on teaching |
CN110120013A (en) * | 2019-05-15 | 2019-08-13 | 深圳市凌云视迅科技有限责任公司 | A kind of point cloud splicing method and device |
CN110120013B (en) * | 2019-05-15 | 2023-10-20 | 深圳市凌云视迅科技有限责任公司 | Point cloud splicing method and device |
CN110222382A (en) * | 2019-05-22 | 2019-09-10 | 成都飞机工业(集团)有限责任公司 | A kind of aircraft axes Optimal Fitting method |
CN110222382B (en) * | 2019-05-22 | 2023-04-18 | 成都飞机工业(集团)有限责任公司 | Aircraft coordinate system optimization fitting method |
CN110155369A (en) * | 2019-05-29 | 2019-08-23 | 中国民航大学 | A method for inspecting aircraft skin surface cracks |
CN110155369B (en) * | 2019-05-29 | 2022-05-17 | 中国民航大学 | Method for checking surface cracks of aircraft skin |
WO2021046716A1 (en) * | 2019-09-10 | 2021-03-18 | 深圳市大疆创新科技有限公司 | Method, system and device for detecting target object and storage medium |
CN110717936B (en) * | 2019-10-15 | 2023-04-28 | 哈尔滨工业大学 | An Image Stitching Method Based on Camera Pose Estimation |
CN110717936A (en) * | 2019-10-15 | 2020-01-21 | 哈尔滨工业大学 | Image stitching method based on camera attitude estimation |
CN110779933A (en) * | 2019-11-12 | 2020-02-11 | 广东省智能机器人研究院 | Surface point cloud data acquisition method and system based on 3D visual sensing array |
CN110992291B (en) * | 2019-12-09 | 2023-07-21 | 国网安徽省电力有限公司超高压分公司 | Distance measurement method, system and storage medium based on trinocular vision |
CN110992291A (en) * | 2019-12-09 | 2020-04-10 | 国网安徽省电力有限公司检修分公司 | Distance measuring method, system and storage medium based on trinocular vision |
CN111462213A (en) * | 2020-03-16 | 2020-07-28 | 天目爱视(北京)科技有限公司 | Equipment and method for acquiring 3D coordinates and dimensions of object in motion process |
CN111784680B (en) * | 2020-07-06 | 2022-06-28 | 天津大学 | Detection method of key point consistency based on left and right eye views of binocular camera |
CN111784680A (en) * | 2020-07-06 | 2020-10-16 | 天津大学 | Detection method of key point consistency based on left and right eye views of binocular camera |
CN112268548A (en) * | 2020-12-14 | 2021-01-26 | 成都飞机工业(集团)有限责任公司 | Airplane local appearance measuring method based on binocular vision |
CN114820777A (en) * | 2021-03-24 | 2022-07-29 | 北京大成国测科技有限公司 | Unmanned aerial vehicle three-dimensional data front-end processing method and device and unmanned aerial vehicle |
CN113763562A (en) * | 2021-08-31 | 2021-12-07 | 哈尔滨工业大学(威海) | Elevation feature detection and elevation feature processing method based on binocular vision |
CN113763562B (en) * | 2021-08-31 | 2023-08-29 | 哈尔滨工业大学(威海) | Facade Feature Detection and Facade Feature Processing Method Based on Binocular Vision |
CN114018158A (en) * | 2021-11-02 | 2022-02-08 | 中国大唐集团科技工程有限公司 | Non-contact three-dimensional thermal displacement detection system and application thereof |
CN114396921A (en) * | 2021-11-15 | 2022-04-26 | 中国计量大学 | Method for measuring tidal bore height and propagation speed of the Qiantang River based on unmanned aerial vehicle |
CN114396921B (en) * | 2021-11-15 | 2023-12-08 | 中国计量大学 | Method for measuring tidal bore height and propagation speed of the Qiantang River based on unmanned aerial vehicle |
CN115236643A (en) * | 2022-06-27 | 2022-10-25 | 中国电信股份有限公司 | Sensor calibration method, system, device, electronic equipment and medium |
CN115236643B (en) * | 2022-06-27 | 2024-09-03 | 中国电信股份有限公司 | Sensor calibration method, system, device, electronic equipment and medium |
CN115493515A (en) * | 2022-09-05 | 2022-12-20 | 泰州市创新电子有限公司 | Binocular vision three-dimensional measurement method |
CN116128955A (en) * | 2022-12-30 | 2023-05-16 | 中国电信股份有限公司卫星通信分公司 | Unmanned aerial vehicle monitoring method and device, electronic equipment and storage medium |
CN116045833A (en) * | 2023-01-03 | 2023-05-02 | 中铁十九局集团有限公司 | Bridge construction deformation monitoring system based on big data |
CN116045833B (en) * | 2023-01-03 | 2023-12-22 | 中铁十九局集团有限公司 | Bridge construction deformation monitoring system based on big data |
CN115953605B (en) * | 2023-03-14 | 2023-06-06 | 深圳中集智能科技有限公司 | Machine vision multi-target image coordinate matching method |
CN115953605A (en) * | 2023-03-14 | 2023-04-11 | 深圳中集智能科技有限公司 | Machine vision multi-target image coordinate matching method |
CN116754039B (en) * | 2023-08-16 | 2023-10-20 | 四川吉埃智能科技有限公司 | Method for detecting earthwork volume in ground pits |
CN116754039A (en) * | 2023-08-16 | 2023-09-15 | 四川吉埃智能科技有限公司 | Method for detecting earthwork volume in ground pits |
CN117491355A (en) * | 2023-11-06 | 2024-02-02 | 广州航海学院 | A visual detection method for the wear amount of three-dimensional curved surfaces of large components of rake teeth |
CN117491355B (en) * | 2023-11-06 | 2024-07-02 | 广州航海学院 | Visual detection method for abrasion loss of three-dimensional curved surface of rake teeth type large component |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108317953A (en) | A kind of binocular vision target surface 3D detection methods and system based on unmanned plane | |
US10311297B2 (en) | Determination of position from images and associated camera positions | |
CN107314762B (en) | Method for detecting ground object distance below power line based on monocular sequence images of unmanned aerial vehicle | |
CN109579843A (en) | Multi-robot cooperative localization and fusion mapping method under air-ground multi-view | |
CN104501779A (en) | High-accuracy target positioning method of unmanned plane on basis of multi-station measurement | |
CN107808407A (en) | Unmanned plane vision SLAM methods, unmanned plane and storage medium based on binocular camera | |
CN106529538A (en) | Method and device for positioning aircraft | |
CN107560603B (en) | Unmanned aerial vehicle oblique photography measurement system and measurement method | |
CN112419374A (en) | A UAV Localization Method Based on Image Registration | |
CN106096207B (en) | A kind of rotor wing unmanned aerial vehicle wind resistance appraisal procedure and system based on multi-vision visual | |
CN104268935A (en) | Feature-based airborne laser point cloud and image data fusion system and method | |
CN109739254A (en) | An unmanned aerial vehicle using visual image positioning in electric power inspection and its positioning method | |
CN108780577A (en) | Image processing method and equipment | |
CN107677274A (en) | Unmanned plane independent landing navigation information real-time resolving method based on binocular vision | |
CN115371673A (en) | A binocular camera target location method based on Bundle Adjustment in an unknown environment | |
CN109214254A (en) | A kind of method and device of determining robot displacement | |
CN109883400B (en) | Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL | |
CN113554754A (en) | Indoor positioning method based on computer vision | |
US20160125267A1 (en) | Method and system for coordinating between image sensors | |
CN112950671A (en) | Real-time high-precision parameter measurement method for moving target by unmanned aerial vehicle | |
CN111402324B (en) | Target measurement method, electronic equipment and computer storage medium | |
CN115690623A (en) | Remote target damage assessment method based on three-dimensional reconstruction | |
Knyaz et al. | Joint geometric calibration of color and thermal cameras for synchronized multimodal dataset creating | |
CN115597592A (en) | Comprehensive positioning method applied to unmanned aerial vehicle inspection | |
Majdik et al. | Micro air vehicle localization and position tracking from textured 3d cadastral models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180724 |