
CN112529936B - A Monocular Sparse Optical Flow Algorithm for Outdoor UAVs - Google Patents

Info

Publication number
CN112529936B
CN112529936B
Authority
CN
China
Prior art keywords
point set
model
aerial vehicle
unmanned aerial
speed
Prior art date
Legal status
Active
Application number
CN202011291009.4A
Other languages
Chinese (zh)
Other versions
CN112529936A (en)
Inventor
欧阳子臻
杨煜基
成慧
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN202011291009.4A
Publication of CN112529936A
Application granted
Publication of CN112529936B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/269 Analysis of motion using gradient-based methods
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of target tracking, and in particular to a monocular sparse optical flow algorithm for outdoor unmanned aerial vehicles (UAVs), comprising the following steps. S1: extract and track feature points across two adjacent grayscale frames and record them as the original sample point set. S2: remove invalid feature points from the original sample point set to obtain a sampled point set, and derive the optimal model H_best from the sampled point set. S3: from the optimal model H_best, obtain the UAV's velocity in the image coordinate system. S4: convert the velocity in the image coordinate system to the velocity in the camera coordinate system. By removing invalid feature points from the original sample point set, the scheme eliminates the influence of feature points produced by the UAV's own shadow during flight, making velocity estimation during outdoor flight more accurate.

Description

A Monocular Sparse Optical Flow Algorithm for Outdoor UAVs

Technical Field

The invention relates to the technical field of target tracking, and more particularly to a monocular sparse optical flow algorithm for outdoor unmanned aerial vehicles (UAVs).

Background Art

The concept of optical flow was first proposed by James J. Gibson in the 1940s and refers to the velocity of pattern motion in a time-varying image. When an object moves, the brightness pattern of its corresponding points on the image moves with it; this apparent motion of image brightness is the optical flow. Because it carries information about the target's motion, an observer can use it to determine how the target is moving.

The optical flow method is often used in motion image analysis. Optical flow is the instantaneous velocity of the pixel motion that a spatially moving object induces on the imaging plane. By exploiting the temporal changes of pixels in an image sequence and the correlation between adjacent frames, a correspondence is established between the previous frame and the current frame of two adjacent grayscale images, from which the motion of objects between adjacent frames is computed. By the scale of the problem solved, optical flow algorithms divide into dense and sparse optical flow. Dense optical flow must be solved for every pixel of the image and is computationally expensive; sparse optical flow considers only feature points such as corners or edge points, costs far less, and is therefore suitable for embedded devices. It is widely used in UAV flight control systems to compute the UAV's own velocity in real time. The classic sparse optical flow algorithm was first proposed by Lucas and Kanade in 1981 and is therefore also known as L-K optical flow. The L-K algorithm rests on three basic assumptions. The first is brightness constancy: points representing the same object keep the same brightness across two consecutive frames. The second is temporal continuity: the displacement of an object between two consecutive frames is small. The third is spatial consistency: neighboring points in the image belong to the same surface and move similarly. Under these assumptions, the basic optical flow equation is solved over all pixels in a neighborhood by least squares.
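
In practice, this classic pipeline maps directly onto standard library calls. Below is a minimal sketch using OpenCV's pyramidal L-K tracker; the file names and parameter values are illustrative assumptions, not values taken from this patent:

```python
import cv2
import numpy as np

# Two adjacent grayscale frames (hypothetical file names).
prev_gray = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr_gray = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Sparse features: Shi-Tomasi corners detected in the previous frame.
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                             qualityLevel=0.01, minDistance=10)

# Pyramidal Lucas-Kanade tracking of those corners into the current frame.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                           winSize=(21, 21), maxLevel=3)

ok = status.ravel() == 1
prev_pts = p0[ok].reshape(-1, 2)   # successfully tracked points, frame t0
curr_pts = p1[ok].reshape(-1, 2)   # their positions in frame t1
flow = curr_pts - prev_pts         # per-feature flow vectors (pixels)
```

The matched pair (prev_pts, curr_pts) is exactly the kind of point set that the scheme below records as the original sample point set.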

Chinese patent CN106989744A discloses a method for obtaining optical flow velocity. For each grayscale frame it applies Shi-Tomasi corner detection and selects the 100 feature points with the most distinctive texture. A 3*3 pixel window centered on each feature point serves as a pixel unit, and the window position in the previous frame of two adjacent grayscale frames is taken as the initial window position in the following frame, establishing a search region. Using the inverse Lucas-Kanade algorithm with a five-level optical flow pyramid and least squares, the pixel window of the previous frame is matched within the search region of the following frame by minimizing the sum of grayscale differences, yielding the window position in the following frame. The displacement between the two windows is the optical flow vector, and differencing yields the optical flow velocity.

However, when a UAV flies outdoors it casts a shadow in sunlight. When edge points and corner points of the UAV's own shadow are selected as feature points, they cannot correctly reflect the UAV's velocity relative to the ground. This method is therefore unsuitable for outdoor UAV flight control.

Summary of the Invention

To overcome the above deficiencies of the prior art, the present invention provides a monocular sparse optical flow algorithm for outdoor UAVs that eliminates the influence of the UAV's own shadow during velocity estimation and improves the accuracy of the UAV's velocity estimate.

This technical solution provides a monocular sparse optical flow algorithm for outdoor UAVs, comprising the following steps:

S1: extract and track feature points across two adjacent grayscale frames, recording them as the original sample point set;

S2: remove invalid feature points from the original sample point set to obtain the sampled point set, and derive the optimal model H_best from the sampled point set;

S3: from the optimal model H_best, obtain the UAV's velocity in the image coordinate system;

S4: convert the velocity in the image coordinate system to the velocity in the camera coordinate system.

By removing invalid feature points from the original sample point set, this scheme eliminates the influence of feature points produced by the UAV's own shadow during flight, making velocity estimation during outdoor flight more accurate.

Preferably, in step S1 above, the relative motion of feature points between two adjacent frames is expressed as:

p′[i] = H p[i],

where p′[i] = [x′[i], y′[i], 1] is the i-th feature point of the current frame and p[i] = [x[i], y[i], 1] is the i-th feature point of the previous frame of the two adjacent grayscale images. H is the model supported by the most feature points in the grayscale image, the homography matrix

H = [h11 h12 h13; h21 h22 h23; h31 h32 h33],

in which h13 and h23 represent the UAV's translation along the x and y directions in the image coordinate system. Solving for h13 and h23 yields the UAV's velocity in image coordinates.
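
As a quick numeric illustration of this model (all values hypothetical), applying H to a feature point in homogeneous coordinates and normalizing predicts its position in the current frame:

```python
import numpy as np

# Hypothetical homography; the last column (h13, h23) carries the
# inter-frame translation in pixels.
H = np.array([[1.0, 0.0, 3.2],
              [0.0, 1.0, -1.5],
              [0.0, 0.0, 1.0]])

p = np.array([320.0, 240.0, 1.0])   # p[i]: feature point, previous frame
p_curr = H @ p                      # p'[i] = H p[i]
p_curr /= p_curr[2]                 # normalize the homogeneous coordinate
print(p_curr[:2])                   # -> [323.2 238.5]
```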

Preferably, in step S2 above, the optimal model H_best is obtained with the random sample consensus (RANSAC) algorithm, specifically comprising the following steps:

S21: randomly select m_i pairs of sample points from the original sample point set;

S22: fit the model H from the m_i pairs of sample points;

S23: compute the inlier set I_i from the model H, and from I_i obtain the current best inlier set I_best;

S24: update the model H from the current best inlier set I_best, recording it as the current optimal model H_best;

S25: compute the total error J_sum of all sample points in the original sample point set against the current optimal model H_best, and test whether J_sum is smaller than the total model error threshold E; if so, output the current optimal model H_best; if not, go to step S26;

S26: set a maximum iteration count K and repeat steps S21 to S25 in a loop, finally obtaining and outputting the current optimal model H_best.
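
A compact sketch of this RANSAC loop follows, assuming matched point arrays prev_pts and curr_pts of shape (n, 2); the thresholds e and E, the iteration cap K, and the sample size m are illustrative tuning values, not values prescribed by the patent:

```python
import numpy as np
import cv2

def ransac_homography(prev_pts, curr_pts, e=2.0, E=500.0, K=100, m=4):
    rng = np.random.default_rng()
    n = len(prev_pts)
    best_H, best_in = None, np.zeros(n, dtype=bool)
    for _ in range(K):                                          # S26
        idx = rng.choice(n, size=m, replace=False)              # S21
        H, _ = cv2.findHomography(prev_pts[idx], curr_pts[idx], 0)  # S22
        if H is None:
            continue
        proj = cv2.perspectiveTransform(
            prev_pts[None].astype(np.float64), H)[0]
        err = np.sum((proj - curr_pts) ** 2, axis=1)            # J[n]
        inliers = err < e                                       # S23
        if inliers.sum() >= 4 and inliers.sum() > best_in.sum():
            best_in = inliers                                   # I_best
            best_H, _ = cv2.findHomography(prev_pts[best_in],
                                           curr_pts[best_in], 0)  # S24
            proj = cv2.perspectiveTransform(
                prev_pts[None].astype(np.float64), best_H)[0]
            if np.sum((proj - curr_pts) ** 2) < E:              # S25
                break                                           # early exit
    return best_H, best_in
```

Because feature points on the UAV's shadow move with the aircraft rather than with the ground plane, they disagree with the dominant homography and fall into the outlier set, which is precisely the removal described above.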

Preferably, in step S22 above, the model H is obtained from a system of homogeneous equations. With h33 fixed to 1 (see below), each sample pair expands the relation p′[n] = Hp[n] into two linear equations in the remaining entries of H:

x[n]h11 + y[n]h12 + h13 - x′[n]x[n]h31 - x′[n]y[n]h32 = x′[n],
x[n]h21 + y[n]h22 + h23 - y′[n]x[n]h31 - y′[n]y[n]h32 = y′[n],

where (x[n], y[n]) and (x′[n], y′[n]) are the n-th sample pair: (x[n], y[n]) is the image position of the n-th feature point in the previous frame, and (x′[n], y′[n]) is the image position of the n-th feature point in the current frame, of the two adjacent grayscale images. Stacking these equations over all m sample pairs yields a linear system from which H is solved.

Preferably, h33 in the model H above is set to 1, because the model H is computed from homogeneous equations and is therefore defined only up to scale.

Preferably, m above is greater than or equal to 4. This is because once h33 is set to 1, the model H still has 8 unknowns, which require 8 equations to solve; each pair of sample points contributes two equations, so at least 4 pairs are needed.
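
For the construction itself, here is a sketch under the same h33 = 1 convention: each sample pair contributes the two equations above, and the stacked 2m-by-8 system is solved by least squares (the function and variable names are my own, not from the patent):

```python
import numpy as np

def solve_homography(prev_pts, curr_pts):
    """Fit H with h33 = 1 from m >= 4 matched pairs of shape (m, 2)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(prev_pts, curr_pts):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)   # h33 fixed to 1
```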

Preferably, step S23 above specifically comprises the following steps:

S231: set a maximum iteration count N;

S232: compute the error J[n] between every sample pair in the original sample point set and the model H, with the specific formula:

J[n] = ||p′[n] - Hp[n]||²;

where p′[n] is the n-th feature point of the current frame and p[n] is the n-th feature point of the previous frame of the two adjacent grayscale images;

S233: test whether J[n] is smaller than the error threshold e; if so, assign the n-th feature point to the inlier set I_i; if not, assign the n-th feature point to the outlier set for removal;

S234: repeat steps S232 to S233 to obtain the inlier set I_i;

S235: obtain the element counts S_i and S_best of the inlier set I_i and the best inlier set I_best respectively, and test whether S_i is smaller than S_best; if so, keep I_best unchanged; if not, set I_best = I_i;

where i is the iteration index, i ≤ N.
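
The inlier test of steps S231 to S235 reduces to one reprojection-error computation per sample pair. A sketch, with an illustrative threshold value:

```python
import numpy as np

def classify_inliers(H, prev_pts, curr_pts, e=2.0):
    """Boolean mask over sample pairs: True where J[n] < e (S233)."""
    ones = np.ones((len(prev_pts), 1))
    p = np.hstack([prev_pts, ones])            # homogeneous previous points
    proj = (H @ p.T).T
    proj = proj[:, :2] / proj[:, 2:3]          # predicted current positions
    J = np.sum((proj - curr_pts) ** 2, axis=1) # J[n] = ||p' - H p||^2
    return J < e
```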

Preferably, because a UAV's optical flow camera is usually mounted on the underside with the lens pointing vertically downward, and the UAV maintains small pitch and roll angles in low- and medium-speed flight, the velocity in the camera coordinate system is taken to be the UAV's velocity relative to the surface below. The specific conversion formula in step S4 above is:

v_c = K v_p,

where v_p is the UAV velocity in the image coordinate system, v_c is the UAV velocity in the camera coordinate system, and K is the camera's intrinsic matrix.
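
A sketch of this conversion exactly as the text states it; the intrinsic matrix values are hypothetical, and v_p is the image-plane velocity obtained in step S3:

```python
import numpy as np

K = np.array([[450.0, 0.0, 320.0],
              [0.0, 450.0, 240.0],
              [0.0, 0.0, 1.0]])      # hypothetical intrinsic matrix

v_p = np.array([96.0, -45.0, 0.0])   # image-plane velocity from step S3
v_c = K @ v_p                        # v_c = K v_p, as stated in the text
```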

To improve the reliability of the UAV velocity estimate, the method further comprises step S5: estimate the UAV's best velocity by Kalman filter fusion.

Preferably, step S5 above specifically comprises the following steps:

S51: a UAV typically carries an inertial measurement unit (IMU); obtain acceleration from the UAV's onboard IMU, and integrate the acceleration to obtain the UAV velocity v_i;

S52: use the velocity model in the prediction stage of the Kalman filter and the v_c obtained in step S4 in the update stage of the Kalman filter, yielding the best velocity estimate v of the UAV.
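
A minimal one-axis sketch of this fusion is given below; the noise parameters q and r are illustrative tuning values that the patent does not prescribe. The IMU-integrated velocity drives the prediction and the optical flow velocity v_c drives the update:

```python
class VelocityKF:
    """1-D constant-velocity Kalman filter fusing IMU and optical flow."""

    def __init__(self, q=0.05, r=0.2):
        self.v = 0.0   # fused velocity estimate
        self.P = 1.0   # estimate variance
        self.q = q     # process noise: drift of the integrated IMU velocity
        self.r = r     # measurement noise of the optical flow velocity

    def predict(self, accel, dt):
        # S51: propagate the velocity model with IMU acceleration.
        self.v += accel * dt
        self.P += self.q

    def update(self, v_flow):
        # S52: correct with the optical flow measurement v_c.
        gain = self.P / (self.P + self.r)
        self.v += gain * (v_flow - self.v)
        self.P *= 1.0 - gain
        return self.v
```

In use, predict() runs per axis at the IMU rate and update() runs whenever a new optical flow velocity estimate arrives.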

Compared with the prior art, the beneficial effects are: by using the random sample consensus algorithm to remove invalid feature points from the original sample point set, the invention eliminates the influence of feature points produced by the UAV's own shadow during flight, making velocity estimation during outdoor flight more accurate. In addition, after the UAV velocity is obtained, the inertial measurement unit is used to build a UAV velocity model, and Kalman filtering fuses the measured velocity with the velocity model to estimate the UAV velocity, further improving the reliability and accuracy of the estimate, which benefits the application of UAV flight control systems.

Description of the Drawings

Figure 1 is a flow diagram of the monocular sparse optical flow algorithm for outdoor UAVs of the present invention;

Figure 2 is a schematic diagram of feature point extraction and tracking in the monocular sparse optical flow algorithm for outdoor UAVs of the present invention, where the rectangular box encloses feature points produced by the UAV's shadow;

Figure 3 is a visualization of the result of applying the classic optical flow algorithm to Figure 2;

Figure 4 is a visualization of the result of applying the monocular sparse optical flow algorithm for outdoor UAVs of the present invention to Figure 2.

Detailed Description

The accompanying drawings are for illustration only and are not to be construed as limiting this patent. To better illustrate the embodiments, some components in the drawings may be omitted, enlarged, or reduced and do not represent the dimensions of the actual product. Those skilled in the art will understand that certain well-known structures and their descriptions may be omitted from the drawings. The positional relationships depicted in the drawings are for illustration only and are not to be construed as limiting this patent.

In the drawings of the embodiments of the present invention, identical or similar reference numerals denote identical or similar components. In the description of the present invention, it should be understood that terms such as "upper", "lower", "left", "right", "long", and "short" indicate orientations or positional relationships based on those shown in the drawings; they are used only to simplify the description of the present invention and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. The terms describing positional relationships in the drawings are therefore for illustration only and are not to be construed as limiting this patent; those of ordinary skill in the art can understand their specific meanings according to the circumstances.

The technical solution of the present invention is further described below through specific embodiments in conjunction with the accompanying drawings:

Embodiment 1

Figure 1 shows an embodiment of a monocular sparse optical flow algorithm for outdoor UAVs, comprising the following steps:

S1: use the Shi-Tomasi corner detection method to extract and track feature points across two adjacent grayscale frames, recording them as the original sample point set; other methods may of course be used to detect feature points in the grayscale images, and no limitation is imposed here;

S2: remove invalid feature points from the original sample point set to obtain the sampled point set, and derive the optimal model H_best from the sampled point set;

S3: from the optimal model H_best, obtain the UAV's velocity in the image coordinate system;

S4: convert the velocity in the image coordinate system to the velocity in the camera coordinate system.

In step S1 of this embodiment, the relative motion of feature points between two adjacent frames is expressed as:

p′[i] = H p[i],

where p′[i] = [x′[i], y′[i], 1] is the i-th feature point of the current frame and p[i] = [x[i], y[i], 1] is the i-th feature point of the previous frame of the two adjacent grayscale images, and H, a homography matrix, is the model supported by the most feature points in the grayscale image. Because the UAV's pitch and roll angles are close to zero, the entries h12, h21, h31, and h32 are all close to 0, so h13 and h23 represent the UAV's translation along the x and y directions; solving for h13 and h23 yields the UAV's velocity in image coordinates.
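
As a sketch of step S3 under this near-pure-translation assumption (the H_best values and frame rate are hypothetical), the image-coordinate velocity falls straight out of the last column of H_best:

```python
import numpy as np

H_best = np.array([[1.0, 0.0, 3.2],
                   [0.0, 1.0, -1.5],
                   [0.0, 0.0, 1.0]])   # hypothetical optimal model

dt = 1.0 / 30.0                        # assumed 30 fps frame interval
v_p = np.array([H_best[0, 2], H_best[1, 2]]) / dt   # (vx, vy) in px/s
```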

In step S2 of this embodiment, the optimal model H_best is obtained with the random sample consensus algorithm, specifically comprising the following steps:

S21: randomly select m_i pairs of sample points from the original sample point set;

S22: fit the model H from the m_i pairs of sample points;

S23: compute the inlier set I_i from the model H, and from I_i obtain the current best inlier set I_best;

S24: update the model H from the current best inlier set I_best, recording it as the current optimal model H_best;

S25: compute the total error J_sum of all sample points in the original sample point set against the current optimal model H_best, and test whether J_sum is smaller than the total model error threshold E; if so, output the current optimal model H_best; if not, go to step S26;

S26: set a maximum iteration count K and repeat steps S21 to S25 in a loop, finally obtaining and outputting the current optimal model H_best.

In step S22 of this embodiment, the model H is obtained from a system of homogeneous equations. With h33 fixed to 1 (see below), each sample pair expands the relation p′[n] = Hp[n] into two linear equations in the remaining entries of H:

x[n]h11 + y[n]h12 + h13 - x′[n]x[n]h31 - x′[n]y[n]h32 = x′[n],
x[n]h21 + y[n]h22 + h23 - y′[n]x[n]h31 - y′[n]y[n]h32 = y′[n],

where (x[n], y[n]) and (x′[n], y′[n]) are the n-th sample pair: (x[n], y[n]) is the image position of the n-th feature point in the previous frame, and (x′[n], y′[n]) is the image position of the n-th feature point in the current frame, of the two adjacent grayscale images. Stacking these equations over all m sample pairs yields a linear system from which H is solved.

In this embodiment, h33 in the model H is set to 1, because the model H is computed from homogeneous equations and is therefore defined only up to scale.

In this embodiment, m is greater than or equal to 4. This is because once h33 is set to 1, the model H still has 8 unknowns, which require 8 equations to solve; each pair of sample points contributes two equations, so at least 4 pairs of sample points are needed.

Step S23 in this embodiment specifically comprises the following steps:

S231: set a maximum iteration count N;

S232: compute the error J[n] between every sample pair in the original sample point set and the model H, with the specific formula:

J[n] = ||p′[n] - Hp[n]||²;

where p′[n] is the n-th feature point of the current frame and p[n] is the n-th feature point of the previous frame of the two adjacent grayscale images;

S233: test whether J[n] is smaller than the error threshold e; if so, assign the n-th feature point to the inlier set I_i; if not, assign the n-th feature point to the outlier set for removal;

S234: repeat steps S232 to S233 to obtain the inlier set I_i;

S235: obtain the element counts S_i and S_best of the inlier set I_i and the best inlier set I_best respectively, and test whether S_i is smaller than S_best; if so, keep I_best unchanged; if not, set I_best = I_i;

where i is the iteration index, i ≤ N.

In this embodiment, because the UAV's optical flow camera is generally mounted on the underside with the lens pointing vertically downward, the UAV's pitch and roll angles are taken as zero in low- and medium-speed flight, so the velocity in the camera coordinate system is taken to be the UAV's velocity relative to the surface below. The specific conversion formula in step S4 is:

v_c = K v_p,

where v_p is the UAV velocity in the image coordinate system, v_c is the UAV velocity in the camera coordinate system, and K is the camera's intrinsic matrix.

Embodiment 2

This embodiment differs from Embodiment 1 only in that it further comprises step S5: estimate the UAV's best velocity by Kalman filter fusion. This further improves the reliability of the UAV velocity estimate.

Since a UAV generally carries an inertial measurement unit, step S5 in this embodiment specifically comprises the following steps:

S51: obtain acceleration from the UAV's onboard inertial measurement unit, and integrate the acceleration to obtain the UAV velocity v_i;

S52: use the velocity model in the prediction stage of the Kalman filter and the v_c obtained in step S4 in the update stage of the Kalman filter, yielding the best velocity estimate v of the UAV.

As shown in Figures 2 to 4, the monocular sparse optical flow algorithm for outdoor UAVs of the present invention is applied to images captured by the UAV's optical flow camera. The random sample consensus algorithm removes the invalid feature points from the original sample point set, namely the feature points produced by the UAV's shadow in the grayscale images captured by the optical flow camera. This eliminates the feature points generated by the UAV's own shadow during flight and avoids their influence, making velocity estimation during outdoor flight more accurate.

Obviously, the above embodiments of the present invention are merely examples given to illustrate the present invention clearly and are not intended to limit its implementation. Those of ordinary skill in the art can make other changes or modifications in different forms on the basis of the above description; it is neither necessary nor possible to enumerate all implementations here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (7)

1. A monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle (UAV), comprising the steps of:
S1: extracting and tracking feature points across two adjacent grayscale frames and recording them as an original sample point set;
S2: removing invalid feature points from the original sample point set to obtain a sampled point set, and obtaining an optimal model H_best from the sampled point set; the optimal model H_best is obtained with the random sample consensus algorithm, specifically comprising the following steps:
S21: randomly selecting m_i pairs of sample points from the original sample point set;
S22: fitting the model H from the m_i pairs of sample points; the model H is obtained from a system of homogeneous equations, in which, with h33 fixed to 1, each sample pair expands into two linear equations in the remaining entries of H:
x[n]h11 + y[n]h12 + h13 - x′[n]x[n]h31 - x′[n]y[n]h32 = x′[n],
x[n]h21 + y[n]h22 + h23 - y′[n]x[n]h31 - y′[n]y[n]h32 = y′[n],
wherein (x[n], y[n]) and (x′[n], y′[n]) are the n-th sample pair: (x[n], y[n]) is the image position of the n-th feature point in the previous frame, and (x′[n], y′[n]) is the image position of the n-th feature point in the current frame, of the two adjacent grayscale frames;
S23: computing an inlier set I_i from the model H, and obtaining a current best inlier set I_best from I_i, specifically comprising the following steps:
S231: setting a maximum iteration count N;
S232: computing the error J[n] between every sample pair in the original sample point set and the model H, with the specific formula:
J[n] = ||p′[n] - Hp[n]||²,
wherein p′[n] is the n-th feature point of the current frame and p[n] is the n-th feature point of the previous frame of the two adjacent grayscale frames;
S233: testing whether J[n] is smaller than an error threshold e; if so, assigning the n-th feature point to the inlier set I_i; if not, assigning the n-th feature point to an outlier set for removal;
S234: repeating steps S232 to S233 to obtain the inlier set I_i;
S235: obtaining the element counts S_i and S_best of the inlier set I_i and the best inlier set I_best respectively, and testing whether S_i is smaller than S_best; if so, keeping I_best unchanged; if not, setting I_best = I_i;
wherein i is the iteration index, i ≤ N;
S24: updating the model H from the current best inlier set I_best and recording it as the current optimal model H_best;
S25: computing the total error J_sum of all sample points in the original sample point set against the current optimal model H_best, and testing whether J_sum is smaller than a total model error threshold E; if so, outputting the current optimal model H_best; if not, going to step S26;
S26: setting a maximum iteration count K and repeating steps S21 to S25 in a loop, finally obtaining and outputting the current optimal model H_best;
S3: obtaining the velocity of the UAV in an image coordinate system from the optimal model H_best;
S4: converting the velocity in the image coordinate system to a velocity in a camera coordinate system.
2. The monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of claim 1, wherein step S1 expresses the relative motion of feature points between two adjacent frames as:
p′[i] = H p[i],
wherein p′[i] = [x′[i], y′[i], 1] is the i-th feature point of the current frame and p[i] = [x[i], y[i], 1] is the i-th feature point of the previous frame of the two adjacent grayscale frames, H is the model supported by the most feature points in the grayscale image and is a homography matrix, and h13 and h23 respectively represent the translation of the UAV along the x and y directions in the image coordinate system.
3. The monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of claim 1, wherein h33 in the model H is set to 1.
4. The monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of claim 1, wherein m in step S21 is 4 or more.
5. The monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of claim 1, wherein the specific conversion formula in step S4 is:
v_c = K v_p,
wherein v_p is the velocity of the UAV in the image coordinate system, v_c is the velocity of the UAV in the camera coordinate system, and K is the intrinsic matrix of the camera.
6. The monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of claim 5, further comprising step S5: estimating the best velocity of the UAV by Kalman filter fusion.
7. The monocular sparse optical flow algorithm for an outdoor unmanned aerial vehicle of claim 6, wherein step S5 specifically comprises the steps of:
S51: obtaining acceleration from the onboard inertial measurement unit of the UAV, and integrating the acceleration to obtain the UAV velocity v_i;
S52: using the velocity model in the prediction stage of the Kalman filter and the v_c obtained in step S4 in the update stage of the Kalman filter, to obtain the best velocity estimate v of the UAV.
CN202011291009.4A 2020-11-17 2020-11-17 A Monocular Sparse Optical Flow Algorithm for Outdoor UAVs Active CN112529936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011291009.4A CN112529936B (en) 2020-11-17 2020-11-17 A Monocular Sparse Optical Flow Algorithm for Outdoor UAVs

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011291009.4A CN112529936B (en) 2020-11-17 2020-11-17 A Monocular Sparse Optical Flow Algorithm for Outdoor UAVs

Publications (2)

Publication Number Publication Date
CN112529936A CN112529936A (en) 2021-03-19
CN112529936B (en) 2023-09-05

Family

ID=74981117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011291009.4A Active CN112529936B (en) 2020-11-17 2020-11-17 A Monocular Sparse Optical Flow Algorithm for Outdoor UAVs

Country Status (1)

Country Link
CN (1) CN112529936B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789125A (en) * 2010-01-26 2010-07-28 北京航空航天大学 Method for tracking human skeleton motion in unmarked monocular video
CN102156991A (en) * 2011-04-11 2011-08-17 上海交通大学 Quaternion based object optical flow tracking method
CN103426008A (en) * 2013-08-29 2013-12-04 北京大学深圳研究生院 Vision human hand tracking method and system based on on-line machine learning
CN105261042A (en) * 2015-10-19 2016-01-20 华为技术有限公司 Optical flow estimation method and apparatus
CN108204812A (en) * 2016-12-16 2018-06-26 中国航天科工飞航技术研究院 A kind of unmanned plane speed estimation method
CN106989744A (en) * 2017-02-24 2017-07-28 中山大学 A kind of rotor wing unmanned aerial vehicle autonomic positioning method for merging onboard multi-sensor
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera
CN107341814A (en) * 2017-06-14 2017-11-10 宁波大学 The four rotor wing unmanned aerial vehicle monocular vision ranging methods based on sparse direct method
CN111462210A (en) * 2020-03-31 2020-07-28 华南理工大学 Monocular line feature map construction method based on epipolar constraint

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Implementation of visual-inertial odometry based on improved clone Kalman filtering; Xu Zhihang; Ouyang Wei; Wu Yuanxin; Information Technology, No. 05, pp. 1-4 *

Also Published As

Publication number Publication date
CN112529936A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
CN107423698B (en) A Gesture Estimation Method Based on Parallel Convolutional Neural Network
TWI420906B (en) Tracking system and method for regions of interest and computer program product thereof
CN110287826B (en) Video target detection method based on attention mechanism
CN103530619B (en) Gesture identification method based on a small amount of training sample that RGB-D data are constituted
CN105488811B (en) A kind of method for tracking target and system based on concentration gradient
CN113686314B (en) Monocular water surface target segmentation and monocular distance measurement method for shipborne camera
CN110276785A (en) An anti-occlusion infrared target tracking method
CN105976402A (en) Real scale obtaining method of monocular vision odometer
CN106780560B (en) A visual tracking method of bionic robotic fish based on feature fusion particle filter
CN114782628A (en) Indoor real-time three-dimensional reconstruction method based on depth camera
CN114979489A (en) Gyroscope-based heavy equipment production scene video monitoring and image stabilizing method and system
CN112907557A (en) Road detection method, road detection device, computing equipment and storage medium
CN103985143A (en) Discriminative online target tracking method based on videos in dictionary learning
CN110647836A (en) A Robust Deep Learning-Based Single Target Tracking Method
CN118172399A (en) Target ranging system based on self-supervision monocular depth estimation method
CN109242019A (en) A kind of water surface optics Small object quickly detects and tracking
CN116245949B (en) A high-precision visual SLAM method based on improved quadtree feature point extraction
CN111144415B (en) A Detection Method for Tiny Pedestrian Targets
CN113920254B (en) Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof
CN112529936B (en) A Monocular Sparse Optical Flow Algorithm for Outdoor UAVs
CN111192290B (en) Block Processing Method for Pedestrian Image Detection
CN117213464B (en) Real-time simultaneous localization and mapping system based on implicit characterization
CN116883897A (en) Low-resolution target identification method
CN108492308B (en) A method and system for determining variational optical flow based on mutual structure-guided filtering
CN114613002B (en) Dynamic object detection method and system under motion visual angle based on light projection principle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant