
CN107808362A - Image stitching method combining UAV POS information with image SURF features - Google Patents

Image stitching method combining UAV POS information with image SURF features

Info

Publication number
CN107808362A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711132452.5A
Other languages
Chinese (zh)
Inventor
赵德群
王亚洲
孙光民
邓钱华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201711132452.5A priority Critical patent/CN107808362A/en
Publication of CN107808362A publication Critical patent/CN107808362A/en
Pending legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/14: Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/80: Geometric correction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image stitching method that combines UAV POS information with image SURF features, relating to digital image processing, GIS, surveying and mapping, and other related fields. The images are first geometrically corrected, and the geographic coordinates of each image's four corner points are computed. Taking the first image's geographic coordinates as the reference, SURF features are extracted from the overlapping regions of adjacent images to obtain the positional relationships of corresponding matched points, which are used to correct the geographic coordinates of subsequent images in turn. Finally, an adaptive gradual-in/gradual-out fusion algorithm produces a panoramic image with a good visual effect, completing the stitching. By combining a feature extraction algorithm with the images' geographic coordinates, the method improves both stitching efficiency and visual quality over traditional feature-based stitching, and the stitched image carries geographic information, giving it practical value.

Description

An image stitching method combining UAV POS information with image SURF features

Technical Field

The present invention relates to a stitching method combining UAV POS (Position and Orientation System) information with image SURF features, and belongs to the fields of digital image processing, GIS, and surveying and mapping, among other related fields.

Background

UAVs (Unmanned Aerial Vehicles) are simple to operate, quick to deploy, flexible in flight, and low in cost, and have been widely used in disaster relief, military reconnaissance, maritime monitoring, environmental protection, and dynamic land monitoring. However, the viewing angle of a single image in a UAV aerial sequence is limited, constrained by the flight altitude and the camera parameters. To grasp and analyze the photographed region as a whole and obtain more information about the target area, fast image stitching is urgently needed.

At present there are two main approaches to stitching UAV aerial images: one based on image features, the other based on the UAV's POS information.

Feature-based image stitching comprises two main steps: image registration and image fusion, of which registration is the core. Among feature-detection-based registration algorithms, Chris Harris proposed the Harris corner detection operator in 1988. The SIFT (scale-invariant feature transform) matching algorithm is a classic method: it searches for extrema across spatial scales and extracts their position, scale, and rotation invariants, yielding features that are quite stable; its drawbacks are the large number of extracted feature points, the heavy computation, and the long run time. In 2006 Bay proposed the SURF (speeded-up robust features) algorithm, an improvement on SIFT that uses the Hessian matrix to determine candidate points and then applies non-maximum suppression, reducing the computational complexity. In general, the standard SURF operator is several times faster than SIFT and more robust across multiple images. However, errors in feature-based stitching tend to accumulate, and in regions such as sea areas where features are hard to extract, traditional feature-based stitching does not apply.

The POS-based stitching method mainly uses the geographic coordinate relationships of the aerial images for registration. The UAV's flight speed, flight altitude, and the aircraft's latitude and longitude at the current exposure point can all be obtained from the POS system the UAV carries. This information records the aircraft's attitude at each exposure moment, but since the UAV's attitude is not fixed during aerial photography, two sets of POS data from adjacent exposure points are needed for aerotriangulation to compute the latitude and longitude of the image pixels; the error varies with the model that is built.

The present method combines a feature extraction algorithm with the images' geographic coordinates. It improves both stitching efficiency and visual quality over traditional feature-based stitching, and the stitched image carries geographic information, giving it practical value.

Summary of the Invention

To address the shortcomings of the two methods above, and to improve stitching accuracy and adaptability to various aerial images, a method combining UAV POS information with traditional feature-based stitching is proposed. The images are first geometrically corrected and the geographic coordinates of each image's four corner points are computed. Taking the first image's geographic coordinates as the reference, SURF features are extracted from the overlapping regions of adjacent images to obtain the positional relationships of corresponding matched points, which are used to correct the geographic coordinates of subsequent images in turn. Finally, an adaptive gradual-in/gradual-out fusion algorithm produces a panoramic image with a good visual effect, completing the stitching.

The technical solution adopted by the present invention is an image stitching method combining UAV POS information with image SURF features, comprising the following steps:

S1: Aerial image preprocessing:

During an aerial photography mission, factors such as the aircraft's attitude, altitude, and speed and the Earth's rotation cause the image to be squeezed, twisted, stretched, and offset relative to the actual position of the ground target. The aerial images are therefore geometrically corrected to obtain remote sensing images referenced to the same projection plane.

S2: Computing the overlapping region of aerial images:

The overlapping region of adjacent images is computed from the POS information, which gives the exposure times of the sequence images and the actual flight distance; the velocity is decomposed along the cardinal directions according to the aircraft's current heading. Computing the overlap of adjacent captured images reduces the computational cost of stitching, and the smaller the search area for feature points, the lower the probability of false matches, improving detection efficiency.

S3: Computing the images' geographic coordinates:

After geometric correction, the latitude and longitude of the image's center point are computed from the aircraft's current attitude angles, the image's ground resolution is computed from the camera's interior and exterior orientation elements, and from these the latitude and longitude of the image's four corner points are obtained. The geographic coordinates are then converted to a spatial rectangular coordinate system and projected accordingly, yielding the relative positional relationships between the images.

S4: Extracting the images' SURF features:

SURF feature extraction comprises the following steps: constructing the Hessian matrix and generating all interest points, building the scale space, localizing the feature points, assigning each feature point's dominant orientation, generating the feature point descriptors, and matching feature points. Through these steps the SURF feature matching point pairs of adjacent images are obtained.

S5: Correcting the geographic coordinates using the extracted SURF feature point pairs:

Since the geographic coordinates of each image are already known, the geographic coordinates of a matched pair of corresponding points should be identical. If the computed values differ, the first image is taken as the reference and the geographic coordinates of the subsequent images are corrected in turn, registering their positions and improving the accuracy of placement directly by geographic coordinates.

S6: Image fusion strategy:

After coordinate correction, visible seams remain between images, so a strategy is needed to address the large color transition differences across the stitching seams. A gradual-in/gradual-out adaptive weight fusion method is used to smooth the transition across the seams, so that the images join naturally with a good visual effect.

Compared with the prior art, the present invention has the following advantages: the proposed method combines an image feature extraction algorithm with the images' geographic coordinates, improving both stitching efficiency and visual quality over traditional feature-based stitching, and the stitched image carries geographic information, giving it practical value.

Brief Description of the Drawings

Fig. 1 is a flowchart of the proposed method;

Fig. 2 shows the format of the POS information carried by the UAV;

Fig. 3 illustrates the computation of the overlapping region of adjacent images, where Fig. 3.1 is the velocity decomposition diagram and Fig. 3.2 is the overlap-region calculation and analysis diagram;

Fig. 4 shows the process of extracting SURF feature points from adjacent images, where Fig. 4.1 shows feature point detection, Fig. 4.2 shows feature point orientation computation, and Fig. 4.3 shows the detected matching point pairs between adjacent images;

Fig. 5 shows the process of correcting geographic coordinates from SURF feature point pairs, where Fig. 5.1 shows two aerial images placed directly by their geographic coordinates, Fig. 5.2 shows the placement after correction by the present method, and Fig. 5.3 shows the placement of multiple images after correction;

Fig. 6 compares the results before and after fusion.

Detailed Description

The present invention is further described below with reference to the accompanying drawings.

The overall workflow of the present invention is shown in Fig. 1 and consists of six main stages: aerial image preprocessing, computation of the overlapping region of aerial images, computation of the images' geographic coordinates, extraction of image SURF features, correction of the geographic coordinates using the SURF feature matching point pairs of adjacent images, and image fusion. Correcting the images' geographic coordinates with the matched SURF feature pairs of adjacent images is the innovative part of this method. Following the order of execution, the specific implementation of each stage is described below.

S1: Aerial image preprocessing:

In the geometric correction step of preprocessing, the coordinate systems to be established in sequence are: the Earth coordinate system, the geographic coordinate system, the airframe coordinate system, the electro-optical platform coordinate system, and the camera coordinate system. The traditional correction proceeds as follows. Let R1 be the transformation matrix from the electro-optical platform coordinate system to the digital camera coordinate system, R2 the transformation from the airframe coordinate system to the electro-optical platform coordinate system, R3 the transformation from the geographic coordinate system to the airframe coordinate system, and R4 the transformation from the geodetic coordinate system to the spatial rectangular coordinate system. Then:

where Rx(λ), Ry(λ), and Rz(λ) are the rotation matrices about the x, y, and z axes respectively, αP is the pixel scanning angle, βP is the pixel deflection angle, χ is the azimuth angle, ω is the platform roll angle, β is the aircraft roll angle, θ is the heading angle, B is the longitude, L is the latitude, and H is the altitude. The transformation from the camera coordinate system (Xc, Yc, Zc) to the Earth coordinate system (Xe, Ye, Ze) can therefore be expressed as:

Conventional geometric correction computes and resamples pixel by pixel according to formula (2). Since this work mainly stitches a single flight strip, and since the image resolution can be considered uniform when the UAV platform's roll and pitch angles are small (i.e., shooting approximately straight down), the geometric correction only corrects the aircraft heading angle and platform azimuth angle in the horizontal direction, reducing computation and improving speed. The correction model is as follows:

where χ is the azimuth angle, θ is the heading angle, and RT(χ+θ) is the transformation matrix. After the model transformation, the gray value of each pixel in the new coordinate system is computed by bilinear interpolation resampling, generating a new image matrix and completing the geometric correction.
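The rotation-plus-resampling step above can be sketched as follows. This is a minimal illustration under our own assumptions (single-channel image, rotation about the image center via inverse mapping); function names are ours, not the patent's:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional (x, y) with bilinear interpolation."""
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1]
            + (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])

def rotate_resample(img, angle_deg):
    """Rotate an image about its center by angle_deg (here standing in for
    chi + theta) using inverse mapping plus bilinear resampling; pixels that
    map outside the source image are left at 0."""
    h, w = img.shape
    a = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            # inverse rotation: find the source coordinate of each output pixel
            xs = np.cos(a) * (x - cx) + np.sin(a) * (y - cy) + cx
            ys = -np.sin(a) * (x - cx) + np.cos(a) * (y - cy) + cy
            if 0 <= xs <= w - 1 and 0 <= ys <= h - 1:
                out[y, x] = bilinear_sample(img, xs, ys)
    return out
```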

S2: Computing the overlapping region of aerial images

Computing the overlapping region of adjacent images from the POS information reduces the computational cost of stitching, and the smaller the search area for feature points, the lower the probability of false matches, improving detection efficiency. The overlapping region of adjacent images is computed as follows:

1) Suppose the UAV's route deviates from true north by an angle θ and its speed is V. Decompose the velocity into components along true north and true east, denoted V1 and V2 respectively, as shown in the velocity decomposition of Fig. 3.1.

2) Number the images of two consecutively captured adjacent exposure points Pic1 and Pic2, with interval t. From the POS track file, obtain the latitude and longitude of the two images' center points, recorded as LatA, LonA and LatB, LonB respectively. Then:

where R = 6371.004 km, pi is taken as 3.1415926, C is the central angle between the two images' center points, and L is the computed actual distance between the two points.

3) Taking Pic1 as the reference, the overlapping region between Pic2 and Pic1 is the rectangular region S shown in Fig. 3.2. Expanding the overlap to a regular region and projecting its vertices onto the x and y directions gives:

S = (W - P1)(H - P2)    (5)

where W and H are the width and height of the rectangular region, and P1 and P2 are the lengths marked in the figure.

By computing the overlapping region, the computational cost of stitching is reduced; moreover, the smaller the search area for feature points, the lower the probability of false matches, improving detection efficiency. A schematic of the overlap-region computation is shown in Fig. 3.2.

S3: Computing the images' geographic coordinates:

For the geometrically corrected aerial images, every pixel coordinate has a corresponding geographic coordinate. The POS information carried by the UAV in flight (see Fig. 2) records, for each exposure-point image, the aircraft's latitude and longitude as well as its altitude, roll angle, pitch angle, and heading angle, all of which can be used to compute geographic coordinates. The computation proceeds in the following steps:

1) Compute the ground resolution, using the following formula:

where GSD is the ground resolution (m), f is the lens focal length (mm), P is the pixel size of the imaging sensor (mm), and H is the UAV's flying height (m).
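The formula image is not reproduced in this text. Assuming the standard relation GSD = H * P / f implied by the listed units, and assuming the diagonal ground distance of step 2 is the pixel diagonal scaled by the GSD, a sketch is:

```python
def ground_resolution_m(focal_mm, pixel_mm, height_m):
    """Ground sample distance in meters: GSD = H * P / f
    (assumed form of the missing formula, based on the stated units)."""
    return height_m * pixel_mm / focal_mm

def diagonal_ground_m(gsd_m, width_px, height_px):
    """Actual ground length of the image diagonal, assumed to be the
    pixel-space diagonal length times the ground resolution."""
    return gsd_m * (width_px ** 2 + height_px ** 2) ** 0.5
```

For example, a 35 mm lens with a 0.0045 mm pixel at 500 m altitude gives a GSD of about 0.064 m per pixel.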

2) Compute the actual ground distance covered by the image diagonal:

where w and h are the width and height of the image, and L is the actual ground distance spanned by the image diagonal.

3) Compute the geographic coordinates of the image's four corner points. Given the latitude and longitude of the image center and the distance and direction angle of another point relative to the center, the geographic coordinates of the four corner points are obtained on a circle centered at the image's midpoint with radius L/2. The formula is:

where θi ∈ (0, 2π), Lona and Lata are the longitude and latitude of the image center, Ri is the equatorial radius, taken as 6378137 m, Rj is the polar radius, taken as 6356725 m, and pi is taken as 3.1415925.
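This formula is likewise missing from the text. One widely used approximation that matches the stated constants (equatorial radius 6378137 m, polar radius 6356725 m) interpolates an effective Earth radius by latitude; the following sketch is an assumed reconstruction, not the patent's exact formula:

```python
import math

RI = 6378137.0   # equatorial radius (m), as given in the text
RJ = 6356725.0   # polar radius (m), as given in the text

def dest_point(lat_a, lon_a, dist_m, bearing_deg):
    """Latitude/longitude of the point dist_m away from (lat_a, lon_a)
    along bearing_deg measured clockwise from north, using the
    latitude-weighted radius approximation Ec = Rj + (Ri - Rj)*(90 - lat)/90."""
    ec = RJ + (RI - RJ) * (90.0 - lat_a) / 90.0   # meridional radius approx.
    ed = ec * math.cos(math.radians(lat_a))        # radius of the latitude circle
    b = math.radians(bearing_deg)
    lon_b = lon_a + math.degrees(dist_m * math.sin(b) / ed)
    lat_b = lat_a + math.degrees(dist_m * math.cos(b) / ec)
    return lat_b, lon_b
```

The four corner coordinates then follow by calling this with distance L/2 and the four corner direction angles θi.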

4) Convert the geographic coordinates to the spatial rectangular coordinate system, using the following formulas:

where N is the radius of curvature and Lon, Lat, H are the longitude, latitude, and height of any point on the image. This transforms the image into the spatial coordinate system, at which point the complete image coordinates are obtained.
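The conversion formulas are not reproduced in this text. The standard geodetic-to-Earth-centered (ECEF) relations, with WGS-84 constants assumed on our part, are:

```python
import math

def geodetic_to_ecef(lat_deg, lon_deg, h_m):
    """Standard geodetic -> Earth-centered rectangular conversion:
        N = a / sqrt(1 - e^2 sin^2(lat))
        X = (N + H) cos(lat) cos(lon)
        Y = (N + H) cos(lat) sin(lon)
        Z = (N (1 - e^2) + H) sin(lat)
    WGS-84 constants are an assumption; the patent does not state which
    ellipsoid it uses here."""
    a = 6378137.0            # semi-major axis (m)
    f = 1.0 / 298.257223563  # flattening
    e2 = f * (2.0 - f)       # first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)  # prime-vertical radius N
    x = (n + h_m) * math.cos(lat) * math.cos(lon)
    y = (n + h_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - e2) + h_m) * math.sin(lat)
    return x, y, z
```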

S4: SURF feature extraction:

SURF detects feature points with an approximated Hessian matrix and performs convolution using integral images, greatly reducing computation and thus speeding up feature extraction. The SURF descriptor comprises two main parts, feature point detection and feature computation, implemented in the following steps:

1) Construct the Hessian matrix and build the scale space.

Let X(x, y) be a point in the image. The matrix M at scale σ is defined as:

where Lxx is the convolution of the second-order Gaussian derivative with the image at X, Lxy and the others are defined analogously, and σ is the spatial scale. When the discriminant of the Hessian matrix attains a local maximum, a keypoint is considered localized.
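The matrix definition itself is an image not reproduced in this text; in Bay's standard SURF formulation (which the description above follows) it reads:

```latex
\mathcal{H}(X,\sigma) =
\begin{pmatrix}
L_{xx}(X,\sigma) & L_{xy}(X,\sigma) \\
L_{xy}(X,\sigma) & L_{yy}(X,\sigma)
\end{pmatrix},
\qquad
\det(\mathcal{H}_{\mathrm{approx}}) = D_{xx}\,D_{yy} - \left(0.9\,D_{xy}\right)^{2}
```

where the D terms are box-filter approximations of the corresponding L terms and the 0.9 weight is the balancing factor from Bay's original paper.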

2) Detect feature points

In the scale space obtained above, each pixel processed by the Hessian matrix is compared with the 26 points in its two-dimensional image and scale-space neighborhood to roughly localize keypoints; keypoints with weak energy and mislocalized keypoints are then filtered out, leaving the final stable feature points. The detection process is shown in Fig. 4.1.
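The 26-neighbor comparison can be sketched as a strict local-maximum test over a 3x3x3 cube in the response stack; a minimal numpy sketch, assuming interior indices:

```python
import numpy as np

def is_scale_space_maximum(stack, s, y, x):
    """True if response stack[s, y, x] strictly exceeds its 26 neighbors
    (8 in the same scale plane plus 9 in the scale above and 9 below),
    the comparison used to localize keypoints in scale space.
    Assumes (s, y, x) is an interior index of the 3-D response stack."""
    cube = stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2].ravel()
    centre = stack[s, y, x]
    others = np.delete(cube, 13)  # index 13 is the center of the 3x3x3 cube
    return bool(centre > others.max())
```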

3) Determine the dominant orientation of each feature point

With the feature point as the center and 6σ as the radius, compute the Haar wavelet responses in the x and y directions, and sum the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector, with the Haar wavelet side length set to 4s, so that each sector obtains a corresponding value. The 60-degree sector is then rotated at fixed intervals, and the direction of the sector with the maximum value is taken as the feature point's dominant orientation. The process is illustrated in Fig. 4.2.

4) Compute the feature descriptors

Take a square window around the feature point and divide it into 16 subregions. In each subregion, accumulate the horizontal and vertical Haar wavelet responses of 25 pixels, where horizontal and vertical are relative to the dominant orientation. Each feature point is thus a 16 × 4 = 64-dimensional vector, which greatly speeds up the matching process. An example of the matched point pairs extracted from adjacent images is shown in Fig. 4.3.
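The descriptor layout above can be sketched as follows. This is a hedged sketch: the 20x20 response grid and the final normalization are our assumptions (the patent states only 16 subregions of 25 pixels each), and the inputs are taken to be precomputed, orientation-aligned Haar responses:

```python
import numpy as np

def surf_descriptor_64(dx, dy):
    """Build a 64-D SURF-style descriptor from 20x20 grids of Haar responses
    dx, dy: split into 4x4 subregions of 5x5 pixels and take
    [sum dx, sum dy, sum |dx|, sum |dy|] in each subregion."""
    desc = []
    for i in range(4):
        for j in range(4):
            sx = dx[5 * i:5 * i + 5, 5 * j:5 * j + 5]
            sy = dy[5 * i:5 * i + 5, 5 * j:5 * j + 5]
            desc += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    v = np.array(desc)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v   # unit normalization, as is conventional
```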

S5: Correcting geographic coordinates with matched feature point pairs of adjacent images

Because of the low accuracy of the POS data and errors in the geometric correction, the computed coordinate mapping carries some error. The feature matching algorithm is therefore used to correct the images' geographic coordinates, as follows:

Suppose the geographic coordinate of image 1 is P1(x1, y1) and that of image 2 is P2(x2, y2). After extracting and matching image features, the pixel positions of the same point in the two images are obtained, giving the latitude and longitude coordinates (Lon1, Lat1) and (Lon2, Lat2) of the same target point in the two images. Finally, taking the first image's geographic coordinates as the reference, the offset of the target point in the second image relative to the first is obtained as follows:

The obtained offset is then used to correct the geographic coordinate P2 of image 2:

The geographic coordinates are then projected into the spatial rectangular coordinate system, completing precise registration and yielding the coordinate-corrected images. Figs. 5.1 and 5.2 contrast placement directly by geographic coordinates with placement after correction by the present method, and Fig. 5.3 shows the placement of multiple images by their corrected geographic coordinates.
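The offset and correction formulas are images not reproduced in this text. Under the straightforward reading of the paragraph above, the offset is the difference between the two geographic estimates of a matched point and is added to image 2's coordinate; this sketch is an assumed reconstruction (in practice one would average the offset over many matched pairs):

```python
def corrected_geo(p2, match1, match2):
    """Correct image 2's geographic coordinate p2 = (lon, lat) using one
    matched point pair: match1 and match2 are the (lon, lat) that image 1
    and image 2 assign to the same ground point; image 1 is the reference."""
    d_lon = match1[0] - match2[0]   # offset mapping image 2 onto image 1
    d_lat = match1[1] - match2[1]
    return (p2[0] + d_lon, p2[1] + d_lat)
```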

S6: Image fusion strategy

After coordinate correction, visible traces remain between images, so a strategy is needed to address the large color transition differences across the stitching seams and make the stitched image smoother and more natural. The gradual-in/gradual-out adaptive weight fusion proceeds as follows:

Let I1, I2, and I denote image 1 and image 2 before fusion and the fused image, respectively. Fusion is completed via formula (11):

where W is the total width of the region in which the two images overlap, and w is the horizontal distance between the left edge of the overlapping region and the current pixel. A comparison before and after fusion is shown in Fig. 6.
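Formula (11) itself is an image not reproduced in this text. Assuming the usual linear gradual-in/gradual-out weighting, in which image 1's weight falls from 1 to 0 across the overlap, I = ((W - w)/W) * I1 + (w/W) * I2, a numpy sketch for single-channel horizontal strips is:

```python
import numpy as np

def feather_blend(i1, i2, overlap_w):
    """Gradual-in/gradual-out fusion of two single-channel strips whose
    last/first overlap_w columns coincide. In the overlap, image 1's weight
    decreases linearly with w, the distance from the overlap's left edge."""
    h, w1 = i1.shape
    _, w2 = i2.shape
    out = np.zeros((h, w1 + w2 - overlap_w))
    out[:, :w1 - overlap_w] = i1[:, :w1 - overlap_w]   # image 1 only
    out[:, w1:] = i2[:, overlap_w:]                    # image 2 only
    w = np.arange(overlap_w, dtype=float)
    alpha = (overlap_w - w) / overlap_w                # weight of image 1
    out[:, w1 - overlap_w:w1] = (alpha * i1[:, w1 - overlap_w:]
                                 + (1 - alpha) * i2[:, :overlap_w])
    return out
```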

In summary, addressing the characteristics of UAV aerial images, the present invention proposes a stitching method based on UAV POS information. The geographic coordinates of each image's four corner points are first computed from the POS information; SURF features are then extracted from the images' overlapping regions, and the positional relationships of identical feature points in adjacent images are used to correct the geographic coordinates, completing registration. Finally, an adaptive gradual-in/gradual-out fusion algorithm smooths the transitions, producing a panoramic image with a good visual effect. The stitched image carries geographic coordinates, giving it practical value.

Claims (2)

1.一种基于无人机POS信息与图像SURF特征结合的图像拼接方法，其特征在于：该方法包括如下步骤：An image stitching method based on combining UAV POS information with image SURF features, characterized in that the method comprises the following steps:

S1:航拍图像预处理：无人机在执行航拍任务中，由于受到飞行器的姿态、高度、速度以及地球自转因素的影响，造成图像相对于地面目标实际位置发生挤压、扭曲、拉伸和偏移，因此对航拍的图像进行几何矫正，以得到基于同一基准投影面的遥感图像；S1: Aerial image preprocessing: during an aerial photography mission, the aircraft's attitude, altitude, speed, and the Earth's rotation cause the image to be squeezed, twisted, stretched, and offset relative to the actual position of the ground target; the aerial images are therefore geometrically corrected to obtain remote-sensing images on a common reference projection plane;

S2:航拍图像重叠区域计算：根据POS信息计算相邻图像的重叠区域，根据POS信息得到序列图像曝光点的时间以及飞行的实际距离，根据飞机当前飞行的角度将速度进行正方向上的分解，通过计算相邻拍摄图像的重叠区域以减少拼接的计算量，并且特征点的寻找面积越小，可能出现的误匹配概率就越小，从而提高检测效率；S2: Overlap-region calculation for aerial images: the overlap region of adjacent images is computed from the POS information, which gives the exposure time of each image in the sequence and the actual flight distance; the velocity is decomposed along the cardinal directions according to the current heading; computing the overlap of adjacent images reduces the stitching computation, and the smaller the feature-search area, the lower the probability of false matches, improving detection efficiency;

S3:图像的地理坐标计算：将图像进行几何矫正后，根据当前飞机飞行的姿态角度计算出图像中心点的经纬度坐标，根据相机内外方位元素计算图像地面分辨率，由此计算出图像四角点的经纬度坐标，再将地理坐标转换到空间直角坐标系下，从而按照空间直角坐标系进行投影，得到图像之间的相对位置关系；S3: Geographic-coordinate calculation: after geometric correction, the latitude and longitude of the image center are computed from the aircraft's attitude angles, the ground resolution is computed from the camera's interior and exterior orientation elements, the latitude and longitude of the four image corners are derived from these, and the geographic coordinates are converted into the spatial rectangular coordinate system; projecting in that system yields the relative positional relationship between images;

S4:提取图像的SURF特征：SURF特征的提取包括：构建Hessian矩阵、生成所有兴趣点、构建尺度空间、对特征点进行定位、特征点主方向分配、生成特征点描述子、特征点匹配等步骤，通过这些步骤计算得到相邻图像的SURF特征匹配点对；S4: SURF feature extraction: constructing the Hessian matrix, generating all interest points, building the scale space, localizing feature points, assigning each feature point's dominant orientation, generating descriptors, and matching; these steps yield the SURF feature match pairs of adjacent images;

S5:利用提取到的图像SURF特征点对修正地理坐标：由于已经知道每张图像的地理坐标，因此得到的同名点对的地理坐标应该为同一点，若计算出来的不一样，则以第一张图像为基准，依次修正后面的地理坐标，从而进行位置的配准，提高直接按地理坐标进行放置的准确性；S5: Correcting geographic coordinates with the extracted SURF match pairs: since each image's geographic coordinates are known, matched points of the same name should share identical geographic coordinates; if the computed values differ, the first image is taken as the reference and the subsequent coordinates are corrected in turn, registering the positions and improving the accuracy of direct placement by geographic coordinates;

S6:图像融合策略：对于修正坐标后的图像，由于每张图像之间有一定的痕迹，采用策略解决拼接缝隙之间的颜色过渡差异问题，采用渐入渐出自适应权重融合方法对图像之间的缝隙进行平缓过渡。S6: Image fusion: since seams remain between the coordinate-corrected images, a fade-in/fade-out adaptive-weight fusion method is used to smooth the color transitions across the stitching gaps.
2.根据权利要求1所述的一种基于无人机POS信息与图像SURF特征结合的图像拼接方法，其特征在于：主要分为六大环节：航拍图像预处理环节、航拍图像重叠区域计算环节、图像地理坐标计算环节、提取图像SURF特征环节、结合相邻图像SURF特征匹配点对修正地理坐标环节和图像融合环节；其中，结合相邻图像SURF特征匹配点对修正图像地理坐标是本方法的创新层；The image stitching method based on combining UAV POS information with image SURF features of claim 1, characterized by six main links: aerial image preprocessing, overlap-region calculation, geographic-coordinate calculation, SURF feature extraction, geographic-coordinate correction using SURF match pairs of adjacent images, and image fusion; correcting the image geographic coordinates with SURF match pairs of adjacent images is the innovative layer of the method;

S1 航拍图像预处理：S1 Aerial image preprocessing:

在预处理几何矫正的过程中，需要依次建立的坐标系为：地球坐标系、地理坐标系、机体坐标系、光电平台坐标系、相机坐标系；传统校正的具体步骤如下：设光电平台坐标系到数码相机坐标系的变换矩阵为R1，由机体坐标系到光电平台坐标系的变换矩阵为R2，由地理坐标系到机体坐标系的变换矩阵为R3，由大地坐标系到空间直角坐标系的变换矩阵为R4，则有：In geometric correction the coordinate systems established in turn are: the Earth coordinate system, the geographic coordinate system, the body coordinate system, the optoelectronic-platform coordinate system, and the camera coordinate system. Let R1 be the transformation matrix from the platform frame to the digital-camera frame, R2 from the body frame to the platform frame, R3 from the geographic frame to the body frame, and R4 from the geodetic frame to the spatial rectangular frame; then:

R1 = R(βP)R(αP)
R2 = R(χ)R(ω)
R3 = Rx(β)Rz(θ)Ry(90-λ)Rx(90)
R4 = Rx^T(-(90-B))Rz^T(90+L)    (1)

其中Rx(λ)、Ry(λ)、Rz(λ)分别为绕x轴、y轴、z轴旋转的旋转矩阵，αP为像元扫描角，βP为像元偏角，χ为方位角，ω为平台滚动角，β为飞机横滚角，θ为航向角，B为经度，L为纬度，H为高度；因此从相机坐标系(Xc,Yc,Zc)到大地坐标系(Xe,Ye,Ze)之间的转换用如下关系表示：Here Rx(λ), Ry(λ), Rz(λ) are the rotation matrices about the x, y, and z axes; αP is the pixel scan angle, βP the pixel deflection angle, χ the azimuth, ω the platform roll angle, β the aircraft roll angle, θ the heading angle, B the longitude, L the latitude, and H the altitude. The conversion from the camera frame (Xc, Yc, Zc) to the geodetic frame (Xe, Ye, Ze) is expressed as:

[Xe, Ye, Ze]^T = R1^T R2^T R3^T R4^T [Xc, Yc, Zc]^T    (2)

常规几何矫正由式(2)逐像素点进行计算和采样；本方法针对的是单一航带进行拼接，且无人机平台横滚角和俯仰角比较小时（即垂直向下拍摄时）认为图像分辨率是一致的，几何矫正只对水平方向上的飞机航向角和平台方位角进行校正；矫正模型如下所示：Conventional geometric correction evaluates and resamples formula (2) pixel by pixel. This method stitches a single flight strip, and when the platform roll and pitch angles are small (i.e., when shooting vertically downward) the image resolution is taken as uniform, so the geometric correction only corrects the aircraft heading angle and platform azimuth in the horizontal plane; the correction model is:

[Xe, Ye, Ze]^T = R^T(χ+θ) [Xc, Yc, Zc]^T    (3)

其中χ为方位角，θ为航向角，R^T(χ+θ)为变换矩阵，完成模型转换后再通过双线性插值重采样方法计算出像元在新的坐标系下的灰度值，从而生成新的图像矩阵，完成几何矫正；Here χ is the azimuth, θ the heading angle, and R^T(χ+θ) the transformation matrix; after the model conversion, bilinear-interpolation resampling computes each pixel's gray value in the new coordinate system, generating the new image matrix and completing the geometric correction;

S2:航拍图像重叠区域计算 S2: Overlap-region calculation for aerial images

1)假设无人机航线偏离正北的角度为θ，其航速为V，将速度分解为正北方向和正东方向上的两个分量，分别表示为V1和V2；1) Assume the UAV track deviates from true north by angle θ at speed V; decompose the speed into components V1 (true north) and V2 (true east);

2)连续拍摄的相邻两个曝光点的图像分别编号为Pic1和Pic2，间隔时间为t，根据POS航迹文件得到两张图片的中心点经纬度，分别记为LatA、LonA、LatB、LonB，则有：2) Number the images of two consecutive exposure points Pic1 and Pic2, with interval t; the POS track file gives the latitude and longitude of their center points, recorded as LatA, LonA and LatB, LonB; then:

C = sin(LatA)*sin(LatB)*cos(LonA-LonB) + cos(LatA)*cos(LatB)
L = R*Arccos(C)*Pi/180    (4)

其中R=6371.004千米，Pi取3.1415926，C为两张图片中心点之间经纬度夹角的余弦，L为计算出来的两点的实际距离；where R = 6371.004 km, Pi = 3.1415926, C is the cosine of the central angle between the two image centers, and L is the computed actual distance between the two points;

3)以Pic1作为基准，Pic2与Pic1之间的重叠区域表示为矩形区域S，将其重叠区域扩大定义到规则区域，将重叠区域顶点投影到x和y方向上则表示为：3) Taking Pic1 as the reference, the overlap between Pic2 and Pic1 is expressed as a rectangular region S; expanding the overlap to a regular region and projecting its vertices onto the x and y directions gives:

S=(W-P1)(H-P2)    (5)

其中W和H为矩形区域的宽和高，P1和P2为图中所标长度；where W and H are the width and height of the rectangular region, and P1 and P2 are the lengths marked in the figure;

通过计算重叠区域减少拼接的计算量，并且特征点的寻找面积越小，可能出现的误匹配概率就越小，提高检测的效率；Computing the overlap region reduces the stitching computation, and the smaller the feature-search area, the lower the probability of false matches, improving detection efficiency;
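The center-to-center distance of formula (4) is the spherical law of cosines; the claim's sin/cos grouping matches the standard form when the Lat terms are read as colatitudes (90° − latitude). A minimal sketch using the standard grouping with geographic latitudes, assuming the POS angles arrive in degrees:

```python
import math

EARTH_R_KM = 6371.004  # mean Earth radius used in the claim, km

def center_distance_km(lat_a, lon_a, lat_b, lon_b):
    """Great-circle distance between two image centers (degrees in, km out).

    Standard spherical law of cosines:
        cos(d) = sin(latA)sin(latB) + cos(latA)cos(latB)cos(lonA - lonB)
    Formula (4) in the claim is the same relation with colatitudes
    (90 - Lat) substituted, which swaps the sin/cos grouping.
    """
    pa, pb = math.radians(lat_a), math.radians(lat_b)
    dl = math.radians(lon_a - lon_b)
    c = math.sin(pa) * math.sin(pb) + math.cos(pa) * math.cos(pb) * math.cos(dl)
    c = max(-1.0, min(1.0, c))  # clamp against floating-point overshoot
    return EARTH_R_KM * math.acos(c)
```

As a sanity check, two equatorial points 90° of longitude apart are one quarter circumference apart, about 10 007 km.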
S3:图像的地理坐标计算:对于几何纠正过的航拍图像,航拍图像的每点像素坐标都有对应的地理坐标,无人机飞行时候的携带的POS信息记录了当前曝光点图像的飞机的经纬度以及飞机的高度、翻滚角、俯仰角、航向角信息为计算地理坐标所用,图像的地理坐标的计算过程分为以下几步:S3: Calculation of geographic coordinates of the image: For the geometrically corrected aerial image, each pixel coordinate of the aerial image has a corresponding geographic coordinate, and the POS information carried by the drone during flight records the latitude and longitude of the aircraft that is currently exposed to the image And the altitude, roll angle, pitch angle, and heading angle information of the aircraft are used to calculate the geographic coordinates. The calculation process of the geographic coordinates of the image is divided into the following steps: 1)计算地面分辨率,计算公式如下:1) Calculate the ground resolution, the calculation formula is as follows: <mrow> <mi>G</mi> <mi>S</mi> <mi>D</mi> <mo>=</mo> <mfrac> <mrow> <mi>H</mi> <mo>*</mo> <mi>P</mi> </mrow> <mi>f</mi> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>6</mn> <mo>)</mo> </mrow> </mrow> <mrow><mi>G</mi><mi>S</mi><mi>D</mi><mo>=</mo><mfrac><mrow><mi>H</mi><mo>*</mo><mi>P</mi></mrow><mi>f</mi></mfrac><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>6</mn><mo>)</mo></mrow></mrow> 其中GSD表示地面分辨率,m;f为镜头焦距,mm;P为成像传感器的像元尺寸,mm;H为无人机对应的航高,m;Among them, GSD represents the ground resolution, m; f is the focal length of the lens, mm; P is the pixel size of the imaging sensor, mm; H is the flight height corresponding to the UAV, m; 2)计算图像对角线实际地面距离:2) Calculate the actual ground distance of the image diagonal: <mrow> <mi>L</mi> <mo>=</mo> <msqrt> <mrow> <msup> <mi>w</mi> <mn>2</mn> </msup> <mo>+</mo> <msup> <mi>h</mi> <mn>2</mn> </msup> </mrow> </msqrt> <mo>*</mo> <mi>G</mi> <mi>S</mi> <mi>D</mi> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>7</mn> <mo>)</mo> </mrow> </mrow> 
<mrow><mi>L</mi><mo>=</mo><msqrt><mrow><msup><mi>w</mi><mn>2</mn></msup><mo>+</mo><msup><mi>h</mi><mn>2</mn></msup></mrow></msqrt><mo>*</mo><mi>G</mi><mi>S</mi><mi>D</mi><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>7</mn><mo>)</mo></mrow></mrow> 其中w和h为图像的宽度和高度,L为图像对角线之间的实际距离;Where w and h are the width and height of the image, and L is the actual distance between the diagonals of the image; 3)计算图像四角点地理坐标,根据图像中心点经纬度及另一点相对中心点的距离及方向角,求得以图像中点为圆心,半径为L/2的对应四角点地理坐标;具体计算公式为:3) Calculate the geographical coordinates of the four corners of the image, according to the longitude and latitude of the center point of the image and the distance and direction angle of another point relative to the center point, obtain the corresponding four corners geographical coordinates with the center point of the image as the center and a radius of L/2; the specific calculation formula is : <mrow> <msub> <mi>Lon</mi> <mi>i</mi> </msub> <mo>=</mo> <mfrac> <mrow> <mo>(</mo> <mfrac> <mrow> <mi>L</mi> <mi>sin</mi> <mrow> <mo>(</mo> <msub> <mi>&amp;theta;</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> </mrow> <mrow> <mn>2</mn> <mrow> <mo>(</mo> <msub> <mi>R</mi> <mi>j</mi> </msub> <mo>+</mo> <mo>(</mo> <msub> <mi>R</mi> <mi>c</mi> </msub> <mo>-</mo> <msub> <mi>R</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> <mo>*</mo> <mrow> <mo>(</mo> <mn>90</mn> <mo>-</mo> <msub> <mi>Lat</mi> <mi>a</mi> </msub> <mo>)</mo> </mrow> <mo>/</mo> <mn>90</mn> <mo>)</mo> <mo>*</mo> <mi>cos</mi> <mrow> <mo>(</mo> <mi>L</mi> <mi>a</mi> <mi>t</mi> <mi>a</mi> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>+</mo> <msub> <mi>Lon</mi> <mi>a</mi> </msub> <mo>)</mo> <mo>*</mo> <mn>180</mn> </mrow> <mrow> <mi>p</mi> <mi>i</mi> </mrow> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>8</mn> <mo>)</mo> </mrow> </mrow> 
<mrow><msub><mi>Lon</mi><mi>i</mi></msub><mo>=</mo><mfrac><mrow><mo>(</mo><mfrac><mrow><mi>L</mi><mi>sin</mi><mrow><mo>(</mo><msub><mi>&amp;theta;</mi><mi>i</mi></msub><mo>)</mo></mrow></mrow><mrow><mn>2</mn><mrow><mo>(</mo><msub><mi>R</mi><mi>j</mi></msub><mo>+</mo><mo>(</mo><msub><mi>R</mi><mi>c</mi></msub><mo>-</mo><msub><mi>R</mi><mi>j</mi></msub><mo>)</mo></mrow><mo>*</mo><mrow><mo>(</mo><mn>90</mn><mo>-</mo><msub><mi>Lat</mi><mi>a</mi></msub><mo>)</mo></mrow><mo>/</mo><mn>90</mn><mo>)</mo><mo>*</mo><mi>cos</mi><mrow><mo>(</mo><mi>L</mi><mi>a</mi><mi>t</mi><mi>a</mi><mo>)</mo></mrow></mrow></mfrac><mo>+</mo><msub><mi>Lon</mi><mi>a</mi></msub><mo>)</mo><mo>*</mo><mn>180</mn></mrow><mrow><mi>p</mi><mi>i</mi></mrow></mfrac><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>8</mn><mo>)</mo></mrow></mrow> <mrow> <msub> <mi>Lat</mi> <mi>i</mi> </msub> <mo>=</mo> <mfrac> <mrow> <mo>(</mo> <mfrac> <mrow> <mi>L</mi> <mi> </mi> <mi>c</mi> <mi>o</mi> <mi>s</mi> <mrow> <mo>(</mo> <msub> <mi>&amp;theta;</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> </mrow> <mrow> <mn>2</mn> <mrow> <mo>(</mo> <msub> <mi>R</mi> <mi>j</mi> </msub> <mo>+</mo> <mo>(</mo> <msub> <mi>R</mi> <mi>c</mi> </msub> <mo>-</mo> <msub> <mi>R</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> <mo>*</mo> <mrow> <mo>(</mo> <mn>90</mn> <mo>-</mo> <msub> <mi>Lat</mi> <mi>a</mi> </msub> <mo>)</mo> </mrow> <mo>/</mo> <mn>90</mn> <mo>)</mo> </mrow> </mfrac> <mo>+</mo> <msub> <mi>Lat</mi> <mi>a</mi> </msub> <mo>)</mo> <mo>*</mo> <mn>180</mn> </mrow> <mrow> <mi>p</mi> <mi>i</mi> </mrow> </mfrac> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>9</mn> <mo>)</mo> </mrow> </mrow> 
<mrow><msub><mi>Lat</mi><mi>i</mi></msub><mo>=</mo><mfrac><mrow><mo>(</mo><mfrac><mrow><mi>L</mi><mi></mi><mi>c</mi><mi>o</mi><mi>s</mi><mrow><mo>(</mo><msub><mi>&amp;theta;</mi><mi>i</mi></msub><mo>)</mo></mrow></mrow><mrow><mn>2</mn><mrow><mo>(</mo><msub><mi>R</mi><mi>j</mi></msub><mo>+</mo><mo>(</mo><msub><mi>R</mi><mi>c</mi></msub><mo>-</mo><msub><mi>R</mi><mi>j</mi></msub><mo>)</mo></mrow><mo>*</mo><mrow><mo>(</mo><mn>90</mn><mo>-</mo><msub><mi>Lat</mi><mi>a</mi></msub><mo>)</mo></mrow><mo>/</mo><mn>90</mn><mo>)</mo></mrow></mfrac><mo>+</mo><msub><mi>Lat</mi><mi>a</mi></msub><mo>)</mo><mo>*</mo><mn>180</mn></mrow><mrow><mi>p</mi><mi>i</mi></mrow></mfrac><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>9</mn><mo>)</mo></mrow></mrow> 其中θi∈(0,2pi),LonaLata为图像中心点的经纬度,Ri为赤道半径取6378137m,Rj为极半径取6356725m,pi取3.1415925;Where θ i ∈ (0,2pi), Lon a Lat a is the latitude and longitude of the center point of the image, Ri is the equatorial radius, which is 6378137m, Rj is the polar radius, which is 6356725m, and pi is 3.1415925; 4)地理坐标转换到空间之间坐标系之间的转换,转换公式如下:4) Geographical coordinate conversion to conversion between coordinate systems between spaces, the conversion formula is as follows: <mrow> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msub> <mi>X</mi> <mi>s</mi> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>Y</mi> <mi>s</mi> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>Z</mi> <mi>s</mi> </msub> </mtd> </mtr> </mtable> </mfenced> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mo>(</mo> <mi>N</mi> <mo>+</mo> <mi>H</mi> <mo>)</mo> <mi>c</mi> <mi>o</mi> <mi>s</mi> <mo>(</mo> <mi>L</mi> <mi>o</mi> <mi>n</mi> <mo>)</mo> <mi>c</mi> <mi>o</mi> <mi>s</mi> <mo>(</mo> <mi>L</mi> <mi>a</mi> <mi>t</mi> <mo>)</mo> </mtd> </mtr> <mtr> <mtd> <mo>(</mo> <mi>N</mi> <mo>+</mo> <mi>H</mi> <mo>)</mo> <mi>c</mi> <mi>o</mi> <mi>s</mi> <mo>(</mo> <mi>L</mi> <mi>o</mi> <mi>n</mi> <mo>)</mo> <mi>sin</mi> <mo>(</mo> <mi>L</mi> <mi>a</mi> <mi>t</mi> <mo>)</mo> </mtd> 
</mtr> <mtr> <mtd> <mo>&amp;lsqb;</mo> <mi>N</mi> <mo>(</mo> <mn>1</mn> <mo>-</mo> <msup> <mi>e</mi> <mn>2</mn> </msup> <mo>)</mo> <mo>+</mo> <mi>H</mi> <mo>&amp;rsqb;</mo> <mi>sin</mi> <mi> </mi> <mi>L</mi> <mi>o</mi> <mi>n</mi> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>10</mn> <mo>)</mo> </mrow> </mrow> <mrow><mfenced open = "[" close = "]"><mtable><mtr><mtd><msub><mi>X</mi><mi>s</mi></msub></mrow>mtd></mtr><mtr><mtd><msub><mi>Y</mi><mi>s</mi></msub></mtd></mtr><mtr><mtd><msub><mi>Z</mi><mi>s</mi></msub></mtd></mtr></mtable></mfenced><mo>=</mo><mfenced open = "[" close = "]"><mtable><mtr><mtd><mo>(</mo><mi>N</mi><mo>+</mo><mi>H</mi><mo>)</mo><mi>c</mi><mi>o</mi><mi>s</mi><mo>(</mo><mi>L</mi><mi>o</mi><mi>n</mi><mo>)</mo><mi>c</mi><mi>o</mi><mi>s</mi><mo>(</mo><mi>L</mi><mi>a</mi><mi>t</mi><mo>)</mo></mtd></mtr><mtr><mtd><mo>(</mo><mi>N</mi><mo>+</mo><mi>H</mi><mo>)</mo><mi>c</mi><mi>o</mi><mi>s</mi><mo>(</mo><mi>L</mi><mi>o</mi><mi>n</mi><mo>)</mo><mi>sin</mi><mo>(</mo><mi>L</mi><mi>a</mi><mi>t</mi><mo>)</mo>mo></mtd></mtr><mtr><mtd><mo>&amp;lsqb;</mo><mi>N</mi><mo>(</mo><mn>1</mn><mo>-</mo><msup><mi>e</mi><mn>2</mn></msup><mo>)</mo><mo>+</mo><mi>H</mi><mo>&amp;rsqb;</mo><mi>sin</mi><mi></mi><mi>L</mi><mi>o</mi><mi>n</mi></mtd></mtr></mtable></mfenced><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>10</mn><mo>)</mo></mrow></mrow> 其中N为曲率半径,Lon、Lat、H为图像上任一点的经度、纬度和高度,将图像转换到空间坐标系,至此得到完整的图像坐标;Among them, N is the radius of curvature, Lon, Lat, and H are the longitude, latitude and height of any point on the image, and the image is converted to the space coordinate system, so far the complete image coordinates are obtained; S4:图像的SURF特征提取:S4: SURF feature extraction of image: SURF采用近似Hessian矩阵检测特征点,并使用积分图像进行卷积运算,减少运算从而提高特征提取速度;SURF描述子包含两个部分:检测特征点和计算特征;具体实现分为以下几步:SURF uses an approximate Hessian matrix to detect feature points, and uses integral images for convolution operations, reducing operations and improving feature 
extraction speed; SURF descriptors include two parts: detecting feature points and calculating features; the specific implementation is divided into the following steps: 1)构建Hessian矩阵,构造尺度空间;1) Construct the Hessian matrix and construct the scale space; 假设图像上某点为X(x,y),在σ尺度下的矩阵M定义为:Assuming that a point on the image is X(x, y), the matrix M at the σ scale is defined as: <mrow> <mi>M</mi> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>&amp;sigma;</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>x</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>&amp;sigma;</mi> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mrow> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>y</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>&amp;sigma;</mi> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>L</mi> <mrow> <mi>x</mi> <mi>y</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>&amp;sigma;</mi> <mo>)</mo> </mrow> </mrow> </mtd> <mtd> <mrow> <msub> <mi>L</mi> <mrow> <mi>y</mi> <mi>y</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>&amp;sigma;</mi> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>11</mn> <mo>)</mo> </mrow> </mrow> <mrow><mi>M</mi><mrow><mo>(</mo><mi>x</mi><mo>,</mo><mi>&amp;sigma;</mi><mo>)</mo></mrow><mo>=</mo><mfenced open = "[" close = 
"]"><mtable><mtr><mtd><mrow><msub><mi>L</mi><mrow><mi>x</mi><mi>x</mi></mrow></msub><mrow><mo>(</mo><mi>x</mi><mo>,</mo><mi>&amp;sigma;</mi><mo>)</mo></mrow></mrow></mtd><mtd><mrow><msub><mi>L</mi><mrow><mi>x</mi><mi>y</mi></mrow></msub><mrow><mo>(</mo><mi>x</mi><mo>,</mo><mi>&amp;sigma;</mi><mo>)</mo></mrow></mrow></mtd></mtr><mtr><mtd><mrow><msub><mi>L</mi><mrow><mi>x</mi><mi>y</mi></mrow></msub><mrow><mo>(</mo><mi>x</mi><mo>,</mo><mi>&amp;sigma;</mi><mo>)</mo></mrow></mrow></mtd><mtd><mrow><msub><mi>L</mi><mrow><mi>y</mi><mi>y</mi></mrow></msub><mrow><mo>(</mo><mi>x</mi><mo>,</mo><mi>&amp;sigma;</mi><mo>)</mo></mrow></mrow></mtd></mtr></mtable></mfenced><mo>-</mo><mo>-</mo><mo>-</mo><mrow><mo>(</mo><mn>11</mn><mo>)</mo></mrow></mrow> 其中Lxx是高斯滤波二阶导同X卷积的结果,Lxy等的含义类似,σ为空间尺度;当Hessian矩阵的的判别式取得局部极大值时,认为定位到关键点的位置;Among them, Lxx is the result of Gaussian filter second-order derivative and X convolution, Lxy , etc. have similar meanings, and σ is the spatial scale; when the discriminant of the Hessian matrix obtains a local maximum value, it is considered to be located at the position of the key point; 2)检测特征点2) Detect feature points 在得到的尺度空间中,将经过Hessian矩阵处理的每个像素点与二维图像空间和尺度空间邻域内的26个点进行比较,初步定位出关键点,再经过滤去除能量弱的关键点以及错误定位的关键点,筛选出最终的稳定的特征点;In the obtained scale space, each pixel processed by the Hessian matrix is compared with 26 points in the two-dimensional image space and the scale space neighborhood, and the key points are initially located, and then the key points with weak energy are removed by filtering and The key points of wrong positioning are screened out to the final stable feature points; 3)确定特征点主方向3) Determine the main direction of the feature point 以特征点为圆心,以6σ为半径,求XY方向上的Haar小波响应,统计60度扇形内所有点的水平haar小波特征和垂直haar小波特征总和,并设置haar小波的尺寸变长为4s,使得每个扇形都得到了对应值;然后将60度扇形以一定间隔进行旋转,最后将锁定最大值扇形的方向作为该特征点的主方向;With the feature point as the center and 6σ as the radius, find the Haar wavelet response in the XY direction, count the sum of the horizontal haar wavelet features and vertical 
haar wavelet features of all points in the 60-degree sector, and set the size of the haar wavelet to 4s. So that each sector has a corresponding value; then rotate the 60-degree sector at a certain interval, and finally lock the direction of the maximum sector as the main direction of the feature point; 4)计算特征描述符4) Calculate feature descriptors 在特征点周围取一个正方形框,然后把该框分为16个子区域,每个子区域统计25个像素的水平方向和垂直方向的haar小波特征,这里的水平和垂直方向都是相对主方向而言的,所以每个特征点就是16*4=64维的向量;Take a square frame around the feature point, and then divide the frame into 16 sub-regions, each sub-region counts the haar wavelet features of 25 pixels in the horizontal direction and vertical direction, where the horizontal and vertical directions are relative to the main direction , so each feature point is a 16*4=64-dimensional vector; S5:利用相邻图像特征匹配点对修正地理坐标S5: Use adjacent image features to match point pairs to correct geographic coordinates 由于POS精度低以及几何校正中存在一定的误差,计算出的坐标映射关系有一定的误差,此时利用特征匹配算法去修正图像的地理坐标,具体过程如下:Due to the low precision of POS and certain errors in geometric correction, the calculated coordinate mapping relationship has certain errors. At this time, the feature matching algorithm is used to correct the geographic coordinates of the image. The specific process is as follows: 假设图像1的地理坐标为P1(x1,y1),图像2的地理坐标为P2(x2,y2),提取图像特征匹、配对后则得到同一个点在不同两张图像间的像素坐标位置,由此得到两张图像中同一个目标点的经纬度坐标(Lon1,Lat1)(Lon2,Lat2),最后以第一张图像地理坐标为基准,求得第二张图像中目标点与第一张图像之间的偏移量,公式如下:Assuming that the geographic coordinates of image 1 are P1 (x1, y1), and the geographic coordinates of image 2 are P2 (x2, y2), after extracting image feature matching and pairing, the pixel coordinate position of the same point between two different images can be obtained. Thus, the latitude and longitude coordinates (Lon1, Lat1) (Lon2, Lat2) of the same target point in the two images are obtained, and finally, based on the geographic coordinates of the first image, the relationship between the target point in the second image and the first image is obtained. 
The offset formula is:

x = Lon2 - Lon1
y = Lat2 - Lat1        (12)

The obtained offset is then used to correct the geographic coordinate P2 of image 2:

x2 = x2 + x
y2 = y2 + y        (13)

The geographic coordinates are then projected into the space Cartesian coordinate system, which completes the precise registration of the images and yields the coordinate-corrected images;

S6: Image fusion strategy

After coordinate correction, visible seams remain between adjacent images, so a blending strategy is needed to smooth the large color-transition differences across the stitching seams and make the mosaic smoother and more natural. The fade-in/fade-out adaptive-weight fusion proceeds as follows:

Let I1, I2 and I denote image 1 and image 2 before fusion and the fused image, respectively; the fusion is completed by formula (11), where W is the total width of the overlap region of the two images and w is the horizontal distance between the left edge of the overlap region and the current pixel.
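The fade-in/fade-out blend described above can be sketched as follows. The patent's formula (11) is not reproduced in this excerpt, so the standard weights d1 = 1 - w/W and d2 = w/W are assumed; `fade_blend` and its signature are illustrative, and the sketch handles single-channel images only:

```python
import numpy as np

def fade_blend(img1, img2, overlap_w):
    """Fade-in/fade-out fusion of two horizontally adjacent single-channel
    images whose last/first `overlap_w` columns cover the same ground area.
    Weights assumed as d1 = 1 - w/W, d2 = w/W (formula (11) is not shown
    in this excerpt)."""
    left = img1[:, :-overlap_w]               # exclusive part of image 1
    right = img2[:, overlap_w:]               # exclusive part of image 2
    o1 = img1[:, -overlap_w:].astype(float)   # overlap as seen by image 1
    o2 = img2[:, :overlap_w].astype(float)    # overlap as seen by image 2
    w = np.arange(overlap_w, dtype=float)     # distance from left edge of overlap
    d2 = w / overlap_w                        # image 2's weight grows to the right
    blended = (1.0 - d2) * o1 + d2 * o2       # broadcasts over rows
    return np.concatenate([left, blended.astype(img1.dtype), right], axis=1)
```

Because image 2's weight rises linearly from 0 at the left edge of the overlap toward 1 at its right edge, the color transition is spread over the whole overlap width instead of jumping at the seam.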
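The coordinate-correction step, formulas (12) and (13), amounts to a single translation of image 2's geo-reference derived from one matched point pair. A minimal sketch, with illustrative argument names and the sign convention taken exactly as written in the text:

```python
def correct_geo_coords(target_in_img1, target_in_img2, p2):
    """Correct the geographic coordinate P2 of image 2 using one matched
    point pair, per formulas (12) and (13) of the text.

    target_in_img1, target_in_img2 -- (lon, lat) of the same ground point
    as geo-referenced through image 1 and image 2; p2 -- the (lon, lat)
    coordinate of image 2 computed from the (noisy) POS data.
    Names are illustrative, not taken from the patent.
    """
    lon1, lat1 = target_in_img1
    lon2, lat2 = target_in_img2
    # Formula (12): offset between the two geo-referencings of the point.
    x = lon2 - lon1
    y = lat2 - lat1
    # Formula (13): apply the offset to image 2's coordinate
    # (additive sign convention as written in the text).
    return (p2[0] + x, p2[1] + y)
```

In practice the offset would be averaged over many matched pairs rather than taken from a single point, but a single pair already illustrates the mapping.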
CN201711132452.5A 2017-11-15 2017-11-15 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features Pending CN107808362A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711132452.5A CN107808362A (en) 2017-11-15 2017-11-15 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711132452.5A CN107808362A (en) 2017-11-15 2017-11-15 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features

Publications (1)

Publication Number Publication Date
CN107808362A true CN107808362A (en) 2018-03-16

Family

ID=61580456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711132452.5A Pending CN107808362A (en) 2017-11-15 2017-11-15 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features

Country Status (1)

Country Link
CN (1) CN107808362A (en)

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965855A (en) * 2018-07-12 2018-12-07 深圳超多维科技有限公司 A kind of stereoprojection method, apparatus, equipment and storage medium
CN109238240A (en) * 2018-10-22 2019-01-18 武汉大势智慧科技有限公司 A kind of unmanned plane oblique photograph method that taking landform into account and its camera chain
CN109325913A (en) * 2018-09-05 2019-02-12 北京悦图遥感科技发展有限公司 Unmanned plane image split-joint method and device
CN109461121A (en) * 2018-11-06 2019-03-12 中国林业科学研究院资源信息研究所 A kind of image co-registration joining method based on parallel algorithms
CN109472752A (en) * 2018-10-30 2019-03-15 北京工业大学 Aerial image-based multi-exposure fusion system
CN109544455A (en) * 2018-11-22 2019-03-29 重庆市勘测院 A kind of overlength high-definition live-action long paper seamless integration method
CN109712071A (en) * 2018-12-14 2019-05-03 电子科技大学 UAV image stitching and positioning method based on track constraints
CN109782786A (en) * 2019-02-12 2019-05-21 上海戴世智能科技有限公司 A kind of localization method and unmanned plane based on image procossing
CN109858527A (en) * 2019-01-09 2019-06-07 北京全路通信信号研究设计院集团有限公司 Image fusion method
CN110033411A (en) * 2019-04-12 2019-07-19 哈尔滨工业大学 The efficient joining method of highway construction scene panoramic picture based on unmanned plane
CN110084743A (en) * 2019-01-25 2019-08-02 电子科技大学 Image mosaic and localization method based on more air strips starting track constraint
CN110097498A (en) * 2019-01-25 2019-08-06 电子科技大学 More air strips image mosaics and localization method based on unmanned aerial vehicle flight path constraint
CN110111250A (en) * 2019-04-11 2019-08-09 中国地质大学(武汉) A kind of automatic panorama unmanned plane image split-joint method and device of robust
CN110223233A (en) * 2019-06-11 2019-09-10 西北工业大学 A kind of unmanned plane based on image mosaic builds drawing method
CN110473236A (en) * 2019-06-25 2019-11-19 上海圭目机器人有限公司 A kind of measurement method of the offset position of road face image detection camera
CN110490830A (en) * 2019-08-22 2019-11-22 中国农业科学院农业信息研究所 A kind of agricultural remote sensing method for correcting image and system
CN110596740A (en) * 2019-09-29 2019-12-20 中国矿业大学(北京) Rapid positioning method suitable for geological exploration
CN110738599A (en) * 2019-10-14 2020-01-31 北京百度网讯科技有限公司 Image splicing method and device, electronic equipment and storage medium
CN110910432A (en) * 2019-12-09 2020-03-24 珠海大横琴科技发展有限公司 Remote sensing image matching method and device, electronic equipment and readable storage medium
CN110992261A (en) * 2019-11-15 2020-04-10 国网福建省电力有限公司漳州供电公司 A method for fast stitching of UAV images of transmission lines
CN111383205A (en) * 2020-03-11 2020-07-07 西安应用光学研究所 Image fusion positioning method based on feature points and three-dimensional model
CN111401385A (en) * 2020-03-19 2020-07-10 成都理工大学 Similarity calculation method for image local topological structure feature descriptors
CN111510684A (en) * 2020-04-24 2020-08-07 安徽比特文化传媒有限公司 VR auxiliary aerial photography method
CN111507901A (en) * 2020-04-15 2020-08-07 中国电子科技集团公司第五十四研究所 Aerial image splicing and positioning method based on aerial belt GPS and scale invariant constraint
CN111583312A (en) * 2019-12-26 2020-08-25 珠海大横琴科技发展有限公司 Method and device for accurately matching remote sensing images, electronic equipment and storage medium
CN111583110A (en) * 2020-04-24 2020-08-25 华南理工大学 Splicing method of aerial images
CN111612828A (en) * 2019-12-27 2020-09-01 珠海大横琴科技发展有限公司 Remote sensing image correction matching method and device, electronic equipment and storage medium
CN111639662A (en) * 2019-12-23 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image bidirectional matching method and device, electronic equipment and storage medium
CN111640142A (en) * 2019-12-25 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image multi-feature matching method and device and electronic equipment
CN111652915A (en) * 2019-12-09 2020-09-11 珠海大横琴科技发展有限公司 Remote sensing image overlapping area calculation method and device and electronic equipment
CN111667405A (en) * 2019-03-06 2020-09-15 西安邮电大学 Image splicing method and device
CN111681190A (en) * 2020-06-18 2020-09-18 深圳天海宸光科技有限公司 High-precision coordinate mapping method for panoramic video
CN112163995A (en) * 2020-09-07 2021-01-01 中山大学 Splicing generation method and device for oversized aerial photographing strip images
CN112184703A (en) * 2020-10-27 2021-01-05 广东技术师范大学 Corn ear period unmanned aerial vehicle image alignment method and system based on space-time backtracking
CN112215304A (en) * 2020-11-05 2021-01-12 珠海大横琴科技发展有限公司 Gray level image matching method and device for geographic image splicing
CN112288634A (en) * 2020-10-29 2021-01-29 江苏理工学院 Splicing method and device for aerial images of multiple unmanned aerial vehicles
CN112414375A (en) * 2020-10-08 2021-02-26 武汉大学 A UAV image attitude recovery method for flood disaster emergency quick puzzle production
CN112799430A (en) * 2021-01-13 2021-05-14 东南大学 Intelligent acquisition method of road image based on programmable unmanned aerial vehicle
CN112837378A (en) * 2021-02-03 2021-05-25 江南大学 An external dynamic calibration and mapping method of aerial camera attitude based on multi-UAV formation
CN113012047A (en) * 2021-03-26 2021-06-22 广州市赋安电子科技有限公司 Dynamic camera coordinate mapping establishing method and device and readable storage medium
WO2021120389A1 (en) * 2019-12-19 2021-06-24 广州启量信息科技有限公司 Coordinate transformation method and apparatus for aerial panoramic roaming data
CN113099266A (en) * 2021-04-02 2021-07-09 云从科技集团股份有限公司 Video fusion method, system, medium and device based on unmanned aerial vehicle POS data
CN113191946A (en) * 2021-03-02 2021-07-30 中国人民解放军空军航空大学 Aviation three-step area array image splicing method
CN113570720A (en) * 2021-08-04 2021-10-29 西安万飞控制科技有限公司 Gis technology-based real-time display method and system for unmanned aerial vehicle video petroleum pipeline
CN113706723A (en) * 2021-08-23 2021-11-26 维沃移动通信有限公司 Image processing method and device
CN113706389A (en) * 2021-09-30 2021-11-26 中国电子科技集团公司第五十四研究所 Image splicing method based on POS correction
CN114066733A (en) * 2021-11-22 2022-02-18 杭州东信北邮信息技术有限公司 A UAV image stitching method based on image convolution
CN114202583A (en) * 2021-12-10 2022-03-18 中国科学院空间应用工程与技术中心 A visual positioning method and system for unmanned aerial vehicles
CN114241372A (en) * 2021-12-09 2022-03-25 江苏和正特种装备有限公司 Target identification method applied to sector-scan splicing
CN114445274A (en) * 2020-11-06 2022-05-06 中煤航测遥感集团有限公司 Image splicing method and device, electronic equipment and storage medium
CN114757846A (en) * 2022-04-25 2022-07-15 浙江大华技术股份有限公司 Image correction method and device, storage medium and electronic device
CN114897966A (en) * 2022-04-13 2022-08-12 深圳市路远智能装备有限公司 Visual identification method for large element
CN114936971A (en) * 2022-06-08 2022-08-23 浙江理工大学 A water-oriented UAV remote sensing multispectral image stitching method and system
CN114999335A (en) * 2022-06-10 2022-09-02 长春希达电子技术有限公司 LED spliced screen seam repairing method based on ultra-wide band and one-dimensional envelope peak
CN115393196A (en) * 2022-10-25 2022-11-25 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN115423681A (en) * 2022-08-12 2022-12-02 西南交通大学 Unmanned aerial vehicle image splicing method based on IB-SURF and neighborhood matching method
CN115601417A (en) * 2022-09-30 2023-01-13 中星电子股份有限公司(Cn) Azimuth information generation method, device, electronic device and medium
CN115861050A (en) * 2022-08-29 2023-03-28 如你所视(北京)科技有限公司 Method, device, device and storage medium for generating panoramic images
US20230145875A1 (en) * 2021-11-02 2023-05-11 Samsung Electronics Co., Ltd. Apparatus for accelerating simultaneous localization and mapping and electronic device including the same
CN117036666A (en) * 2023-06-14 2023-11-10 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117291980A (en) * 2023-10-09 2023-12-26 宁波博登智能科技有限公司 Single unmanned aerial vehicle image pixel positioning method based on deep learning
CN119418030A (en) * 2025-01-09 2025-02-11 上海博珖机器人有限公司 Image processing method, device, storage medium, program product and electronic device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732482A (en) * 2015-03-30 2015-06-24 中国人民解放军63655部队 Multi-resolution image stitching method based on control points
CN105956058A (en) * 2016-04-27 2016-09-21 东南大学 Method for quickly discovering changed land by adopting unmanned aerial vehicle remote sensing images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王亚洲等 (Wang Yazhou et al.): "基于无人机POS信息的拼接方法" (Stitching method based on UAV POS information), 《地理国情监测云平台》 (Geographic Conditions Monitoring Cloud Platform) *

Cited By (90)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965855A (en) * 2018-07-12 2018-12-07 深圳超多维科技有限公司 A kind of stereoprojection method, apparatus, equipment and storage medium
CN109325913B (en) * 2018-09-05 2022-12-16 北京悦图遥感科技发展有限公司 Unmanned aerial vehicle image splicing method and device
CN109325913A (en) * 2018-09-05 2019-02-12 北京悦图遥感科技发展有限公司 Unmanned plane image split-joint method and device
CN109238240A (en) * 2018-10-22 2019-01-18 武汉大势智慧科技有限公司 A kind of unmanned plane oblique photograph method that taking landform into account and its camera chain
CN109238240B (en) * 2018-10-22 2021-01-08 武汉大势智慧科技有限公司 Unmanned aerial vehicle oblique photography method considering terrain and photography system thereof
CN109472752A (en) * 2018-10-30 2019-03-15 北京工业大学 Aerial image-based multi-exposure fusion system
CN109461121A (en) * 2018-11-06 2019-03-12 中国林业科学研究院资源信息研究所 A kind of image co-registration joining method based on parallel algorithms
CN109461121B (en) * 2018-11-06 2022-11-04 中国林业科学研究院资源信息研究所 Image fusion splicing method based on parallel computing algorithm
CN109544455A (en) * 2018-11-22 2019-03-29 重庆市勘测院 A kind of overlength high-definition live-action long paper seamless integration method
CN109544455B (en) * 2018-11-22 2023-05-02 重庆市勘测院 A method for seamless fusion of ultra-long high-definition real-scene scrolls
CN109712071A (en) * 2018-12-14 2019-05-03 电子科技大学 UAV image stitching and positioning method based on track constraints
CN109712071B (en) * 2018-12-14 2022-11-29 电子科技大学 UAV image stitching and positioning method based on track constraints
CN109858527A (en) * 2019-01-09 2019-06-07 北京全路通信信号研究设计院集团有限公司 Image fusion method
CN109858527B (en) * 2019-01-09 2021-08-24 北京全路通信信号研究设计院集团有限公司 Image fusion method
CN110084743B (en) * 2019-01-25 2023-04-14 电子科技大学 Image mosaic and positioning method based on multi-strip initial track constraints
CN110097498A (en) * 2019-01-25 2019-08-06 电子科技大学 More air strips image mosaics and localization method based on unmanned aerial vehicle flight path constraint
CN110084743A (en) * 2019-01-25 2019-08-02 电子科技大学 Image mosaic and localization method based on more air strips starting track constraint
CN109782786A (en) * 2019-02-12 2019-05-21 上海戴世智能科技有限公司 A kind of localization method and unmanned plane based on image procossing
CN109782786B (en) * 2019-02-12 2021-09-28 上海戴世智能科技有限公司 Positioning method based on image processing and unmanned aerial vehicle
CN111667405A (en) * 2019-03-06 2020-09-15 西安邮电大学 Image splicing method and device
CN110111250A (en) * 2019-04-11 2019-08-09 中国地质大学(武汉) A kind of automatic panorama unmanned plane image split-joint method and device of robust
CN110033411A (en) * 2019-04-12 2019-07-19 哈尔滨工业大学 The efficient joining method of highway construction scene panoramic picture based on unmanned plane
CN110223233B (en) * 2019-06-11 2022-04-05 西北工业大学 Unmanned aerial vehicle aerial photography image building method based on image splicing
CN110223233A (en) * 2019-06-11 2019-09-10 西北工业大学 A kind of unmanned plane based on image mosaic builds drawing method
CN110473236A (en) * 2019-06-25 2019-11-19 上海圭目机器人有限公司 A kind of measurement method of the offset position of road face image detection camera
CN110473236B (en) * 2019-06-25 2022-03-15 上海圭目机器人有限公司 Method for measuring offset position of camera for road surface image detection
CN110490830B (en) * 2019-08-22 2021-09-24 中国农业科学院农业信息研究所 A kind of agricultural remote sensing image correction method and system
CN110490830A (en) * 2019-08-22 2019-11-22 中国农业科学院农业信息研究所 A kind of agricultural remote sensing method for correcting image and system
CN110596740A (en) * 2019-09-29 2019-12-20 中国矿业大学(北京) Rapid positioning method suitable for geological exploration
CN110738599B (en) * 2019-10-14 2023-04-25 北京百度网讯科技有限公司 Image stitching method and device, electronic equipment and storage medium
CN110738599A (en) * 2019-10-14 2020-01-31 北京百度网讯科技有限公司 Image splicing method and device, electronic equipment and storage medium
CN110992261A (en) * 2019-11-15 2020-04-10 国网福建省电力有限公司漳州供电公司 A method for fast stitching of UAV images of transmission lines
CN110910432A (en) * 2019-12-09 2020-03-24 珠海大横琴科技发展有限公司 Remote sensing image matching method and device, electronic equipment and readable storage medium
CN111652915A (en) * 2019-12-09 2020-09-11 珠海大横琴科技发展有限公司 Remote sensing image overlapping area calculation method and device and electronic equipment
WO2021120389A1 (en) * 2019-12-19 2021-06-24 广州启量信息科技有限公司 Coordinate transformation method and apparatus for aerial panoramic roaming data
CN111639662A (en) * 2019-12-23 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image bidirectional matching method and device, electronic equipment and storage medium
CN111640142A (en) * 2019-12-25 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image multi-feature matching method and device and electronic equipment
CN111583312A (en) * 2019-12-26 2020-08-25 珠海大横琴科技发展有限公司 Method and device for accurately matching remote sensing images, electronic equipment and storage medium
CN111612828A (en) * 2019-12-27 2020-09-01 珠海大横琴科技发展有限公司 Remote sensing image correction matching method and device, electronic equipment and storage medium
CN111383205A (en) * 2020-03-11 2020-07-07 西安应用光学研究所 Image fusion positioning method based on feature points and three-dimensional model
CN111383205B (en) * 2020-03-11 2023-03-24 西安应用光学研究所 Image fusion positioning method based on feature points and three-dimensional model
CN111401385A (en) * 2020-03-19 2020-07-10 成都理工大学 Similarity calculation method for image local topological structure feature descriptors
CN111401385B (en) * 2020-03-19 2022-06-17 成都理工大学 Similarity calculation method for image local topological structure feature descriptors
CN111507901A (en) * 2020-04-15 2020-08-07 中国电子科技集团公司第五十四研究所 Aerial image splicing and positioning method based on aerial belt GPS and scale invariant constraint
CN111507901B (en) * 2020-04-15 2023-08-15 中国电子科技集团公司第五十四研究所 Aerial image splicing and positioning method based on aerial GPS and scale invariant constraint
CN111583110A (en) * 2020-04-24 2020-08-25 华南理工大学 Splicing method of aerial images
CN111510684A (en) * 2020-04-24 2020-08-07 安徽比特文化传媒有限公司 VR auxiliary aerial photography method
CN111583110B (en) * 2020-04-24 2023-05-23 华南理工大学 A Stitching Method of Aerial Images
CN111681190A (en) * 2020-06-18 2020-09-18 深圳天海宸光科技有限公司 High-precision coordinate mapping method for panoramic video
CN112163995A (en) * 2020-09-07 2021-01-01 中山大学 Splicing generation method and device for oversized aerial photographing strip images
CN112414375B (en) * 2020-10-08 2021-09-03 武汉大学 Unmanned aerial vehicle image posture recovery method for flood disaster emergency quick jigsaw making
CN112414375A (en) * 2020-10-08 2021-02-26 武汉大学 A UAV image attitude recovery method for flood disaster emergency quick puzzle production
CN112184703A (en) * 2020-10-27 2021-01-05 广东技术师范大学 Corn ear period unmanned aerial vehicle image alignment method and system based on space-time backtracking
CN112288634A (en) * 2020-10-29 2021-01-29 江苏理工学院 Splicing method and device for aerial images of multiple unmanned aerial vehicles
CN112215304A (en) * 2020-11-05 2021-01-12 珠海大横琴科技发展有限公司 Gray level image matching method and device for geographic image splicing
CN114445274B (en) * 2020-11-06 2025-03-25 中煤航测遥感集团有限公司 Image stitching method, device, electronic device and storage medium
CN114445274A (en) * 2020-11-06 2022-05-06 中煤航测遥感集团有限公司 Image splicing method and device, electronic equipment and storage medium
CN112799430A (en) * 2021-01-13 2021-05-14 东南大学 Intelligent acquisition method of road image based on programmable unmanned aerial vehicle
CN112837378B (en) * 2021-02-03 2024-04-30 江南大学 Aerial camera attitude external dynamic calibration and mapping method based on multi-unmanned aerial vehicle formation
CN112837378A (en) * 2021-02-03 2021-05-25 江南大学 An external dynamic calibration and mapping method of aerial camera attitude based on multi-UAV formation
CN113191946A (en) * 2021-03-02 2021-07-30 中国人民解放军空军航空大学 Aviation three-step area array image splicing method
CN113012047A (en) * 2021-03-26 2021-06-22 广州市赋安电子科技有限公司 Dynamic camera coordinate mapping establishing method and device and readable storage medium
CN113099266A (en) * 2021-04-02 2021-07-09 云从科技集团股份有限公司 Video fusion method, system, medium and device based on unmanned aerial vehicle POS data
CN113099266B (en) * 2021-04-02 2023-05-26 云从科技集团股份有限公司 Video fusion method, system, medium and device based on unmanned aerial vehicle POS data
CN113570720A (en) * 2021-08-04 2021-10-29 西安万飞控制科技有限公司 Gis technology-based real-time display method and system for unmanned aerial vehicle video petroleum pipeline
CN113570720B (en) * 2021-08-04 2024-02-27 西安万飞控制科技有限公司 Unmanned plane video oil pipeline real-time display method and system based on gis technology
CN113706723A (en) * 2021-08-23 2021-11-26 维沃移动通信有限公司 Image processing method and device
CN113706389B (en) * 2021-09-30 2023-03-28 中国电子科技集团公司第五十四研究所 Image splicing method based on POS correction
CN113706389A (en) * 2021-09-30 2021-11-26 中国电子科技集团公司第五十四研究所 Image splicing method based on POS correction
US20230145875A1 (en) * 2021-11-02 2023-05-11 Samsung Electronics Co., Ltd. Apparatus for accelerating simultaneous localization and mapping and electronic device including the same
CN114066733B (en) * 2021-11-22 2024-11-22 新讯数字科技(杭州)有限公司 A UAV image stitching method based on image convolution
CN114066733A (en) * 2021-11-22 2022-02-18 杭州东信北邮信息技术有限公司 A UAV image stitching method based on image convolution
CN114241372A (en) * 2021-12-09 2022-03-25 江苏和正特种装备有限公司 Target identification method applied to sector-scan splicing
CN114202583A (en) * 2021-12-10 2022-03-18 中国科学院空间应用工程与技术中心 A visual positioning method and system for unmanned aerial vehicles
CN114897966B (en) * 2022-04-13 2024-04-09 深圳市路远智能装备有限公司 Visual identification method for large element
CN114897966A (en) * 2022-04-13 2022-08-12 深圳市路远智能装备有限公司 Visual identification method for large element
CN114757846A (en) * 2022-04-25 2022-07-15 浙江大华技术股份有限公司 Image correction method and device, storage medium and electronic device
CN114936971A (en) * 2022-06-08 2022-08-23 浙江理工大学 A water-oriented UAV remote sensing multispectral image stitching method and system
CN114999335B (en) * 2022-06-10 2023-08-15 长春希达电子技术有限公司 LED spliced screen seam repairing method based on ultra-wideband and one-dimensional envelope peak value
CN114999335A (en) * 2022-06-10 2022-09-02 长春希达电子技术有限公司 LED spliced screen seam repairing method based on ultra-wide band and one-dimensional envelope peak
CN115423681A (en) * 2022-08-12 2022-12-02 西南交通大学 Unmanned aerial vehicle image splicing method based on IB-SURF and neighborhood matching method
CN115861050A (en) * 2022-08-29 2023-03-28 如你所视(北京)科技有限公司 Method, device, device and storage medium for generating panoramic images
CN115601417A (en) * 2022-09-30 2023-01-13 中星电子股份有限公司(Cn) Azimuth information generation method, device, electronic device and medium
CN115393196B (en) * 2022-10-25 2023-03-24 之江实验室 A method for seamless mosaic of infrared multi-sequence images by UAV area sweep
CN115393196A (en) * 2022-10-25 2022-11-25 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN117036666A (en) * 2023-06-14 2023-11-10 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117036666B (en) * 2023-06-14 2024-05-07 北京自动化控制设备研究所 UAV low-altitude positioning method based on inter-frame image stitching
CN117291980A (en) * 2023-10-09 2023-12-26 宁波博登智能科技有限公司 Single unmanned aerial vehicle image pixel positioning method based on deep learning
CN117291980B (en) * 2023-10-09 2024-03-15 宁波博登智能科技有限公司 Single unmanned aerial vehicle image pixel positioning method based on deep learning
CN119418030A (en) * 2025-01-09 2025-02-11 上海博珖机器人有限公司 Image processing method, device, storage medium, program product and electronic device

Similar Documents

Publication Publication Date Title
CN107808362A (en) A kind of image split-joint method combined based on unmanned plane POS information with image SURF features
CN114936971B (en) A method and system for stitching multispectral images of unmanned aerial vehicle remote sensing for water areas
CN110966991B (en) A single UAV image positioning method without control points
CN111583110B (en) A Stitching Method of Aerial Images
CN113222820B (en) Pose information-assisted aerial remote sensing image stitching method
US8723953B2 (en) Generation of aerial images
CN105352509B (en) Unmanned plane motion target tracking and localization method under geography information space-time restriction
CN104835115A (en) Imaging method for aerial camera, and system thereof
JP7220785B2 (en) Survey sampling point planning method, device, control terminal and storage medium
CN110084743B (en) Image mosaic and positioning method based on multi-strip initial track constraints
CN106127697A (en) Unmanned aerial vehicle onboard imaging EO-1 hyperion geometric correction method
Chen et al. Real-time geo-localization using satellite imagery and topography for unmanned aerial vehicles
CN110246082B (en) A remote sensing panorama image stitching method
CN109214254B (en) Method and device for determining displacement of robot
CN110097498B (en) Multi-flight-zone image splicing and positioning method based on unmanned aerial vehicle flight path constraint
CN107274380A (en) A kind of quick joining method of unmanned plane multispectral image
US12254600B2 (en) Joint imaging system based on unmanned aerial vehicle platform and image enhancement fusion method
Liu et al. A new approach to fast mosaic UAV images
Moussa et al. A fast approach for stitching of aerial images
CN116563699A (en) Forest fire positioning method combining sky map and mobile phone image
CN112750075A (en) Low-altitude remote sensing image splicing method and device
CN115456870A (en) Multi-image splicing method based on external parameter estimation
CN110940318A (en) Aerial remote sensing real-time imaging method, electronic equipment and storage medium
CN119090716A (en) Marine remote sensing surveying and mapping method and surveying and mapping system
CN117036666B (en) UAV low-altitude positioning method based on inter-frame image stitching

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180316

RJ01 Rejection of invention patent application after publication