
CN110992263A - Image stitching method and system - Google Patents

Image stitching method and system

Info

Publication number
CN110992263A
CN110992263A
Authority
CN
China
Prior art keywords
feature
image
matching
grid
neighborhood
Prior art date
Legal status
Granted
Application number
CN201911183719.2A
Other languages
Chinese (zh)
Other versions
CN110992263B (en)
Inventor
李振宇
王万国
王振利
许玮
李建祥
刘广秀
刘丕玉
杨月琛
李猷民
杨立超
鉴庆之
杨波
孙晓斌
黄振宁
Current Assignee
State Grid Intelligent Technology Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
State Grid Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd, State Grid Intelligent Technology Co Ltd filed Critical Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Priority to CN201911183719.2A priority Critical patent/CN110992263B/en
Publication of CN110992263A publication Critical patent/CN110992263A/en
Application granted granted Critical
Publication of CN110992263B publication Critical patent/CN110992263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract



The invention discloses an image stitching method and system, comprising: extracting feature points of the images to be stitched, describing the features, and obtaining coarsely matched feature points; counting, in the pixel regions adjacent to the coarse matching point set, the number of feature points that conform to the matching relationship, and eliminating wrong matches by computing a feature neighborhood score; computing the homography transformation matrix of the images to be stitched from the accurately matched feature points; and achieving seamless image stitching through a weighted smoothing algorithm. Beneficial effects of the invention: feature fine-matching speed and registration accuracy are greatly improved, the synthesized image shows no obvious geometric misalignment or blurring, and the edges of the overlapping region transition smoothly.


Description

Image stitching method and system
Technical Field
The invention relates to the technical field of digital image processing, in particular to an image stitching method and system based on grid motion statistics and weighted projection transformation.
Background
Image stitching has been widely studied in computer vision and graphics; it combines two or more images with an overlapping region into a complete image with a wide viewing angle and little distortion. Common stitching algorithms include the image gray-scale method, the phase correlation method, and the image feature method. The gray-scale method has low complexity but poor robustness; the phase correlation method is fast but sensitive to image scale. The image feature method is more robust to changes in illumination, scale, and the like, and is currently the mainstream image stitching technique.
Feature-based stitching generally comprises four steps: feature point extraction, feature point matching, transformation model estimation, and fusion. Feature point matching consists of coarse matching and fine matching. Traditional feature-based stitching eliminates wrong coarse matches with the RANSAC (RANdom SAmple Consensus) algorithm, which samples matching point pairs at random without considering their quality; its running time is proportional to the number of iterations, so overall fine matching is slow. The high complexity of fine matching and the low registration accuracy of traditional stitching lead to slow feature matching, distorted panoramas, and similar problems.
For example, when a long-range unmanned aerial vehicle patrols an overhead transmission line, the patrol covers a large range, produces many images, and spans a wide area, so the acquired image data volume is large, stitching is difficult, and the running speed of the stitching algorithm matters. Among feature-based stitching algorithms, the SIFT algorithm achieves high stitching quality and good robustness, but it is very complex and time-consuming; an algorithm with both high stitching speed and high stitching accuracy is therefore needed to stitch UAV images of overhead transmission lines.
Disclosure of Invention
The invention aims to solve the above problems and provides an image stitching method and system capable of fast, high-quality image stitching.
In order to achieve the purpose, the invention adopts the following specific scheme:
An image stitching method disclosed in one or more embodiments includes:
extracting the feature points of the images to be stitched, and describing and matching them to obtain coarsely matched feature points;
counting the number of feature points that conform to the matching relationship in the pixel regions adjacent to the coarse matching point set, and eliminating wrong matching points by computing feature neighborhood scores;
computing a homography transformation matrix of the images to be stitched from the accurately matched feature points and performing image registration; and achieving seamless image stitching through a weighted smoothing algorithm.
Further, the image feature extraction and description are performed by using an ORB algorithm.
Further, the feature points are coarsely matched with a brute-force matching algorithm, and wrong coarse matches are filtered with a cross-matching method.
Further, the image is divided into a plurality of non-overlapping grids; the feature score of each grid is counted together with the feature neighborhood scores of a set number of adjacent grids; the grid and its adjacent grids are rotated to obtain the maximum grid feature score; when the maximum grid feature score is greater than the grid feature score threshold, the match is judged correct; otherwise, it is a wrong match.
Further, the feature neighborhood scores of the eight grids adjacent to the current grid are counted, giving the nine-square-grid (3×3 neighborhood) feature score.
Further, the mean number of coarse-matching features over the grids in the current nine-square grid is counted, and the grid feature score threshold is determined from this mean.
Further, a global homography matrix between the image sequences is constructed from the matching point pairs; a locally dependent homography matrix is computed by adding weight coefficients; the weight coefficients are determined by the Gaussian distances from the current feature point to all feature points on the image.
Further, according to the obtained locally dependent homography matrix, the corresponding images are transformed to determine the overlapping region between them, and the images to be fused are mapped into a new blank image to form the stitched result.
In one or more embodiments, an image stitching system includes a server including a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the image stitching method when executing the computer program.
A computer-readable storage medium is disclosed in one or more embodiments, on which a computer program is stored, which when executed by a processor performs the image stitching method described above.
The invention has the beneficial effects that:
(1) The invention provides a coarse-matching feature point screening method based on grid neighborhood feature statistics: the number of feature points satisfying the matching relationship is counted in the pixel regions adjacent to the coarse matching point set, mismatched feature points are removed, and fine matching of the feature points is achieved.
(2) The invention provides an image registration method based on a weighted projection transformation model: a locally dependent homography transformation matrix is computed by adding Gaussian weight coefficients, resolving the ghosting and parallax errors caused by registration with a global homography matrix.
(3) The invention greatly improves feature fine-matching speed and registration accuracy; the synthesized image shows no obvious geometric misalignment or blurring, and the edges of the overlapping region transition smoothly.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of distribution characteristics of correct matching and incorrect matching;
FIG. 3 is a schematic diagram of the grid division and the nine-square-grid neighborhood;
Fig. 4(a)-(c) are schematic diagrams of the nine-square grid, the nine-square grid rotated once, and the nine-square grid rotated 4 times, respectively.
Detailed description:
the invention is described in detail below with reference to the accompanying drawings:
it is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; and the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
Example one
In one or more embodiments, an image stitching method is disclosed. As shown in fig. 1, the ORB feature extraction algorithm is applied to image stitching; correct and wrong matches are distinguished by counting the number of features satisfying the matching relationship in the pixel regions adjacent to the coarsely matched feature points, which speeds up fine matching. A weighted projection transformation is then adopted, introducing a pixel-distance relation to optimize the weight coefficients of the transformation model and improve registration accuracy in the overlapping region. The specific implementation process is as follows:
1. feature extraction
The method is realized by the following steps:
(1) feature point extraction
ORB (Oriented FAST and Rotated BRIEF) is an algorithm for fast feature point extraction and description. It consists of two parts: feature point extraction and feature point description. Feature extraction is developed from the FAST (Features from Accelerated Segment Test) algorithm, and feature point description is improved from the BRIEF (Binary Robust Independent Elementary Features) descriptor.
A point P is selected from the image, and FAST decides whether it is a feature point: a circle of radius r is drawn with P as the center, and if the gray values of n consecutive pixels on the circle are all larger or all smaller than the gray value of P, then P is considered a feature point. To avoid clusters of feature points at adjacent positions, the invention sorts the detected FAST feature points by their Harris corner response values and keeps the points with the top 80% of responses (if 100 FAST feature points are detected in total, the 80 points with the highest Harris responses are retained) as the extracted feature points.
To achieve multi-scale invariance of the feature points, the invention builds an image pyramid, setting a scaling factor scaleFactor (1.2 in the invention) and a number of pyramid levels nlevels (8 in the invention). The original image is then reduced into nlevels images by the scale factor; the scaled images are I_k = I / scaleFactor^k (k = 1, 2, ..., nlevels). The feature points extracted from the differently scaled images are pooled together as the feature points of the image.
Since FAST features have no orientation, the ORB algorithm determines the direction of a FAST feature point with the moment method: the intensity centroid is computed within a radius r of the feature point, and the vector from the feature point's coordinates to the centroid gives the feature point's direction. Moments are defined as follows:
m_{pq} = Σ_{x,y} x^p y^q I(x, y)
wherein, I (x, y) is the pixel value of the image pixel (x, y). The centroid of this moment is:
C = ( m_{10}/m_{00}, m_{01}/m_{00} )
a direction vector is formed between the image feature point and the centroid, so that the direction of the feature point can be characterized by the angle of the direction vector, and the calculation formula is as follows:
θ = atan2(m_{01}, m_{10})
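As a concrete illustration of the moment-based orientation above, the sketch below computes θ = atan2(m_01, m_10) for a square patch centered on the feature point (pure NumPy; the patch size and contents are hypothetical):

```python
import numpy as np

def patch_orientation(patch):
    """Orientation of a feature patch by the intensity-centroid method:
    m10 and m01 are first-order moments taken around the patch center,
    and theta = atan2(m01, m10)."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]].astype(float)
    # shift coordinates so the origin sits on the keypoint (patch center)
    xs -= (patch.shape[1] - 1) / 2.0
    ys -= (patch.shape[0] - 1) / 2.0
    m10 = float(np.sum(xs * patch))
    m01 = float(np.sum(ys * patch))
    return np.arctan2(m01, m10)
```

In an OpenCV pipeline the pyramid parameters mentioned above correspond to `cv2.ORB_create(scaleFactor=1.2, nlevels=8)`, which handles this orientation internally.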
(2) Feature point description
The ORB descriptor improves on BRIEF by adding a rotation factor. The BRIEF algorithm computes a binary-string feature descriptor: it selects n pairs of pixels p_i, q_i (i = 1, 2, ..., n) in the neighborhood of a feature point and compares the gray values of each pair; if I(p_i) > I(q_i), the bit at the corresponding position of the generated binary string is 1, otherwise 0. Comparing all point pairs generates a binary string of length n. To improve the noise resistance of the feature descriptor, the method first applies Gaussian smoothing to the image and replaces the value at each point of a pair with the mean gray value of its m×m neighborhood before comparison, giving the feature values stronger noise resistance.
Because the BRIEF descriptor lacks rotation invariance, it tolerates in-plane image rotation poorly, so rotation invariance must be added. For a feature point, the BRIEF descriptor is formed by comparing n point pairs; the 2n points are assembled into a matrix S of the form:

S = ( x_1 x_2 ... x_{2n} ; y_1 y_2 ... y_{2n} )

A rotation matrix is then constructed from the direction θ of the feature point:

R_θ = ( cosθ -sinθ ; sinθ cosθ )

The matrix S is corrected by the rotation matrix so that it acquires rotation invariance; the correction formula is:

S_θ = R_θ S
Extracting the descriptor along the feature point's direction makes it rotation invariant and overcomes the BRIEF operator's sensitivity to orientation.
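The steering step S_θ = R_θ S above can be sketched directly; the 2×2n coordinate matrix in the usage below is a stand-in for a real BRIEF sampling pattern:

```python
import numpy as np

def steer_pattern(S, theta):
    """Rotate the BRIEF sampling coordinates S (a 2 x 2n matrix, one
    column per sample point) by the keypoint orientation theta."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ S
```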
2. Coarse matching of features
The invention coarsely matches the feature points with a brute-force (BF) matching algorithm, obtaining N groups of coarsely matched feature point pairs, denoted {F_a, F_b}, where F_a = {f_a1, f_a2, ..., f_aN} and F_b = {f_b1, f_b2, ..., f_bN}. The brute-force algorithm compares feature point descriptors one by one in the vector space and selects the pair with the smaller distance (the invention uses the Hamming distance) as a matching pair. After brute-force matching, the invention coarsely filters wrong matches with a cross-matching method. The idea of cross filtering is to match in reverse using the already-matched points; a match is considered correct only if the reverse match returns the original point. For example, feature point A is first matched by brute force to feature point B; B is then matched in turn, and if it matches back to A, the pair is considered a correct match, otherwise a wrong match.
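A minimal sketch of Hamming-distance brute-force matching with the cross check described above (toy 1-byte descriptors; a real system would use OpenCV's `cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)`):

```python
import numpy as np

def hamming(d1, d2):
    # descriptors are bit strings packed into uint8 arrays
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def cross_match(desc_a, desc_b):
    """Brute-force matching in both directions; keep (i, j) only when
    a->b and b->a agree (the cross-matching filter)."""
    a2b = [min(range(len(desc_b)), key=lambda j: hamming(da, desc_b[j]))
           for da in desc_a]
    b2a = [min(range(len(desc_a)), key=lambda i: hamming(desc_a[i], db))
           for db in desc_b]
    return [(i, j) for i, j in enumerate(a2b) if b2a[j] == i]
```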
3. Fine matching of features
(1) Feature neighborhood score definition
For image matching, a correct match has a certain number of matching feature points in the neighboring regions of the two images, whereas for a wrong match the neighboring regions on the two images differ, so the number of matched feature points in them is usually zero, as shown in fig. 2. The number of feature points with a matching relationship can therefore be counted in the neighborhood of the coarse-matching feature point set to eliminate wrong matches. Accordingly, the invention distinguishes correct and wrong matches by counting the number of feature points that conform to the matching relationship in the pixel regions adjacent to the coarse matching point set.
Denote the neighborhoods of the matched feature point sets in {I_a, I_b} as {N_a, N_b}, where N_a = {N_a1, N_a2, ..., N_aN} and N_b = {N_b1, N_b2, ..., N_bN}. For the i-th matched point pair {f_ai, f_bi}, count the feature point set {f_a1, f_a2, ..., f_aMi} inside the neighborhood N_ai of f_ai and the total number of feature points M_i; then count the feature points to which these are coarsely matched, and the number S_i of those that lie inside the neighborhood N_bi of f_bi. From M_i and S_i, a score threshold S_T is set to decide whether the i-th matching pair {f_ai, f_bi} is a correct match. Finally, all matched feature point sets are traversed to eliminate wrong matches, yielding the finely matched feature point set {F_a', F_b'}. For convenience, S_i is called the feature neighborhood score and is computed as:

S_i = Σ_{k=1}^{M_i} s_{ik}

where s_{ik} indicates whether the feature point coarsely matched with the k-th feature point of N_ai lies inside N_bi: it is 1 if so, and 0 otherwise.
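The feature neighborhood score S_i can be sketched as a direct count (circular neighborhoods of a hypothetical radius; coordinates are illustrative):

```python
import numpy as np

def neighborhood_score(pts_a, pts_b, matches, i, radius):
    """S_i for match i: the number of other coarse matches whose
    endpoint in image a lies in N_ai and whose matched endpoint in
    image b lies in N_bi."""
    pa = pts_a[matches[i][0]]
    pb = pts_b[matches[i][1]]
    s = 0
    for k, (ia, ib) in enumerate(matches):
        if k == i:
            continue
        in_a = np.linalg.norm(pts_a[ia] - pa) <= radius
        in_b = np.linalg.norm(pts_b[ib] - pb) <= radius
        s += int(in_a and in_b)  # this is s_ik
    return s
```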
(2) Grid feature statistics
To speed up the statistics, the invention divides each image into G = P × Q non-overlapping grids, i.e., {I_a, I_b} are divided into grid-block sets {A, B}, where A = {a_1, a_2, ..., a_i, ..., a_G} and B = {b_1, b_2, ..., b_j, ..., b_G}; a_i denotes the i-th grid of I_a and b_j the j-th grid of I_b, as shown in fig. 3. For robustness, the feature neighborhood score of each grid is counted together with those of its 8 adjacent grids; the sum is called the nine-square-grid feature neighborhood score S_i:

S_i = Σ_{j=1}^{9} S_{i,j}

where S_{i,j} is the feature score of the j-th grid in the nine-square grid containing the i-th grid.
To avoid the effect of rotation between the images on the statistics, the nine-square grid on I_b is rotated clockwise, as shown in fig. 4(a)-(c). The center grid G_5 keeps its position while the adjacent grids move clockwise: for example, the upper-left grid G_1 becomes grid G_4 after the 1st rotation and grid G_9 after the 4th; by the 9th rotation the distribution of grid features coincides with that of fig. 4(a) again. The maximum nine-square-grid feature neighborhood score over the 8 rotations is therefore counted:

S_i^max = max_{k=1,...,8} Σ_{j=1}^{9} S_{i,j}^k

where S_{i,j}^k is the feature score of the j-th grid after the nine-square grid containing the i-th grid has been rotated k times. Then the mean number of coarse-matching features over the grids of the current nine-square grid is counted:

M̄_i = (1/9) Σ_{j=1}^{9} M_{i,j}

where M_{i,j} is the number of coarse-matching feature points in the j-th grid of the nine-square grid containing the i-th grid. When the maximum grid feature score S_i^max is greater than the grid feature score threshold S_T, {f_ai, f_bi} is judged a correct match; otherwise it is a wrong match. The threshold is determined from the mean count:

S_T = α · √(M̄_i)

where α is the weight on the mean number of feature points.
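A compact sketch of the grid statistics above (without the rotation step) follows; the cell pairing and the threshold α·√(mean count) mirror the grid-motion-statistics idea, and the grid size and α here are assumed values. OpenCV's contrib module offers a full implementation as `cv2.xfeatures2d.matchGMS`.

```python
import numpy as np
from collections import defaultdict

def grid_filter(pts_a, pts_b, matches, shape_a, shape_b, grid=10, alpha=6.0):
    """Keep a match when the support count over its 3x3 cell
    neighbourhood exceeds alpha * sqrt(mean count per cell)."""
    def cell(p, shape):
        gx = min(int(p[0] / shape[1] * grid), grid - 1)
        gy = min(int(p[1] / shape[0] * grid), grid - 1)
        return gx, gy

    pair_count = defaultdict(int)  # matches per (cell_a, cell_b) pair
    cells = []
    for ia, ib in matches:
        ca, cb = cell(pts_a[ia], shape_a), cell(pts_b[ib], shape_b)
        cells.append((ca, cb))
        pair_count[(ca, cb)] += 1

    kept = []
    for m, ((ax, ay), (bx, by)) in zip(matches, cells):
        # sum support over the 3x3 neighbourhood of the cell pair
        score = sum(pair_count.get(((ax + dx, ay + dy), (bx + dx, by + dy)), 0)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1))
        if score > alpha * np.sqrt(score / 9.0):  # threshold from mean count
            kept.append(m)
    return kept
```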
4. Image registration and stitching
Image registration determines the overlapping region and position between the images to be stitched. The invention adopts feature-point-based registration: a transformation matrix between the image sequences is constructed from the matching point pairs, completing the panorama stitching. With the inter-image transformation matrix H, the corresponding images can be transformed to determine their overlapping region, and the images to be fused are mapped into a new blank image to form the stitched result.
Let p_a = [x_a, y_a]^T and p_b = [x_b, y_b]^T denote a pair of matching points in the images {I_a, I_b}. From the N' groups of finely matched point pairs obtained in the preceding steps, the global homography matrix H can be solved from:

p̂_a ≃ H p̂_b

where p̂_a and p̂_b are the homogeneous coordinates of p_a and p_b respectively, and H ∈ R^{3×3}.
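The global H can be estimated with the standard direct linear transform (DLT); this minimal sketch stacks two equations per correspondence and takes the SVD null vector (in practice one would use `cv2.findHomography`, which also performs outlier rejection):

```python
import numpy as np

def dlt_homography(pts_a, pts_b):
    """Solve p_a ~ H p_b by the DLT: argmin ||A h|| s.t. ||h|| = 1,
    taken as the right singular vector of the smallest singular value."""
    rows = []
    for (xa, ya), (xb, yb) in zip(pts_a, pts_b):
        rows.append([0, 0, 0, -xb, -yb, -1, ya * xb, ya * yb, ya])
        rows.append([xb, yb, 1, 0, 0, 0, -xa * xb, -xa * yb, -xa])
    A = np.asarray(rows, dtype=float)
    h = np.linalg.svd(A)[2][-1]        # null vector of A
    return (h / h[-1]).reshape(3, 3)   # H maps points of I_b into I_a
```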
When I_a and I_b were not obtained by rotating the camera about its optical center, and the scene background cannot be approximated as a plane, using the global homography H as the transformation model produces ghosting or parallax errors after registration. To solve this, the invention computes a locally dependent homography matrix H* by adding weight coefficients and then transforms each point p_b* in I_b:

p̂_a* = H* p̂_b*

where H* is obtained by reshaping the solution h* of the weighted problem:

h* = argmin_h ||W A h||², subject to ||h|| = 1
in the formula, a weight matrix
Figure BDA0002291917110000078
A∈2N×9Is a matrix of direct linear transformation equations, the vector H is a variant of the matrix H, H ═ H00h01h02h10h11h12h20h21h22]Coefficient of weight
Figure BDA0002291917110000079
Is based on the current point pb*To IbAll the characteristic points of
Figure BDA00022919171100000710
Is determined by the Gaussian distance of (1), set to pb*The closer the neighborhood is, the larger the pixel weight is, the relatively far pixel weight takes a corresponding smaller value:
Figure BDA00022919171100000711
where σ is a scale parameter. To prevent the weighting coefficients from being too sparse, a default compensation value γ ∈ [0,1] is introduced.
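The weighting rule w_i = max(exp(-||p* - p_i||²/σ²), γ) is a one-liner; the σ and γ values below are illustrative assumptions:

```python
import numpy as np

def mdlt_weights(p_star, feature_pts, sigma=8.0, gamma=0.01):
    """Per-correspondence weights of the locally dependent model:
    Gaussian in the distance from the current point p_star, floored
    at gamma so far-away correspondences still contribute."""
    d2 = np.sum((np.asarray(feature_pts, float) - np.asarray(p_star, float)) ** 2,
                axis=1)
    return np.maximum(np.exp(-d2 / sigma ** 2), gamma)
```

In the full method these weights fill W = diag(w_1, w_1, ..., w_{N'}, w_{N'}) and reweight the DLT rows anew for each target point, so every p_b* gets its own H*.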
In the fusion process, visible seams appear in the overlapping area and must be processed; a weighted smoothing algorithm is adopted. Its main idea: the gray value Pixel of a pixel in the overlapping region is the weighted average of the gray values Pixel_L and Pixel_R of the corresponding points in the two images, i.e., Pixel = k × Pixel_L + (1 - k) × Pixel_R, where k is an adjustable factor that decreases gradually from 1 to 0 across the overlap along the stitching direction, achieving smooth stitching of the overlapping region.
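The weighted smoothing rule Pixel = k·Pixel_L + (1-k)·Pixel_R can be sketched for a horizontal seam; single-channel float images and a column-wise linear ramp are assumed:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Fuse two horizontally overlapping images: k ramps linearly from
    1 to 0 across the `overlap` columns, so the seam fades smoothly."""
    h, wl = left.shape
    out = np.zeros((h, wl + right.shape[1] - overlap), dtype=float)
    out[:, :wl - overlap] = left[:, :wl - overlap]  # left-only region
    out[:, wl:] = right[:, overlap:]                # right-only region
    k = np.linspace(1.0, 0.0, overlap)              # blending factor per column
    out[:, wl - overlap:wl] = k * left[:, wl - overlap:] + (1 - k) * right[:, :overlap]
    return out
```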
Example two
This embodiment discloses an image stitching system comprising a server; the server comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor, and the processor implements the image stitching method of the first embodiment when executing the program.
Example three
The present embodiment discloses a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, performs the image stitching method described in the first embodiment.
Although the embodiments of the invention have been described with reference to the accompanying drawings, they do not limit the scope of the invention; those skilled in the art should understand that various modifications and variations made without inventive effort on the basis of the technical solution of the invention remain within its scope.

Claims (10)

1.一种图像拼接方法,其特征在于,包括:1. an image stitching method, is characterized in that, comprises: 提取待拼接图像的特征点,对特征点进行描述与匹配,得到粗匹配的特征点;Extract the feature points of the image to be spliced, describe and match the feature points, and obtain the coarse matching feature points; 在粗匹配点集相邻的像素区域中统计符合匹配关系的特征点数量,通过计算特征邻域分数,剔除错误的匹配点;Count the number of feature points that match the matching relationship in the adjacent pixel area of the coarse matching point set, and eliminate the wrong matching points by calculating the feature neighborhood score; 根据精确匹配的特征点计算待拼接图像的单应性变换矩阵,进行图像配准;通过加权平滑算法,实现图像无缝拼接。According to the accurately matched feature points, the homography transformation matrix of the image to be spliced is calculated, and the image is registered; the weighted smoothing algorithm is used to realize the seamless image splicing. 2.如权利要求1所述的一种图像拼接方法,其特征在于,利用ORB算法进行图像特征提取和描述。2. An image stitching method as claimed in claim 1, characterized in that, ORB algorithm is used to extract and describe image features. 3.如权利要求1所述的一种图像拼接方法,其特征在于,利用暴力匹配算法进行特征点粗匹配,使用交叉匹配的方法过滤误匹配特征点。3. An image stitching method according to claim 1, characterized in that, a brute force matching algorithm is used to perform rough matching of feature points, and a cross-matching method is used to filter mismatched feature points. 4.如权利要求1所述的一种图像拼接方法,其特征在于,将图像划分为多个不重叠的网格,统计每个网格中特征点的数量,同时统计与其相邻的设定数量网格的特征邻域分数;对所述网格及其相邻网格进行旋转,得到最大的网格特征邻域分数;当最大的网格特征邻域分数大于网格特征分数阈值时,判定为正确匹配;否则,为错误匹配。4. 
a kind of image stitching method as claimed in claim 1 is characterized in that, divides the image into a plurality of non-overlapping grids, counts the number of feature points in each grid, and counts the setting adjacent to it simultaneously The feature neighborhood score of the number grid; the grid and its adjacent grids are rotated to obtain the maximum grid feature neighborhood score; when the maximum grid feature neighborhood score is greater than the grid feature score threshold, It is judged as a correct match; otherwise, it is a false match. 将{Ia,Ib}中匹配特征点集的邻域表示为{Na,Nb},其中Na={Na1,Na2,...,NaN},Nb={Nb1,Nb2,...,NbN}。对于第i组匹配点对{fai,fbi},统计fai邻域Nai中的特征点集
Figure FDA0002291917100000011
和特征点总数量Mi,并统计这些特征点粗匹配对应的特征点集
Figure FDA0002291917100000012
和这些对应特征点集位于fbi邻域Nbi中的数量Si,定义Si为特征邻域分数。
Denote the neighborhood of the matching feature point set in {I a ,I b } as {N a ,N b }, where Na ={N a1 , N a2 ,...,N aN }, N b ={N b1 ,N b2 ,...,N bN }. For the i-th matched point pair {f ai ,f bi }, count the feature point set in the f ai neighborhood Nai
Figure FDA0002291917100000011
and the total number of feature points M i , and count the feature point sets corresponding to the rough matching of these feature points
Figure FDA0002291917100000012
and the number S i of these corresponding feature point sets located in the neighborhood N bi of f bi , define Si as the feature neighborhood score.
5.如权利要求4所述的一种图像拼接方法,其特征在于,在统计特征点所在网格特征分数的同时,统计与其相邻八个网格的网格特征邻域分数,得到九宫格特征邻域分数。5. a kind of image stitching method as claimed in claim 4, it is characterized in that, while the grid feature score where the feature point is located, count the grid feature neighborhood scores of eight adjacent grids with it, and obtain the nine-square grid feature Neighborhood Score.
Figure FDA0002291917100000013
Figure FDA0002291917100000013
式中,Si,j是图像Ia中第i个网格所在的九宫格对应Ib第j个网格的特征邻域分数。In the formula, S i,j is the feature neighborhood score of the i-th grid in the image I a corresponding to the j-th grid of I b .
6. The image stitching method of claim 5, wherein the mean Mi,j of the number of coarse-matching features within the current nine-square grid is computed, and the grid feature score threshold ST is defined as:

(formula image FDA0002291917100000014, not reproduced)

The nine-square grid is rotated to obtain the maximum grid feature neighborhood score (formula image FDA0002291917100000015, not reproduced); when this maximum score (formula image FDA0002291917100000016, not reproduced) is greater than the threshold ST, {fai, fbi} is judged a correct match; otherwise, it is a false match.
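As an editorial illustration of the nine-square-grid decision in claims 5 and 6: the sketch below sums per-cell match counts over the 3x3 block, tries cyclic rotations of the eight ring cells, and compares the maximum score to a threshold. The grid layout, the rotation modeled as a cyclic shift, and the threshold form ST = alpha * sqrt(Mi,j) (borrowed from grid-based motion statistics) are assumptions, since the patent's formula images are not reproduced.

```python
import math

# Offsets of the nine-square grid: the eight ring cells in clockwise order.
RING = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def ninegrid_score(cell_matches, gi, gj, rot):
    """Sum coarse-match counts over the nine-square grids around cell gi of Ia
    and cell gj of Ib, with the ring cells of Ib cyclically shifted by `rot`
    positions (an assumed discretisation of the rotation in claim 6).

    cell_matches : dict mapping (cell_of_Ia, cell_of_Ib) -> match count,
                   where each cell is a (row, col) tuple.
    """
    s = cell_matches.get((gi, gj), 0)  # center cell pair
    for k in range(8):
        da, db = RING[k], RING[(k + rot) % 8]
        ca = (gi[0] + da[0], gi[1] + da[1])
        cb = (gj[0] + db[0], gj[1] + db[1])
        s += cell_matches.get((ca, cb), 0)
    return s

def is_correct_match(cell_matches, gi, gj, mean_count, alpha=6.0):
    """Claim 6 sketch: rotate the nine-square grid, take the maximum score,
    and accept when it exceeds S_T = alpha * sqrt(mean_count)."""
    s_max = max(ninegrid_score(cell_matches, gi, gj, r) for r in range(8))
    return s_max > alpha * math.sqrt(mean_count)
```

Trying all eight rotations makes the score invariant to in-plane rotation between the two images, which plain (unrotated) grid statistics would penalize.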
7. The image stitching method of claim 1, wherein a global homography matrix between the image sequences is constructed from the matched point pairs, and a locally dependent homography matrix is computed by introducing weight coefficients, where each weight coefficient is determined by the Gaussian distance from the current feature point to all feature points in the image.

8. The image stitching method of claim 7, wherein, according to the obtained locally dependent homography matrix, the corresponding images are transformed to determine the overlapping region between the images, and the images to be fused are mapped into a new blank image to form the stitched panorama.

9. An image stitching system, comprising a server that includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image stitching method of any one of claims 1-8.

10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, performs the image stitching method of any one of claims 1-8.
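Claims 7 and 8 describe a locally weighted homography in the style of moving DLT. The sketch below shows one plausible form of the Gaussian weighting; the DLT row assembly, the scale sigma, and the lower clamp gamma are assumptions not fixed by the claims.

```python
import numpy as np

def local_homography(pts_a, pts_b, x_star, sigma=8.0, gamma=0.01):
    """Estimate a locally dependent homography at location x_star.

    Each correspondence pts_a[k] -> pts_b[k] contributes two DLT rows,
    weighted by a Gaussian of the distance from x_star to pts_a[k]
    (claim 7: weights from the Gaussian distance to all feature points).
    gamma is an assumed lower clamp that keeps far-away points from
    being ignored entirely, as in moving DLT.
    """
    w = np.maximum(np.exp(-np.sum((pts_a - x_star) ** 2, axis=1) / sigma**2), gamma)
    rows = []
    for k, ((x, y), (u, v)) in enumerate(zip(pts_a, pts_b)):
        rows.append(w[k] * np.array([-x, -y, -1, 0, 0, 0, u * x, u * y, u]))
        rows.append(w[k] * np.array([0, 0, 0, -x, -y, -1, v * x, v * y, v]))
    A = np.vstack(rows)
    # The homography is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Evaluating this at each pixel (or mesh vertex) of the overlap region yields the spatially varying warp of claim 8; points far from all features fall back toward the global homography because their weights are clamped to gamma.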
CN201911183719.2A 2019-11-27 2019-11-27 Image stitching method and system Active CN110992263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911183719.2A CN110992263B (en) 2019-11-27 2019-11-27 Image stitching method and system


Publications (2)

Publication Number Publication Date
CN110992263A true CN110992263A (en) 2020-04-10
CN110992263B CN110992263B (en) 2023-07-11

Family

ID=70087473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911183719.2A Active CN110992263B (en) 2019-11-27 2019-11-27 Image stitching method and system

Country Status (1)

Country Link
CN (1) CN110992263B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002335531A (en) * 2001-05-10 2002-11-22 Sony Corp Moving picture encoding device, method therefor program thereof, and storage medium thereof
CN106919944A (en) * 2017-01-20 2017-07-04 南京航空航天大学 A kind of wide-angle image method for quickly identifying based on ORB algorithms
CN107862319A (en) * 2017-11-19 2018-03-30 桂林理工大学 A kind of heterologous high score optical image matching error elimination method based on neighborhood ballot
CN108805812A (en) * 2018-06-04 2018-11-13 东北林业大学 Multiple dimensioned constant ORB algorithms for image mosaic


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李为; 李为相; 张; 揭伟: "Fast false-match rejection algorithm based on a motion smoothness constraint term" *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111529063A (en) * 2020-05-26 2020-08-14 广州狄卡视觉科技有限公司 Operation navigation system and method based on three-dimensional reconstruction multi-mode fusion
CN111784576A (en) * 2020-06-11 2020-10-16 长安大学 An Image Mosaic Method Based on Improved ORB Feature Algorithm
CN111784576B (en) * 2020-06-11 2024-05-28 上海研视信息科技有限公司 Image stitching method based on improved ORB feature algorithm
CN112308779A (en) * 2020-10-29 2021-02-02 上海电机学院 An image stitching method for power transmission lines
CN113298720A (en) * 2021-04-21 2021-08-24 重庆邮电大学 Self-adaptive overlapped image rotation method
CN113205457A (en) * 2021-05-11 2021-08-03 华中科技大学 Microscopic image splicing method and system
CN114119437B (en) * 2021-11-10 2024-05-14 哈尔滨工程大学 GMS-based image stitching method for improving distortion of moving object
CN114119437A (en) * 2021-11-10 2022-03-01 哈尔滨工程大学 GMS-based image stitching method for improving moving object distortion
CN115100444A (en) * 2022-05-20 2022-09-23 莆田学院 An image mismatch filtering method and an image matching device therefor
CN117036666B (en) * 2023-06-14 2024-05-07 北京自动化控制设备研究所 UAV low-altitude positioning method based on inter-frame image stitching
CN117036666A (en) * 2023-06-14 2023-11-10 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117333368A (en) * 2023-10-10 2024-01-02 南京矩视科技有限公司 Image stitching method, device and storage medium based on local edge analysis
CN117333368B (en) * 2023-10-10 2024-05-21 南京矩视科技有限公司 Image stitching method, device and storage medium based on local edge analysis

Also Published As

Publication number Publication date
CN110992263B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
CN110992263A (en) Image stitching method and system
CN111080529A (en) A Robust UAV Aerial Image Mosaic Method
CN111553939B (en) An Image Registration Algorithm for Multi-camera Cameras
Xue et al. Learning to calibrate straight lines for fisheye image rectification
CN110211043B (en) A Registration Method Based on Grid Optimization for Panoramic Image Stitching
CN110111248B (en) Image splicing method based on feature points, virtual reality system and camera
CN109829853B (en) Unmanned aerial vehicle aerial image splicing method
CN107016646A (en) One kind approaches projective transformation image split-joint method based on improved
CN109389555B (en) Panoramic image splicing method and device
CN110310310B (en) Improved method for aerial image registration
CN102521816A (en) Real-time wide-scene monitoring synthesis method for cloud data center room
CN110175011B (en) A seamless stitching method for panoramic images
CN108682026A (en) A kind of binocular vision solid matching method based on the fusion of more Matching units
CN106886748B (en) TLD-based variable-scale target tracking method applicable to unmanned aerial vehicle
CN109559273B (en) Quick splicing method for vehicle bottom images
CN106910208A (en) A kind of scene image joining method that there is moving target
CN109003307B (en) Size design method of fishing nets based on underwater binocular vision measurement
CN107220955A (en) A kind of brightness of image equalization methods based on overlapping region characteristic point pair
CN115205118A (en) An underwater image stitching method, device, computer equipment and storage medium
CN108416801B (en) A Har-SURF-RAN Feature Point Matching Method for Stereo Vision 3D Reconstruction
CN110276717B (en) Image stitching method and terminal
CN103428408B (en) A kind of image digital image stabilization method being applicable to interframe
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
CN109376641A (en) A moving vehicle detection method based on UAV aerial video
CN110084743A (en) Image mosaic and localization method based on more air strips starting track constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201029

Address after: Room 101, Electric Power Intelligent Robot Production Project, south of Feiyue Avenue and east of No. 26 Road (ICT Industrial Park), Jinan City, Shandong Province, 250101

Applicant after: National Network Intelligent Technology Co.,Ltd.

Address before: No. 2000, Wangyue Central Road, Ji'nan City, Shandong Province, 250002

Applicant before: ELECTRIC POWER RESEARCH INSTITUTE OF STATE GRID SHANDONG ELECTRIC POWER Co.

Applicant before: National Network Intelligent Technology Co.,Ltd.

GR01 Patent grant