CN114119437B - GMS-based image stitching method for improving distortion of moving object - Google Patents
- Publication number
- CN114119437B CN114119437B CN202111328375.7A CN202111328375A CN114119437B CN 114119437 B CN114119437 B CN 114119437B CN 202111328375 A CN202111328375 A CN 202111328375A CN 114119437 B CN114119437 B CN 114119437B
- Authority
- CN
- China
- Prior art keywords
- image
- points
- matching
- images
- matching points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a GMS-based image stitching method that reduces distortion caused by moving objects, comprising the following steps: extracting a large number of uniformly distributed coarse matching points from the images to be stitched, dividing the images into grids, screening the coarse matching points, and removing mismatches to obtain fine matching points; randomly and uniformly selecting a subset of the fine matching points from each grid to form an initial matching point group, computing the transformation matrix of this group, and using the matrix to remove fine matching points that lie on moving objects, leaving matching points suitable for image stitching; and computing a homography matrix between the two images from the remaining matching points and applying the corresponding coordinate transformation. The transformed images to be stitched are then differenced to obtain a difference image, and threshold segmentation of the difference image yields the regions where the two images differ significantly. The fusion region is determined adaptively by computing an energy function of the difference image, and the images are finally fused with a gradual-in gradual-out method.
Description
Technical Field
The invention relates to the technical field of image stitching, and in particular to a GMS-based image stitching method for reducing moving-object distortion.
Background
Image stitching is an important research problem in the field of image processing. It refers to combining images that were acquired from different viewing angles, with different devices, or at different times, and that share partially overlapping regions, into a single high-resolution, wide-angle panoramic image through image registration and image fusion. Image stitching is widely applied in deep-sea exploration, remote-sensing image processing, sonar image analysis, and similar domains.
Feature extraction and matching is the first and most critical stage of image stitching: feature points are extracted from the two images and matched one by one. It is an important research problem in image processing, with extensive applications in object recognition, image indexing, motion tracking, and so on. Within the stitching pipeline, the real-time performance and the stability of feature-point matching are the two main criteria for evaluating a matching method. In video stitching, the feature extraction and matching stage consumes roughly two thirds of the total running time, and matching accuracy directly determines stitching quality, so fast, robust, and accurate feature matching is a key problem of image stitching.
Currently, the common feature matching methods include the SIFT, SURF, and ORB algorithms. SIFT performs feature detection in a pyramid scale space; it is highly robust and adapts well to scale, rotation, translation, and other transformations. SURF builds its scale space with box filters of different sizes, extracts candidate interest points through the Hessian matrix, and determines feature-point orientations by accumulating the Haar wavelet responses of pixels in the horizontal and vertical directions around each interest point; it runs faster than SIFT while retaining good invariance to scale, rotation, and translation. ORB detects interest points with the FAST algorithm, which compares the absolute intensity differences between a pixel and the 16 pixels on the surrounding circle; final feature points are selected by local non-maximum suppression and described with BRIEF descriptors. The ORB matching algorithm is extremely fast and well suited to real-time use.
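As context for the FAST test mentioned above, the following is a minimal numpy sketch of a FAST-style check. The function name, thresholds, and the minimum-hit rule are illustrative assumptions; real FAST additionally requires the differing pixels to form a contiguous arc:

```python
import numpy as np

# Offsets of the 16 pixels on the radius-3 circle used by the FAST test.
CIRCLE16 = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
            (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_like_score(img, y, x, t=20, min_hits=9):
    """Simplified FAST-style corner score: accumulate |I(p_i) - I(p)| over the
    16-pixel circle, reporting the sum when at least `min_hits` circle pixels
    differ from the centre by more than t (illustrative simplification)."""
    c = int(img[y, x])
    diffs = [abs(int(img[y + dy, x + dx]) - c) for dy, dx in CIRCLE16]
    strong = [d for d in diffs if d > t]
    return sum(strong) if len(strong) >= min_hits else 0

# Synthetic image: a bright square whose corner should trigger the test.
img = np.zeros((20, 20), dtype=np.uint8)
img[8:, 8:] = 200

corner_score = fast_like_score(img, 8, 8)  # at the square's corner
flat_score = fast_like_score(img, 4, 4)    # in a flat dark region
```

At the square's corner, 11 of the 16 circle pixels differ by 200, so the score is large; in the flat region no pixel differs and the score is zero.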
In terms of matching accuracy, SURF is roughly equivalent to SIFT while ORB is worse; in terms of speed, ORB is the fastest and can match in real time, SURF is slower, and SIFT is the slowest.
However, for image stitching, the matching points produced by these methods inevitably contain many mismatches. Too many mismatches lead to inaccurate image transformations and visible distortion in the stitched result, greatly reducing image quality. A secondary fine-matching step is therefore needed; the common choices are the RANSAC algorithm and the GMS algorithm.
The RANSAC algorithm randomly samples matching points and fits a model, labelling points consistent with the model as inliers and the rest as outliers. If the number of inliers exceeds a threshold N, the model is considered good; the model is then re-estimated from the inlier set by least squares so that it fits as many points as possible, and this is repeated until the best model is found, which finally separates correct matches from mismatches. The iteration makes RANSAC slow; for image stitching this computation cost is too high and stitching becomes inefficient. Moreover, when image quality is poor and the proportion of mismatches among the extracted coarse matching points is high, the number of RANSAC iterations grows sharply, so both its efficiency and its screening quality degrade.
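The sampling-and-refit loop of RANSAC can be illustrated with a deliberately simplified model. The sketch below fits a pure translation (one-point samples) rather than the homography used in stitching; all names, thresholds, and data are illustrative:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=2.0, seed=0):
    """Minimal RANSAC sketch for a pure-translation model (illustrative only;
    stitching uses homographies, which need 4-point samples)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))           # 1-point sample fits a translation
        t = dst[i] - src[i]                  # candidate model
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the inlier set (least squares = mean residual for a translation).
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# 20 correct matches shifted by (5, -3), plus 5 gross mismatches.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (25, 2))
dst = src + np.array([5.0, -3.0])
dst[20:] += rng.uniform(30, 60, (5, 2))      # outliers

t, inliers = ransac_translation(src, dst)
```

The loop keeps the model with the largest consensus set, then re-estimates it from the inliers only, mirroring the two-stage structure described above.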
The grid-based motion statistics matching algorithm (GMS) distinguishes false matches by analysing the probability distribution of a large number of matches: it computes a confidence score for each match by counting the corresponding matches in its neighborhood, and thereby separates correct matches from false ones. Even when many mismatches are present, it can still extract good-quality matches. However, although GMS is accurate, its matching points are often concentrated in local regions; for stitching, regions with dense feature points are stitched well while regions with sparse feature points are stitched poorly. In addition, GMS cannot filter out matches that lie on moving objects; if such matches enter the computation of the image coordinate transformation matrix, the relative displacement of the moving object introduces errors and the stitched image becomes distorted.
After feature matching, a coordinate transformation matrix is computed from the matches, the pixels of the two images are transformed into a common pixel coordinate system, and the corresponding regions are fused to complete the stitching. The main fusion algorithms used in stitching are the averaging method, the gradual-in gradual-out method, and the optimal seam method. Averaging is simple and widely used, but it easily produces ghosting when a moving object is present in the overlap region, and uneven illumination often leaves a visible seam at the boundary between the overlapping and non-overlapping regions. The gradual-in gradual-out method effectively handles uneven illumination but not the ghosting caused by a moving object in the overlap; the optimal seam algorithm handles that ghosting but not uneven illumination.
Therefore, how to provide a GMS-based image stitching method with uniformly distributed matching points and high stitched-image quality that reduces moving-object distortion is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides a GMS-based image stitching method that reduces moving-object distortion. It aims to solve three problems that arise when stitching with the grid-based motion statistics matching algorithm (GMS): the concentrated distribution of the matches obtained during feature matching, the degradation of stitching quality caused by matches on moving objects, and the ghosting produced when a moving object lies in the overlap region.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
A GMS-based image stitching method for reducing moving-object distortion, comprising the steps of:
performing coarse feature-point extraction and matching on the images to be stitched to obtain uniformly distributed coarse matching points;
dividing each image into G×G large grids with the grid-based motion statistics matching algorithm (GMS), and screening the coarse matching points grid by grid with motion statistics to obtain fine matching points;
randomly selecting fine matching points in each large grid to form an initial matching point group, and computing the transformation matrix of that group;
computing the feature mapping point of every fine matching point under the transformation matrix, computing the distance between each mapping point and its matched point, discarding fine matching points whose distance exceeds a threshold, and keeping the remaining fine matching points as the stitching matches;
computing the coordinate transformation matrix between the two images to be stitched from the stitching matches, and transforming the pixel coordinates of the images;
acquiring the image fusion region: computing the absolute difference of the coordinate-transformed images to obtain a difference image; thresholding the difference image; summing, row by row, all pixel values that exceed the pixel threshold to obtain a difference weight coefficient for each row; finding every row whose difference weight coefficient exceeds the coefficient threshold; and, for each such row, measuring its distance to the upper and lower boundaries of the overlap region;
if the current row is closer to the upper boundary, the region from that row to the upper boundary of the overlap is assigned to the upper image; if it is closer to the lower boundary, the region from that row to the lower boundary is assigned to the lower image; the remaining middle region is the image fusion region;
fusing the images in the image fusion region with the gradual-in gradual-out method.
Preferably, the fine matching points are obtained as follows:
(1) If image I_a has M feature points and image I_b has N feature points, the feature point set of the two images is {M, N}, and a matching point pair between the two images is x_i = {N_i, M_i}; the images to be matched are divided into G×G grids.
(2) Each large grid is divided into K×K small grids a_i, and the neighborhood confidence support S_i of a small grid a_i is computed by counting the feature matches between images I_a and I_b contained in the 8 neighboring small grids around a_i. A threshold T = α√n is set, where α is a hyperparameter and n is the number of feature points in the small grid a_i; if S_i is greater than T, the matching points in a_i are taken as the required fine matching points.
Preferably, randomly selecting fine matching points in each grid to obtain an initial matching point group and computing its transformation matrix comprises:
(1) counting the fine matching points contained in each large grid. If every large grid contains fine matching points, one fine matching point is selected at random from each large grid to form the initial matching point group; if k large grids contain no fine matching points, one matching point is first selected at random from each large grid that does, and k further matching points are then selected at random from the remaining fine matching points of the whole image, together forming the initial matching point group;
(2) fitting a transformation matrix to the initial matching point group. Using this matrix, the feature mapping point of every fine matching point is computed; matches whose Euclidean distance between mapping point and matched point exceeds a threshold are discarded, and the remaining matches, whose distances are below the threshold, are the matching points required for image stitching.
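The mapping-distance screening described above, keeping only matches consistent with the fitted matrix, might look like the following numpy sketch. The helper name, the example matrix, and the threshold are assumptions, not the patent's implementation:

```python
import numpy as np

def screen_by_mapping_distance(H, pts_a, pts_b, thresh=3.0):
    """Map pts_a through the 3x3 transformation H and keep only matches whose
    Euclidean distance to pts_b is below `thresh` (hypothetical helper)."""
    n = len(pts_a)
    homog = np.hstack([pts_a, np.ones((n, 1))])        # to homogeneous coords
    mapped = homog @ H.T
    mapped = mapped[:, :2] / mapped[:, 2:3]            # back to Cartesian
    dist = np.linalg.norm(mapped - pts_b, axis=1)
    return dist < thresh

# Static scene related by a pure shift; two "moving object" matches drift extra.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, -4.0],
              [0.0, 0.0, 1.0]])
pts_a = np.array([[0., 0.], [50., 20.], [80., 70.], [30., 90.], [60., 40.]])
pts_b = pts_a + np.array([10., -4.])
pts_b[3] += [15., 0.]        # moving object: extra displacement
pts_b[4] += [0., 12.]

keep = screen_by_mapping_distance(H, pts_a, pts_b)
```

Points carried by the static scene map exactly under H and are kept; the two points with extra displacement exceed the threshold and are screened out, which is the intended effect for matches on moving objects.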
Preferably, to address the ghosting produced when a moving object lies in the overlap region during fusion, the difference between the transformed images to be stitched is computed and the difference map is threshold-segmented to find the regions where the two images differ significantly; the image fusion region is then determined adaptively by computing an energy function of the thresholded map. Specifically:
(1) The two transformed images to be stitched are converted to grayscale, giving I_ag(x, y) and I_bg(x, y), and the grayscale images are normalized to suppress illumination differences, giving γ_ag(x, y) and γ_bg(x, y); the absolute difference over the overlap region of the two images gives the difference image g(x, y);
(2) The difference image is threshold-segmented to obtain the regions where the two images differ significantly;
(3) All pixel values on the difference map that exceed the pixel threshold are summed row by row to give each row's difference weight coefficient; the median of the coefficients over all rows is taken as the coefficient threshold, and every row whose coefficient exceeds it is found;
(4) For each such row, its distances to the upper and lower boundaries of the overlap region are measured; if the row is closer to the upper boundary, the region from the row to the upper boundary is assigned to the upper image, if closer to the lower boundary, the region from the row to the lower boundary is assigned to the lower image, and the remaining middle region is the image fusion region D.
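Steps (1) to (3), differencing, thresholding, and the per-row difference weight coefficients, can be sketched as follows on synthetic data; the function name and thresholds are illustrative, and the real method operates on the overlap of two registered images:

```python
import numpy as np

def fusion_rows(gray_a, gray_b, pix_thresh=0.2):
    """Rows of the overlap whose thresholded difference energy exceeds the
    median row energy (sketch of steps (1)-(3); names are illustrative)."""
    # Normalise to [0, 1] to damp illumination differences.
    na = (gray_a - gray_a.min()) / (np.ptp(gray_a) + 1e-9)
    nb = (gray_b - gray_b.min()) / (np.ptp(gray_b) + 1e-9)
    diff = np.abs(na - nb)                      # difference image g(x, y)
    diff[diff <= pix_thresh] = 0                # threshold segmentation
    row_energy = diff.sum(axis=1)               # per-row difference weight
    return np.where(row_energy > np.median(row_energy))[0]

# Two identical overlaps except for a "moving object" block in rows 10-14.
a = np.full((30, 40), 100.0)
b = a.copy()
b[10:15, 5:15] = 200.0

rows = fusion_rows(a, b)
```

Only the rows crossed by the simulated moving object carry difference energy above the median, so exactly those rows are flagged for the boundary assignment of step (4).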
Preferably, fusing the images in the image fusion region with the gradual-in gradual-out method comprises:
letting the pixel values at coordinate (x, y) of the two images I_a and I_b to be stitched be I_a(x, y) and I_b(x, y), the pixel value of the fused image is
I(x, y) = d · I_a(x, y) + (1 − d) · I_b(x, y),
where d is a weight factor computed from the distance between the pixel and the fusion-region boundaries: with x_l and x_r the boundaries of the fusion region along the stitching direction, d = (x_r − x) / (x_r − x_l), so that d falls linearly from 1 to 0 across the region.
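A minimal sketch of the gradual-in gradual-out blend, assuming the fusion region spans a range of columns and the weight d falls linearly across it (the function name and the column orientation are illustrative):

```python
import numpy as np

def fade_in_out_blend(img_a, img_b, x_left, x_right):
    """Gradual-in/gradual-out fusion over columns [x_left, x_right):
    the weight d falls linearly from 1 (pure img_a) to 0 (pure img_b)."""
    out = img_b.astype(float).copy()
    out[:, :x_left] = img_a[:, :x_left]               # left of region: image a
    for x in range(x_left, x_right):
        d = (x_right - x) / (x_right - x_left)        # weight factor d
        out[:, x] = d * img_a[:, x] + (1 - d) * img_b[:, x]
    return out

a = np.full((4, 10), 100.0)
b = np.full((4, 10), 200.0)
fused = fade_in_out_blend(a, b, 3, 7)
```

The pixel values ramp smoothly from 100 to 200 across the fusion region, which is what suppresses the visible illumination seam described earlier.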
Compared with the prior art, the invention discloses a GMS-based image stitching method that reduces moving-object distortion. It mainly addresses three problems of existing methods: the concentrated distribution of matches obtained by GMS feature matching, the degradation of stitched-image quality caused by matches on moving objects, and the ghosting produced by moving objects during fusion. For the concentrated match distribution of GMS, feature points are first extracted and matched to obtain uniformly distributed coarse matches, from which fine matches are then obtained. For the matches on moving objects, one fine match is randomly selected from each divided grid to form an initial point group and the group's transformation matrix is computed; the feature mapping points of all fine matches under this matrix are computed, the distances between mapping points and matched points are measured, and the matches on moving objects are screened out. For the ghosting caused by a moving object in the overlap region, the difference between the transformed images to be stitched is computed and threshold-segmented to find the regions of significant difference, the fusion region is determined adaptively through an energy function of the thresholded map, and fusion is finally performed with the gradual-in gradual-out method; this balances illumination while keeping moving objects out of the fusion region, so ghosting is avoided.
With the image stitching method provided by the invention, more accurate panoramic images of higher image quality can be stitched.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image stitching method for improving distortion of a moving object based on GMS according to the present invention;
fig. 2 is a schematic diagram of pyramid meshing in an image stitching method for improving distortion of a moving object based on GMS according to the present invention;
Fig. 3 is a schematic view of GMS meshing in an image stitching method for improving distortion of a moving object based on GMS according to the present invention;
fig. 4 is a schematic diagram of motion matching point screening in an image stitching method for improving distortion of a moving object based on GMS according to the present invention;
Fig. 5 is a schematic diagram of image fusion in an image stitching method for improving distortion of a moving object based on GMS according to the present invention.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without inventive effort fall within the scope of the present invention.
The embodiment of the invention discloses a GMS-based image stitching method for reducing moving-object distortion, shown in Fig. 1, comprising the following steps:
performing coarse feature-point extraction and matching on the images to be stitched to obtain uniformly distributed coarse matching points;
dividing each image into G×G large grids with the grid-based motion statistics matching algorithm (GMS), and screening the coarse matching points grid by grid with motion statistics to obtain fine matching points;
randomly selecting fine matching points in each large grid to form an initial matching point group, and computing the transformation matrix of that group;
computing the feature mapping point of every fine matching point under the transformation matrix, computing the distance between each mapping point and its matched point, discarding fine matching points whose distance exceeds a threshold, and keeping the remaining fine matching points as the stitching matches;
computing the coordinate transformation matrix between the two images to be stitched from the stitching matches, and transforming the pixel coordinates of the images;
acquiring the image fusion region: computing the absolute difference of the coordinate-transformed images to obtain a difference image; thresholding the difference image; summing, row by row, all pixel values that exceed the pixel threshold to obtain a difference weight coefficient for each row; finding every row whose difference weight coefficient exceeds the coefficient threshold; and, for each such row, measuring its distance to the upper and lower boundaries of the overlap region;
if the current row is closer to the upper boundary, the region from that row to the upper boundary of the overlap is assigned to the upper image; if it is closer to the lower boundary, the region from that row to the lower boundary is assigned to the lower image; the remaining middle region is the image fusion region;
fusing the images in the image fusion region with the gradual-in gradual-out method.
To further implement the above technical solution, an ORB algorithm based on pyramid grids is applied to the two input images to be matched, extracting and matching feature points to obtain uniformly distributed coarse matching points; specifically:
(1) For an input image, an image pyramid is constructed: the input image is doubled in size, and a Gaussian pyramid is built on the enlarged image by Gaussian-blurring it, i.e. convolving it with a Gaussian kernel:
L(x, y, σ) = G(x, y, σ) * I(x, y),
where L(x, y, σ) is the blurred image, G(x, y, σ) is the Gaussian kernel, and σ is fixed at 1.6. Four Gaussian-blurred images are produced, which together with the base image form the five layers of the first group. The most blurred image of group 1 is downsampled by a factor of two to obtain the first image of group 2, which is then blurred with smoothing factors σ, 2σ, 4σ, and 8σ to obtain the five layers of the second group, and so on. Images within a group share the same size but differ in smoothing coefficient; four groups are constructed in this way, forming the Gaussian image pyramid.
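The octave layout above can be recorded without performing any convolution; the sketch below only plans sizes and smoothing factors under the stated assumptions (doubled input, four octaves, blurs σ, 2σ, 4σ, 8σ), and the function name is illustrative:

```python
import numpy as np

def pyramid_plan(h, w, sigma0=1.6, octaves=4, blurs=4):
    """Plan the pyramid described above: each octave holds the base image plus
    `blurs` Gaussian-blurred copies (sigma, 2*sigma, ...); the most blurred
    layer is downsampled by 2 to seed the next octave.  (Sketch: records
    sizes and sigmas only; real code would apply the Gaussian convolutions.)"""
    plan = []
    size = (2 * h, 2 * w)                     # octave 0 starts from doubled input
    for o in range(octaves):
        sigmas = [0.0] + [sigma0 * (2 ** k) for k in range(blurs)]
        plan.append({"size": size, "sigmas": sigmas})
        size = (size[0] // 2, size[1] // 2)   # downsample most-blurred layer
    return plan

plan = pyramid_plan(240, 320)
```

Each entry lists five layers (base plus four blurs) at a constant size, halving between groups, matching the construction described above.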
(2) The number of feature points required at each pyramid level is allocated uniformly, in proportion to the image area of that level, and the required number of interest points is extracted with the FAST method: taking the candidate pixel p as centre, the absolute grey-level differences between p and the 16 pixels on the circle of radius 3 around it are compared, and p is taken as an interest point if the differences exceed the threshold.
(3) To prevent the feature points from clustering, non-maximum suppression is applied to the interest points, using the sum of the absolute grey-level differences between the centre point and its 16 surrounding circle points as the response score.
(4) The Harris response is computed. Let N be the number of feature points to extract at the current level. To distribute the feature points more uniformly, each pyramid level is divided into a 30×30 grid and the N/(30×30) corners with the largest response are extracted from each cell independently; if a cell yields no points, the FAST threshold is lowered so that weakly textured regions can still contribute some FAST corners.
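The per-cell allocation in step (4) can be sketched as follows, using a small 4×4 grid instead of 30×30 and a precomputed response map; all names are illustrative:

```python
import numpy as np

def pick_grid_corners(response, grid=4):
    """Pick the strongest-response pixel per grid cell so corners spread
    uniformly over the image (sketch of the 30x30-grid idea with a 4x4 grid)."""
    h, w = response.shape
    ys, xs = h // grid, w // grid
    corners = []
    for gy in range(grid):
        for gx in range(grid):
            cell = response[gy * ys:(gy + 1) * ys, gx * xs:(gx + 1) * xs]
            if cell.max() <= 0:
                continue                      # analogue of "lower the threshold"
            k = np.unravel_index(np.argmax(cell), cell.shape)
            corners.append((gy * ys + k[0], gx * xs + k[1]))
    return corners

# Synthetic response map; one strongest point is picked in every cell.
rng = np.random.default_rng(0)
resp = rng.uniform(0, 1, (40, 40))
corners = pick_grid_corners(resp, grid=4)
```

Because every cell contributes its own maximum, the selected corners cover all 16 cells rather than clustering in a single textured region.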
To further implement the above technical solution, the image is divided into G×G large grids (G may be set by the user according to the image size), and grid motion statistics screening is applied to the coarse matching points grid by grid to obtain the fine matching points. The specific steps are as follows:
(1) If image I_a has M feature points and image I_b has N feature points, the feature point set of the two images is {M, N}, a matching point pair between the two images is x_i = {N_i, M_i}, and the neighborhoods of x_i in I_a and I_b are J_a and J_b. Because matches surrounding a correct match are themselves likely to be correct, while feature points around a mismatch are unlikely to map to its (wrong) corresponding position, a correctly matched pair x_i implies that the feature points in neighborhood J_a have a high probability of matching into neighborhood J_b, whereas an incorrectly matched pair x_j has few corresponding matches between its neighborhoods J_a and J_b. To screen all matches in the image quickly, the images to be matched are divided into G×G grids (G may be set by the user according to the image size).
(2) Calculate the neighborhood confidence support S_i of each grid cell. S_i counts the corresponding matching pairs in the neighborhoods J_a and J_b of the matching pair x_i; S_i equals that number of matching points minus 1, and is computed as:

S_i = Σ_{k=1}^{K} |X_{a_k b_k}| − 1
Here K denotes the K × K small cells into which each G × G large cell is divided (K is typically 9 and can be set by the user according to the image size), {a_k, b_k} is a matching pair in the corresponding small-cell region, and X_{a_k b_k} is the set of corresponding matching pairs {a_k, b_k} over the neighborhood. To avoid many feature points falling exactly on cell boundaries, the grid is shifted by half a cell in width and height, the score recomputed, and the highest-scoring grid, i.e. G_1{a_1, a_2, ..., a_i} and G_2{b_1, b_2, ..., b_j}, used for the calculation. For each cell a_i of G_1 (the G × G large cells of image I_a to be matched), the cell b_j of G_2 (the G × G large cells of image I_b to be matched) containing the largest number of matches with a_i is found; if the matches in b_j exceed a threshold T, the 8 cells around b_j and the 8 cells around a_i are taken, and the numbers of matching points in corresponding cell positions are accumulated as the neighborhood confidence support S_i of cell a_i.
(3) Set a threshold T = α·sqrt(n), where α is a hyperparameter (typically 6, settable by the user according to the image size) and n is the number of all feature points in the small cell. If the number of corresponding matches for cell a_i, i.e. the neighborhood confidence support S_i, is greater than the threshold T, the matching points in that cell are considered correct; otherwise they are considered incorrect.
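Steps (1)–(3) can be illustrated with a minimal sketch of the neighborhood support test. The per-cell match counts in `cell_matches` and all names are hypothetical; only the 3 × 3 neighborhood accumulation and the threshold T = α·sqrt(n) follow the text.

```python
import math

def gms_cell_support(cell_matches, a_cell, b_cell, grid_g):
    """Neighborhood confidence support S_i: sum the matches between
    the 3 x 3 neighborhoods around the paired cells a_cell -> b_cell.
    `cell_matches` maps ((ax, ay), (bx, by)) -> match count."""
    s = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ax, ay = a_cell[0] + dx, a_cell[1] + dy
            bx, by = b_cell[0] + dx, b_cell[1] + dy
            if 0 <= ax < grid_g and 0 <= ay < grid_g and \
               0 <= bx < grid_g and 0 <= by < grid_g:
                s += cell_matches.get(((ax, ay), (bx, by)), 0)
    return s - 1   # S_i = (number of supporting matches) - 1

def is_cell_correct(s_i, n_points, alpha=6.0):
    """Accept the cell's matches when S_i exceeds T = alpha * sqrt(n)."""
    return s_i > alpha * math.sqrt(n_points)
```

Matches in cells that pass the test are kept as the fine matching points; the rest are discarded as false matches.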
To further implement the above technical solution, the specific content of randomly selecting fine matching points from each grid cell to obtain an initial matching point group and computing its transformation matrix includes:
(1) Count the fine matching points contained in each large cell. If every large cell contains fine matching points, randomly select one fine matching point per cell to obtain the initial matching point group. If k large cells contain no fine matching points, first randomly select one matching point from each cell that does, then randomly select k further matching points from the remaining fine matching points over the whole image to complete the initial matching point group.
(2) Using the obtained initial matching point group, compute the transformation matrix fitted to it.
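The grid-stratified sampling of step (1) can be sketched as follows (pure Python; the names and data layout are assumptions). In practice the transformation matrix of step (2) would then be fitted to the returned sample, e.g. with OpenCV's `cv2.findHomography`.

```python
import random

def stratified_sample(grid_points, extra_pool, total_cells):
    """Pick one fine match from every occupied grid cell; if k cells
    are empty, top up with k random picks from the remaining matches,
    as described in step (1). `grid_points` maps cell -> list of
    matches, `extra_pool` is the full list of fine matches."""
    sample = [random.choice(pts) for pts in grid_points.values() if pts]
    k = total_cells - len(sample)            # number of empty cells
    leftover = [p for p in extra_pool if p not in sample]
    if k > 0 and leftover:
        sample.extend(random.sample(leftover, min(k, len(leftover))))
    return sample
```

The stratification keeps the fitted matrix from being dominated by one densely matched region of the image.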
To further implement the technical scheme, the transformation matrix is used to compute the feature mapping points of all the fine matching points. If a matching point lies on a moving object, the object's relative displacement gives it a topology different from the other feature points, so the mapping point computed for it by the transformation matrix fitted from the initial matching point group lies far from its corresponding match on the moving object. Therefore, each feature point is transformed by the matrix, the Euclidean distance between its mapping point and its corresponding matching point is computed, matches whose distance exceeds a threshold d (set by the user according to the image) are screened out, and the matches whose distance is below the threshold are kept as the matching points required for image stitching, as shown in fig. 4.
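The reprojection screening can be sketched as follows, assuming a 3 × 3 transformation matrix given as nested lists; the function name and threshold value are illustrative.

```python
def screen_by_reprojection(matches, H, d_thresh):
    """Apply the 3x3 transformation matrix H to each source point and
    keep only matches whose mapped point lies within d_thresh
    (Euclidean distance) of its partner; points on moving objects map
    far from their partners and are dropped."""
    kept = []
    for (xa, ya), (xb, yb) in matches:
        # homogeneous transform of (xa, ya) by H
        w = H[2][0] * xa + H[2][1] * ya + H[2][2]
        mx = (H[0][0] * xa + H[0][1] * ya + H[0][2]) / w
        my = (H[1][0] * xa + H[1][1] * ya + H[1][2]) / w
        if ((mx - xb) ** 2 + (my - yb) ** 2) ** 0.5 < d_thresh:
            kept.append(((xa, ya), (xb, yb)))
    return kept
```

With a pure translation H, a static point lands exactly on its partner while a point on a moving object does not, which is what the screening exploits.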
To further implement the technical scheme, the coordinate transformation matrix between the two images to be stitched is computed from the obtained matching pairs; one image is used as the reference and the other is transformed into its coordinate frame.
To further implement the technical scheme, the absolute difference of the two transformed images to be stitched is computed to obtain a difference map, and the image fusion area is derived from the difference map. The specific steps are as follows:
(1) Convert the two transformed images to be stitched into grayscale, obtaining I_ag(x, y) and I_bg(x, y); normalize the grayscale images to eliminate the influence of illumination, obtaining I′_ag(x, y) and I′_bg(x, y); take the absolute difference over the overlap region of the two images to obtain the difference map g(x, y) of the overlap region.
(2) Apply threshold segmentation to the difference map to obtain the regions where the two images differ significantly.
(3) For each row of the difference map, sum all pixel values exceeding the pixel threshold to obtain that row's difference weight coefficient; take the median of all rows' coefficients as the difference-weight threshold, and find the rows whose difference weights exceed it.
(4) For each row whose difference weight coefficient exceeds the threshold, compute its distances to the upper and lower boundaries of the overlap region. If the row is closer to the upper boundary, take the area from the row to the upper boundary of the overlap region as the upper-image area; if it is closer to the lower boundary, take the area from the row to the lower boundary as the lower-image area; the remaining middle area is the fusion region D.
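Steps (1)–(4) can be condensed into the following sketch, which operates on a difference map given as a list of rows. The names and the simplified row bookkeeping are assumptions, not the patent's implementation.

```python
def fusion_region(diff, pixel_thresh):
    """Given the overlap-region difference map `diff` (list of rows of
    pixel values), compute per-row difference weights, threshold them
    at their median, assign flagged rows to the nearer overlap
    boundary, and return the remaining middle rows (fusion region D)."""
    weights = [sum(v for v in row if v > pixel_thresh) for row in diff]
    med = sorted(weights)[len(weights) // 2]      # median as threshold
    flagged = [i for i, w in enumerate(weights) if w > med]
    top, bottom = set(), set()
    h = len(diff)
    for r in flagged:
        # rows up to (or from) a flagged row belong to the nearer
        # boundary's image, so the moving object is not blended
        if r < h - 1 - r:
            top.update(range(0, r + 1))
        else:
            bottom.update(range(r, h))
    return [r for r in range(h) if r not in top and r not in bottom]
```

Rows assigned to `top` are copied from the upper image, rows in `bottom` from the lower image, and only the returned rows are blended, which keeps a moving object out of the blend.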
To further implement the above technical solution, the images are fused in the image fusion area D by a fade-in/fade-out (gradual-in, gradual-out) method, as shown in fig. 5. Let the pixel values at coordinates (x, y) in the two images I_a and I_b to be stitched be I_a(x, y) and I_b(x, y), respectively; the pixel value at that point in the fused image is:

I(x, y) = I_a(x, y) when (x, y) lies only in I_a; I(x, y) = d·I_a(x, y) + (1 − d)·I_b(x, y) when (x, y) lies in the fusion area D; I(x, y) = I_b(x, y) when (x, y) lies only in I_b.
In the above formula, d is a weight factor calculated from the distance between the pixel and the fusion-region boundary; it decreases linearly from 1 to 0 across the region: d = (y_2 − y)/(y_2 − y_1), where y_1 and y_2 denote the upper and lower boundaries of the fusion region D.
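A minimal sketch of the fade-in/fade-out fusion over a row-wise fusion region follows; the linear form of d is an assumption consistent with the description, and images are plain lists of rows for illustration.

```python
def blend_rows(img_a, img_b, fuse_start, fuse_end):
    """Fade-in/fade-out fusion over rows [fuse_start, fuse_end):
    above the region take img_a, below take img_b, and inside weight
    the two images by d, which falls linearly from 1 to 0 with
    distance into the region."""
    height = len(img_a)
    out = []
    for y in range(height):
        if y < fuse_start:
            out.append(list(img_a[y]))
        elif y >= fuse_end:
            out.append(list(img_b[y]))
        else:
            d = (fuse_end - y) / (fuse_end - fuse_start)  # weight factor
            out.append([d * a + (1 - d) * b
                        for a, b in zip(img_a[y], img_b[y])])
    return out
```

The gradual weight removes the visible seam that a hard cut between the two images would leave.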
The invention mainly addresses three problems of the prior art: matching points obtained by the GMS algorithm concentrate in a local area during image stitching; matching points on a moving object degrade the quality of the stitched image; and the moving object produces ghosting during image fusion.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts can be cross-referenced between embodiments. Since the device disclosed in an embodiment corresponds to the method disclosed there, its description is relatively brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (5)
1. A GMS-based image stitching method for improving distortion of a moving object, comprising the steps of:
Performing rough extraction and matching of feature points on the images to be stitched to obtain uniformly distributed coarse matching points;
Dividing the image into G × G large grids by using the matching algorithm GMS based on grid motion statistics, and carrying out grid motion statistics screening on the coarse matching points of the image according to the grids to obtain fine matching points;
Randomly selecting fine matching points in each large grid to obtain an initial matching point group, and calculating a transformation matrix of the initial matching point group;
Calculating feature mapping points of all the fine matching points by using a transformation matrix, calculating the distance between each feature mapping point and its fine matching point, screening out the fine matching points with a distance larger than a threshold, and taking the remaining fine matching points with a distance smaller than the threshold as the stitching matching points required for image stitching;
calculating a coordinate transformation matrix between two images to be spliced according to the splicing matching points, and carrying out pixel coordinate transformation on the images to be spliced;
Acquiring an image fusion area: performing absolute difference calculation on the coordinate-transformed images to be stitched to obtain a difference map; performing threshold segmentation on the difference map; adding, row by row, all pixel values on the difference map exceeding a pixel threshold to obtain a difference weight coefficient for each row; obtaining the rows whose difference weight coefficients exceed a difference-weight-coefficient threshold; and, for each such row, respectively obtaining its distances to the upper and lower boundaries of the overlap region;
If the current row is closer to the upper boundary, taking the area from the row to the upper boundary of the overlap region as the upper-image area; if it is closer to the lower boundary, taking the area from the row to the lower boundary as the lower-image area; the remaining middle area being the image fusion area;
And fusing the images in the image fusion area by a fade-in/fade-out method.
2. The GMS-based image stitching method for improving distortion of a moving object according to claim 1, wherein the specific method for obtaining a fine matching point comprises:
(1) If image I_a has M feature points and image I_b has N feature points, setting the feature point set of the two images as {M, N} and a matching pair between the two images as x_i = {N_i, M_i}; dividing the image to be matched into G × G grids;
(2) Dividing each large grid cell into K × K small cells a_i, and calculating the neighborhood confidence support S_i of a small cell a_i by counting the feature matching points of images I_a and I_b contained in the 8 neighboring small cells around a_i; setting a threshold T = α·sqrt(n), wherein α is a hyperparameter and n is the number of all feature points in the small cell a_i; if S_i is greater than T, the matching points in the small cell a_i are considered the required fine matching points.
3. The GMS-based image stitching method for improving distortion of a moving object according to claim 1, wherein the specific content of randomly selecting a fine matching point in each grid cell to obtain an initial matching point group and calculating the transformation matrix of the initial matching point group comprises:
(1) Counting the fine matching points contained in each large cell; if every large cell contains fine matching points, randomly selecting one fine matching point per cell to obtain the initial matching point group; if k large cells contain no fine matching points, first randomly selecting one matching point from each cell that does, then randomly selecting k further matching points from the remaining fine matching points over the whole image to obtain the initial matching point group;
(2) Using the obtained initial matching point group, calculating the transformation matrix fitted to it; calculating, with the transformation matrix, the feature mapping points of all the fine matching points, screening out the matching points whose Euclidean distance between mapping point and corresponding matching point is greater than a threshold, the remaining matching points whose Euclidean distance is smaller than the threshold being the matching points required for image stitching.
4. The GMS-based image stitching method for improving distortion of a moving object according to claim 1, wherein, to address the ghosting produced when a moving object lies in the overlap region during image fusion, the difference between the transformed images to be stitched is calculated and threshold segmentation is applied to the difference map to obtain the regions where the two images differ significantly, and the image fusion area is determined adaptively by calculating an energy function of the threshold map; obtaining the image fusion area specifically comprises:
(1) Converting the two transformed images to be stitched into grayscale images to obtain I_ag(x, y) and I_bg(x, y), normalizing the grayscale images to eliminate the influence of illumination to obtain I′_ag(x, y) and I′_bg(x, y), and taking the absolute difference over the overlap region of the two images to obtain the difference map g(x, y) of the overlap region;
(2) Threshold segmentation is carried out on the difference image, so that a region with obvious difference between the two images is obtained;
(3) Adding, row by row, all pixel values on the difference map exceeding a pixel threshold to obtain a difference weight coefficient for each row, taking the median of all rows' difference weight coefficients as the difference-weight-coefficient threshold, and finding the rows whose difference weights exceed that threshold;
(4) For each such row, respectively acquiring its distances to the upper and lower boundaries of the overlap region; if the row is closer to the upper boundary, taking the area from the row to the upper boundary of the overlap region as the upper-image area; if it is closer to the lower boundary, taking the area from the row to the lower boundary as the lower-image area; the remaining middle area being the image fusion area D.
5. The GMS-based image stitching method for improving distortion of a moving object according to claim 1, wherein the specific content of fusing the images in the image fusion area by the fade-in/fade-out method comprises:
let the pixel values at coordinates (x, y) in the two images I_a and I_b to be stitched be I_a(x, y) and I_b(x, y), respectively; the pixel value at that point in the fused image is: I(x, y) = I_a(x, y) when (x, y) lies only in I_a; I(x, y) = d·I_a(x, y) + (1 − d)·I_b(x, y) when (x, y) lies in the fusion area D; I(x, y) = I_b(x, y) when (x, y) lies only in I_b;
wherein d is a weight factor calculated from the distance between the pixel and the fusion-area boundary, as d = (y_2 − y)/(y_2 − y_1), where y_1 and y_2 are the upper and lower boundaries of the fusion area D.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111328375.7A CN114119437B (en) | 2021-11-10 | 2021-11-10 | GMS-based image stitching method for improving distortion of moving object |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114119437A CN114119437A (en) | 2022-03-01 |
CN114119437B true CN114119437B (en) | 2024-05-14 |
Family
ID=80378204
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111328375.7A Active CN114119437B (en) | 2021-11-10 | 2021-11-10 | GMS-based image stitching method for improving distortion of moving object |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114119437B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114842227B (en) * | 2022-05-30 | 2025-06-24 | 江南大学 | A feature point matching and screening method based on adaptive regional motion statistics |
CN115423681A (en) * | 2022-08-12 | 2022-12-02 | 西南交通大学 | Unmanned aerial vehicle image splicing method based on IB-SURF and neighborhood matching method |
CN116109852B (en) * | 2023-04-13 | 2023-06-20 | 安徽大学 | Quick and high-precision image feature matching error elimination method |
CN116310447B (en) * | 2023-05-23 | 2023-08-04 | 维璟(北京)科技有限公司 | Remote sensing image change intelligent detection method and system based on computer vision |
CN119048344B (en) * | 2024-10-31 | 2025-03-04 | 山东省地质测绘院 | Remote sensing image stitching method, device, computer equipment and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104376548A (en) * | 2014-11-07 | 2015-02-25 | 中国电子科技集团公司第二十八研究所 | Fast image splicing method based on improved SURF algorithm |
KR101692227B1 (en) * | 2015-08-18 | 2017-01-03 | 광운대학교 산학협력단 | A panorama image generation method using FAST algorithm |
CN109741240A (en) * | 2018-12-25 | 2019-05-10 | 常熟理工学院 | A Multiplane Image Mosaic Method Based on Hierarchical Clustering |
CN110111248A (en) * | 2019-03-15 | 2019-08-09 | 西安电子科技大学 | A kind of image split-joint method based on characteristic point, virtual reality system, camera |
CN110992263A (en) * | 2019-11-27 | 2020-04-10 | 国网山东省电力公司电力科学研究院 | Image stitching method and system |
CN111784576A (en) * | 2020-06-11 | 2020-10-16 | 长安大学 | An Image Mosaic Method Based on Improved ORB Feature Algorithm |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101984463A (en) * | 2010-11-02 | 2011-03-09 | 中兴通讯股份有限公司 | Method and device for synthesizing panoramic image |
Non-Patent Citations (2)
Title |
---|
Image stitching based on SIFT features and successive mismatch removal; Zhang Jing; Yuan Zhenwen; Zhang Xiaochun; Li Ying; Semiconductor Optoelectronics; 2016-02-15 (No. 01); full text *
Image registration algorithm fusing GMS with VCS+GC-RANSAC; Ding Hui; Li Lihong; Yuan Gang; Journal of Computer Applications; 2020-04-10 (No. 04); full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||