
CN113723465A - Improved feature extraction method and image splicing method based on same - Google Patents

Improved feature extraction method and image splicing method based on same Download PDF

Info

Publication number
CN113723465A
CN113723465A
Authority
CN
China
Prior art keywords
image
feature
algorithm
feature extraction
spliced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110883189.3A
Other languages
Chinese (zh)
Other versions
CN113723465B (en)
Inventor
石振锋
张萌菲
张孟琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN202110883189.3A priority Critical patent/CN113723465B/en
Publication of CN113723465A publication Critical patent/CN113723465A/en
Application granted granted Critical
Publication of CN113723465B publication Critical patent/CN113723465B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G06F18/2113 Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract



An improved feature extraction method and an image stitching method based on the method belong to the technical field of digital image processing. The invention improves the stitching speed of aerial images without reducing stitching precision. The feature extraction method uses the FAST-9 algorithm to extract features from the images I_A and I_B to be stitched, uses Harris corner detection to obtain feature points, and obtains feature strings through the improved BRISK algorithm, thereby realizing feature extraction. The image stitching method starts from the ORB-based image stitching algorithm and, combined with the characteristics of the aerial images returned by the UAV, proposes an improved ORB-based fast image stitching algorithm that obtains a panorama quickly and efficiently. Simulation experiments verify that the invention can effectively improve the stitching speed of various stitching algorithms without losing precision. The invention can be widely applied to the stitching of aerial images, quickly and accurately obtaining a global image of the aerial photography area.


Description

Improved feature extraction method and image splicing method based on same
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to an improved feature extraction algorithm and an image splicing algorithm based on the same.
Background
In the harsh environments after a disaster, on-site image collection becomes dangerous and slow: the field of view of an ordinary camera is small, images of large scenes have very low resolution, and panoramic cameras or wide-angle lenses often introduce distortion. UAV photography, by contrast, is flexible and mobile, making it well suited to collecting scene images after a disaster. When a disaster occurs, rapidly acquiring global image information of the disaster area and grasping the overall distribution of damage is very important for subsequent rescue work.
Global image information relies on image stitching technology, which has long received much attention in image processing and computer vision. Image stitching rests on two key technologies: image registration and image fusion. Image registration aligns two images obtained under different shooting conditions in the same coordinate system; common registration methods fall into two classes, region-based image registration and feature-based image registration. Image fusion mainly addresses the visible seams caused by factors such as uneven illumination.
The Harris corner detection algorithm is one of the early typical feature detection algorithms, but it is not scale invariant (Smith S M, Brady J M. SUSAN - A New Approach to Low Level Image Processing [J]. International Journal of Computer Vision, 1997, 23(1): 45-78.). The SIFT (Scale-Invariant Feature Transform) algorithm was therefore proposed in 1999 and improved by Lowe in 2004 (Lowe D. Distinctive Image Features from Scale-Invariant Keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.). SIFT shows very good robustness: the extracted features remain unchanged even when the image is rotated, scaled, affine transformed, or affected by different noise and lighting conditions. However, the SIFT operator uses a 128-dimensional feature vector to describe each feature point, so computing the feature vectors consumes a large amount of time. To overcome this drawback of SIFT, many scholars have proposed improvements. In 2006, Bay et al. proposed the SURF (Speeded Up Robust Features) operator, based on integral-image techniques (Bay H, Tuytelaars T, Van Gool L. SURF: Speeded Up Robust Features [C]. European Conference on Computer Vision, 2006: 404-417.). It changes how the scale pyramid is generated and how the descriptor's feature vector is structured; it keeps the robustness and noise resistance of the SIFT feature detection operator while greatly improving the speed of feature extraction, running 3-5 times faster than SIFT. In 2011, Rublee et al. improved the FAST feature detector and the BRIEF feature descriptor to obtain ORB (Oriented FAST and Rotated BRIEF), an operator that can be computed rapidly and performs well under varying illumination and rotation (Rublee E, Rabaud V, Konolige K, Bradski G. ORB: An efficient alternative to SIFT or SURF [C]. International Conference on Computer Vision, 2011: 2564-2571.).
Feature-based image registration algorithms preserve image quality and offer good robustness, but they are complex and computationally expensive; region-based registration algorithms handle image rotation, translation and scaling well, but they place high demands on the overlap and size of the images and are unsuitable for stitching UAV images. A UAV image stitching algorithm therefore has to achieve high registration speed, reduce invalid searching and mismatching, and balance robustness against stitching quality, so as to finally realize fast automatic stitching.
Disclosure of Invention
The invention provides an improved feature extraction method and an image stitching method based on it, aiming to improve the stitching speed of aerial images without reducing stitching precision:
an improved region-based feature extraction method, comprising the steps of:
S1, extract features from the images I_A and I_B to be stitched with the FAST-9 algorithm and obtain the coordinates of the feature points, then rank the feature points from best to worst with Harris corner detection and keep the better-performing corners;
S2, determine the direction of each feature point descriptor through the intensity centroid;
S3, obtain a binary feature string, i.e. the extracted feature information, through the improved BRISK algorithm.
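For orientation, a minimal Python/OpenCV sketch of S1-S3 follows; it assumes OpenCV's FAST detector (whose default segment test is FAST-9), an explicit Harris-response ranking, and ORB's rotated binary descriptor as a stand-in for the improved BRISK string. All function names and parameter values are illustrative, not taken from the invention:

```python
import cv2
import numpy as np

def extract_features(img_gray, n_best=500):
    # S1: FAST corner detection (OpenCV's default FAST uses the segment test
    # over a 16-pixel circle with a 9-pixel arc, i.e. FAST-9).
    fast = cv2.FastFeatureDetector_create(threshold=20)
    kps = fast.detect(img_gray, None)
    # S1: rank the detected corners by their Harris response and keep the best.
    harris = cv2.cornerHarris(np.float32(img_gray), blockSize=2, ksize=3, k=0.04)
    kps.sort(key=lambda kp: harris[int(kp.pt[1]), int(kp.pt[0])], reverse=True)
    kps = kps[:n_best]
    # S2 + S3: ORB assigns each keypoint an intensity-centroid orientation and
    # computes a rotated binary (BRIEF-family) descriptor, standing in here
    # for the patent's improved-BRISK feature string.
    orb = cv2.ORB_create()
    kps, desc = orb.compute(img_gray, kps)
    return kps, desc
```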
Further defined, in S1, when the FAST-9 algorithm extracts features from the images I_A and I_B to be stitched, the value range of feature extraction is as follows:
the overlap rate between the images I_A and I_B to be stitched is γ, γ ∈ (0,1); a threshold δ for feature extraction is set, δ ∈ (0,γ); the images to be stitched both have width w and height h; the overlap region of an image to be stitched lies on its top, bottom, left or right side, and the corresponding regions from which features need to be extracted are denoted Ω_t, Ω_b, Ω_l, Ω_r, with the specific value ranges:
Ω_t = {(x,y) | 0 ≤ x ≤ w, 0 ≤ y ≤ δh},
Ω_b = {(x,y) | 0 ≤ x ≤ w, (1-δ)h ≤ y ≤ h},
Ω_l = {(x,y) | 0 ≤ x ≤ δw, 0 ≤ y ≤ h},
Ω_r = {(x,y) | (1-δ)w ≤ x ≤ w, 0 ≤ y ≤ h}.
Further defined, the method of determining the direction of each feature point descriptor through the intensity centroid in S2 is as follows:
set the position of the feature point in the image to be stitched as O, and define the moments of the neighborhood B in which the feature point lies as:
m_pq = Σ_{(x,y)∈B} x^p y^q I(x,y),
where p,q ∈ {0,1} and I(x,y) is the pixel brightness; the moments define the brightness center of the neighborhood as:
C = (m10/m00, m01/m00);
the vector OC pointing from the feature point position O to the brightness center C is obtained, and the direction of the feature region is thus defined as:
θ = atan2(m01, m10),
where atan2 is the quadrant-aware version of arctan, i.e. the output is the angle between the vector OC and the positive direction of the X axis.
Further defined, the modified BRISK algorithm described in S3 is as follows:
defining a corresponding binary test at the smoothed pixel block p as:
Figure BDA0003192953770000035
wherein P (x) is the luminance of the pixel block P at point x, and finally an n-dimensional binary vector describing the features is obtained:
Figure BDA0003192953770000036
Another aspect of the invention: the image stitching method based on the above feature extraction method comprises the following steps:
(1) image preprocessing: input the images I_A and I_B to be stitched and apply image rotation, image enhancement and smoothing preprocessing;
(2) ORB feature extraction: obtain the binary feature strings by the feature extraction method described above;
(3) mismatch elimination: obtain feature point pairs through the k-nearest-neighbor algorithm, then screen the feature points to be matched in I_A and I_B through the random sample consensus algorithm, eliminating a large number of mismatches; taking the Euclidean distance between feature descriptors as the main reference for feature registration, select the better-matching feature points by setting a threshold t: for each feature point in the image to be stitched, search the image to be matched for its nearest and second-nearest potential matching points, and when the nearest distance d1 and the second-nearest distance d2 satisfy the inequality d1/d2 < t, the nearest point is regarded as a correct feature matching point;
(4) image registration: after at least 4 pairable point pairs are obtained, solve the transformation model between the images to be registered through the following formula, and apply the solved matrix parameters to the image I_B to be stitched to obtain the transformed image I'_B:
(x', y', 1)^T = H (x, y, 1)^T, with H = [h11 h12 h13; h21 h22 h23; h31 h32 1],
where (x, y, 1)^T denotes the homogeneous coordinates of a feature point of the image to be stitched and (x', y', 1)^T denotes the homogeneous coordinates of the feature point registered with it;
(5) image fusion: take the transformed image I'_B and the image I_A to be stitched as input; compute formula (1) at the pixels of I'_B and I_A to obtain the transformation distances d1 and d2, then obtain the α used for Alpha fusion and substitute it into formula (2); the fused image, i.e. the final image, is obtained according to the Alpha fusion algorithm, where formulas (1) and (2) are:
d_i(x,y) = distance from (x,y) to the boundary of the region {(x,y) | I_i(x,y) ≠ 0},    (1)
α(x,y) = d1(x,y) / (d1(x,y) + d2(x,y)),
I_i(x,y) = α(x,y)I_i1(x,y) + [1-α(x,y)]I_i2(x,y)    (2).
The invention also provides a computer device comprising a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, the processor executes the improved region-based feature extraction method according to any one of claims 1 to 4.
The invention also provides a computer device comprising a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, the processor executes the image stitching method according to claim 5.
The invention has the beneficial effects that:
on the premise of ensuring the splicing precision, the invention improves the splicing speed of aerial images, and has the following specific effects:
By improving the feature extraction scheme, an improved region-based feature extraction method is obtained and applied to the feature-based image stitching method; a dedicated comparison experiment demonstrates the good performance of the algorithm in aerial image stitching.
The improved method is applied to the SIFT-based, SURF-based and ORB-based image stitching algorithms, and simulation experiments show: the improved region-based feature extraction algorithm effectively reduces the number of extracted features, is applicable to various image stitching algorithms, and effectively reduces the time required for stitching while preserving stitching precision. For the SIFT-based stitching algorithm, whose feature descriptors are relatively complex, the improved algorithm is at least 2 times faster than the original algorithm across twenty sets of simulation experiments; when a large number of pictures are stitched the speed advantage is especially significant, the total time of the improved stitching being less than 1/4 of that of the original algorithm. The improved algorithm also performs very well in SURF-based stitching simulation: the total stitching time is 1/2 of the original algorithm, while the average PSNR value is consistent with the original algorithm, so stitching precision is not reduced as speed improves. In the ORB-based simulation, even though the ORB feature extraction operator is already computationally cheap and fast, so that the original ORB-based stitching is much faster than the two previous algorithms, the improved region-based feature extraction algorithm still speeds it up further, saving about 20% of the time of the original algorithm over 20 pairs of stitched images without changing the average PSNR value much; that is, stitching is accelerated without noticeably affecting precision.
The method can be widely applied to the splicing processing of aerial images. The global image of the aerial photographing area can be rapidly and accurately obtained.
Drawings
FIG. 1 is a basic flow chart for image stitching;
fig. 2 is an exemplary diagram of image transformation effect, in which a in fig. 2 is an original image, b is rigid transformation, c is similarity transformation, d is affine transformation, and e is perspective transformation;
FIG. 3 is a diagram of image stitching effects; in fig. 3, a is an image to be stitched, b is a stitching effect graph based on SIFT features, c is a stitching effect graph based on SURF features, and d is a stitching effect graph based on ORB features;
FIG. 4 is a diagram of the serpentine ('弓'-shaped) line shooting path;
FIG. 5 is a simulation of an aerial photography process; wherein a in fig. 5 is a presentation diagram of the shooting process, and b is a return pattern type effect diagram;
FIG. 6 is a schematic diagram of a feature extraction area;
fig. 7 is a panorama stitching result diagram, where a in fig. 7 is an image to be stitched, and b is an improved ORB-based image stitching effect diagram.
Detailed Description
Example 1: improved region-based feature extraction method
There are many methods for implementing image stitching, and the specific details of different algorithm implementations are different to some extent, but the implementation steps included in the methods are approximately the same. Generally, image stitching is mainly performed according to the flow shown in fig. 1.
The general steps of image stitching are as follows:
(1) feature extraction
By analyzing the image, the position coordinates of the feature points are found as solutions satisfying the corresponding extreme-value conditions; at the same time, to describe each feature point, a corresponding feature descriptor is constructed as its description vector. For the subsequent tasks, the extracted features should remain unchanged under uneven illumination, image translation, rotation and scaling. The feature detection algorithms that satisfy these conditions and perform well are mainly the SIFT operator, the SURF operator, and the ORB feature detection operator, which introduce a scale space.
(2) Image registration
Image registration maps the two images to be registered onto one another by constructing a spatial transformation model, so that coordinate points at the same spatial position finally coincide, thereby matching image pixels. Feature-based image registration first determines the feature points and feature vectors of the images to be stitched, selects feature point pairs that satisfy the conditions through a corresponding algorithm, and finally solves for the corresponding transformation model.
Let I1(x,y) and I2(u,v) denote the gray values of the pixels (x,y) and (u,v) in images I1 and I2 respectively; the spatial coordinate transformation between the registered images is given by:
I2(u,v) = I1(f(x,y))
some spatial transformation models commonly used at present include rigid transformation, similarity transformation, affine transformation, perspective transformation, and the like. The perspective transformation model is the most general transformation model, and the specific transformation effect of each transformation model is shown in fig. 2.
According to the camera imaging principle, when the camera moves under certain conditions, the transformation between the coordinates of different images of the same scene collected by the camera is a 3 × 3 matrix called the homography matrix. The transformation between images generally belongs to perspective transformation, and the homography matrix of the perspective transformation model is given below.
Under perspective transformation, parallel straight lines in the image need no longer remain parallel: lines that were originally parallel may converge toward a point at infinity. The transformed result is shown in diagram e of fig. 2. The corresponding normalized homography matrix is:
H = [h11 h12 h13; h21 h22 h23; h31 h32 1].
The transformation has 8 degrees of freedom; for a perspective transformation with 8 degrees of freedom, at least 4 groups of non-collinear feature matching pairs are needed to compute all parameters of the matrix.
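As an illustration of this point, a short OpenCV sketch that recovers the 8 homography parameters from exactly 4 non-collinear correspondences (the coordinate values are made up for illustration):

```python
import cv2
import numpy as np

# Four non-collinear correspondences pin down the 8 free parameters of the
# homography exactly.
src = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
dst = np.float32([[10, 5], [115, 12], [108, 118], [3, 96]])
H = cv2.getPerspectiveTransform(src, dst)  # 3x3 matrix with h33 normalized to 1
print(H)
```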
(3) Image fusion
Due to the influence of different shooting angles, illumination and shooting environments, if images are directly spliced, obvious splicing seams often appear, and blurring and distortion may appear in an overlapping area. In order to achieve a good stitching effect, a proper image fusion method needs to be selected.
Considering the characteristics of the images shot here, Alpha fusion is used to realize image fusion in the stitching simulation. Alpha fusion involves an important concept, the Alpha channel. A normally captured picture has only the three RGB channels; besides these three primary color channels used to describe a digital image, the channel used to represent the transparency of each pixel is called the Alpha channel. Under the Alpha channel concept, a pixel is represented by four channels (r, g, b, α), where α(x,y) ∈ [0,1] represents the transparency at pixel (x,y), graded into 256 levels: pure white is opaque (α = 1, corresponding to level 255) and pure black is completely transparent (α = 0, corresponding to level 0).
To perform image fusion, the two images to be fused are regarded as foreground and background respectively, and the foreground image is extracted from a single background color to obtain a foreground image with an Alpha channel, called the Mask. On the Mask, α(x,y) = 1 at every pixel inside the image and α(x,y) = 0 outside the image. To suppress jagged edges at the image boundary, the Alpha values at the edge pixels of the Mask satisfy α(x,y) ∈ (0,1).
After the Alpha value of each pixel is determined, the following formula is evaluated on the images to be fused for each of the three RGB channel values, and the three channel components are finally combined into one output pixel.
Ii(x,y)=α(x,y)Ii1(x,y)+[1-α(x,y)]Ii2(x,y)
where α(x,y) ∈ [0,1] and i = 1,2,3 indexes the three pixel channels.
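A hedged NumPy sketch of this per-channel compositing formula (the function name and array shapes are assumptions, not from the text):

```python
import numpy as np

def composite(fg, bg, alpha):
    # fg, bg: (H, W, 3) float arrays; alpha: (H, W) weights in [0, 1].
    a = alpha[..., None]            # one alpha per pixel, shared by R, G, B
    return a * fg + (1.0 - a) * bg  # Ii = alpha*Ii1 + (1 - alpha)*Ii2
```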
Under the same environment configuration and parameter settings, preliminary image stitching simulation experiments were carried out on aerial images with a resolution of 7952 × 5304, following the image stitching flows based on SIFT, SURF and ORB features. The experiment compares the performance of the three algorithms on aerial image stitching; the stitching effects are shown in fig. 3 and the simulation results in table 1.
TABLE 1 comparison of results of aerial image stitching simulation experiments with three different algorithms
From the image splicing effect shown in fig. 3, the three splicing effects are not very different visually, and the splicing seam is not obvious after image fusion, so that the splicing effect is ideal.
The performance of the image stitching algorithm is judged by taking the peak signal-to-noise ratio (PSNR) of the stitched image as a main basis, and the specific definition of the PSNR is shown in the following formula:
PSNR = 10 log10[(2^n - 1)^2 / MSE], MSE = (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} [Io(i,j) - I(i,j)]^2,
where M and N are the width and height of the image; n, generally taken as 8, is the number of bits per pixel; Io denotes the original image and I the stitched and fused image. The higher the PSNR value, the better the image stitching and fusion effect.
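A direct transcription of this definition into NumPy, assuming 8-bit images by default:

```python
import numpy as np

def psnr(reference, stitched, n_bits=8):
    diff = reference.astype(np.float64) - stitched.astype(np.float64)
    mse = np.mean(diff ** 2)            # mean squared error over the M*N pixels
    peak = (2 ** n_bits - 1) ** 2       # (2^n - 1)^2, i.e. 255^2 for 8-bit images
    return 10.0 * np.log10(peak / mse)
```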
For the evaluation, 20 overlapping pictures are cut out, the cut images are stitched by the different stitching algorithms, each stitched image is compared with the corresponding original image to obtain its PSNR value, and the PSNR values are finally averaged for evaluation. The time-related index values and the stitching-quality index values recorded during the simulation experiments are listed in table 1.
As can be seen from the data in table 1, in this application scenario the ORB-based image stitching algorithm has a great speed advantage while differing little in precision from the SIFT-based algorithm, so the invention builds its improvement on the ORB-based image stitching algorithm.
The results in table 1 show that the computation of image stitching is dominated by image registration, so improving the registration algorithm directly affects the stitching speed. Image registration only needs 4 sufficiently well matched pairs of feature points, yet the existing approach extracts features from the two whole images, producing a large number of feature point pairs. Feature extraction and description requires complex calculations such as convolution, giving the algorithm great time complexity. The data in table 1 verify that feature extraction takes approximately 3/4 of the total time of the image stitching algorithm.
In combination with the real application scene requirement, when shooting in real time, the unmanned aerial vehicle shoots and stores shooting data according to the shooting path shown in fig. 4. The presentation diagram and the return picture style effect diagram of the data acquisition process of unmanned aerial vehicle shooting are shown in fig. 5.
According to the distribution characteristics of the overlapping areas of the images to be stitched, only the feature points in the overlapping parts are really useful for feature matching. Therefore, before feature extraction, the images to be stitched are first block-matched, and ORB features are then extracted for each block, which fundamentally reduces the amount of calculation and improves the timeliness of the algorithm. The feature extraction area is shown schematically in fig. 6.
Let the overlap rate of the images to be stitched be γ, γ ∈ (0,1); set a threshold δ for feature extraction, δ ∈ (0,γ); let the image width be w and the height h. The overlap region of an image to be stitched lies on its top, bottom, left or right side, and the corresponding regions from which features need to be extracted are denoted Ω_t, Ω_b, Ω_l, Ω_r, with the specific value ranges:
Ω_t = {(x,y) | 0 ≤ x ≤ w, 0 ≤ y ≤ δh},
Ω_b = {(x,y) | 0 ≤ x ≤ w, (1-δ)h ≤ y ≤ h},
Ω_l = {(x,y) | 0 ≤ x ≤ δw, 0 ≤ y ≤ h},
Ω_r = {(x,y) | (1-δ)w ≤ x ≤ w, 0 ≤ y ≤ h}.
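For illustration, a hedged Python sketch of restricting the feature search to these bands; the helper name and the side argument are mine, and the slices follow the Ω regions reconstructed above:

```python
def overlap_band(img, delta, side):
    # Return the band of `img` (the Omega region) in which features are
    # extracted; `delta` is the threshold from (0, gamma), `side` says on
    # which side of this image the overlap with its neighbour lies.
    h, w = img.shape[:2]
    if side == "top":     # Omega_t
        return img[:int(delta * h), :]
    if side == "bottom":  # Omega_b
        return img[h - int(delta * h):, :]
    if side == "left":    # Omega_l
        return img[:, :int(delta * w)]
    if side == "right":   # Omega_r
        return img[:, w - int(delta * w):]
    raise ValueError("side must be top/bottom/left/right")
```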
Combining the characteristics of the aerial images returned by the UAV with the ORB-based image stitching algorithm, an improved region-based feature extraction method is proposed, with the following specific steps:
S1, extract features from the images I_A and I_B to be stitched with the FAST-9 algorithm and obtain the coordinates of the feature points, then rank the feature points from best to worst with Harris corner detection and keep the better-performing corners.
In S1, when the FAST-9 algorithm extracts features from the images I_A and I_B to be stitched, the value range of feature extraction is as follows:
the overlap rate between the images I_A and I_B to be stitched is γ, γ ∈ (0,1); a threshold δ for feature extraction is set, δ ∈ (0,γ); the images to be stitched both have width w and height h; the overlap region of an image to be stitched lies on its top, bottom, left or right side, and the corresponding regions from which features need to be extracted are denoted Ω_t, Ω_b, Ω_l, Ω_r, with the specific value ranges:
Ω_t = {(x,y) | 0 ≤ x ≤ w, 0 ≤ y ≤ δh},
Ω_b = {(x,y) | 0 ≤ x ≤ w, (1-δ)h ≤ y ≤ h},
Ω_l = {(x,y) | 0 ≤ x ≤ δw, 0 ≤ y ≤ h},
Ω_r = {(x,y) | (1-δ)w ≤ x ≤ w, 0 ≤ y ≤ h}.
S2, determine the direction of each feature point descriptor through the intensity centroid:
set the position of the feature point in the image to be stitched as O, and define the moments of the neighborhood B in which the feature point lies as:
m_pq = Σ_{(x,y)∈B} x^p y^q I(x,y),
where p,q ∈ {0,1} and I(x,y) is the pixel brightness; the moments define the brightness center of the neighborhood as:
C = (m10/m00, m01/m00);
the vector OC pointing from the feature point position O to the brightness center C is obtained, and the direction of the feature region is thus defined as:
θ = atan2(m01, m10),
where atan2 is the quadrant-aware version of arctan, i.e. the output is the angle between the vector OC and the positive direction of the X axis.
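A minimal NumPy sketch of this orientation computation on a square patch centred at O (the patch size is a free parameter, not fixed by the text):

```python
import numpy as np

def orientation(patch):
    # patch: square grayscale neighbourhood centred on the feature point O.
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]].astype(np.float64)
    # Centre the coordinates on O so the moments are taken relative to the
    # keypoint position.
    xs -= (patch.shape[1] - 1) / 2.0
    ys -= (patch.shape[0] - 1) / 2.0
    m01 = np.sum(ys * patch)   # m01 = sum of y * I(x, y)
    m10 = np.sum(xs * patch)   # m10 = sum of x * I(x, y)
    return np.arctan2(m01, m10)  # theta = atan2(m01, m10)
```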
S3, obtain a binary feature string, i.e. the extracted feature information, through the improved BRISK algorithm.
The improved BRISK algorithm is as follows:
define the corresponding binary test at the smoothed pixel block p as:
τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise,
where p(x) is the brightness of the pixel block p at point x; finally an n-dimensional binary vector describing the feature is obtained:
f_n(p) = Σ_{1≤i≤n} 2^(i-1) τ(p; x_i, y_i).
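A hedged sketch of the binary test; the sampling pattern of the patent's improved BRISK is not spelled out here, so a fixed random pattern stands in for it:

```python
import numpy as np

def binary_descriptor(patch, pattern):
    # patch: smoothed grayscale pixel block p; pattern: array of shape
    # (n, 2, 2) holding n test-point pairs (x_i, y_i) as row/col indices.
    bits = np.empty(len(pattern), dtype=np.uint8)
    for i, ((r1, c1), (r2, c2)) in enumerate(pattern):
        bits[i] = 1 if patch[r1, c1] < patch[r2, c2] else 0  # tau(p; x, y)
    return np.packbits(bits)  # the n-bit binary feature string, packed into bytes

# Example: n = 256 random test pairs inside a 31x31 patch.
rng = np.random.default_rng(0)
pattern = rng.integers(0, 31, size=(256, 2, 2))
```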
example 2: image stitching method based on feature extraction method described in embodiment 1
(1) Image pre-processing
Input the images I_A and I_B to be stitched, and apply image rotation, image enhancement and smoothing preprocessing to I_A and I_B respectively.
(2) ORB feature extraction
The binary feature string is obtained according to the feature extraction method described in embodiment 1.
(3) Eliminating mismatch
Obtain feature point pairs through the k-nearest-neighbor algorithm, then screen the feature points to be matched in I_A and I_B through the random sample consensus algorithm (RANSAC), eliminating a large number of mismatches. Meanwhile, taking the Euclidean distance between feature descriptors as the main reference for feature registration, select the better-matching feature points by setting a threshold t: for each feature point in the image to be stitched, search the image to be matched for its nearest and second-nearest potential matching points; when the nearest distance d1 and the second-nearest distance d2 satisfy the inequality d1/d2 < t, the nearest point is regarded as a correct feature matching point.
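A hedged OpenCV sketch of this step; note that OpenCV compares binary strings with the Hamming rather than the Euclidean distance, and t = 0.7 is an assumed value, since the text only says a threshold t is set:

```python
import cv2
import numpy as np

def match_and_filter(desc_a, desc_b, kps_a, kps_b, t=0.7):
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)      # Hamming distance for binary strings
    knn = matcher.knnMatch(desc_a, desc_b, k=2)    # nearest and second-nearest matches
    # Ratio test: keep a match only if d1/d2 < t.
    good = [m for m, n in knn if m.distance < t * n.distance]
    # RANSAC keeps only the matches consistent with a single homography that
    # maps points of I_B into I_A's coordinate frame.
    src = np.float32([kps_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kps_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return H, inliers
```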
(4) Image registration
After at least 4 pairable point pairs are obtained, the transformation model between the images to be registered is solved through the following formula, where (x, y, 1)^T denotes the homogeneous coordinates of a feature point of the image to be stitched and (x', y', 1)^T denotes the homogeneous coordinates of the feature point registered with it:
(x', y', 1)^T = H (x, y, 1)^T, with H = [h11 h12 h13; h21 h22 h23; h31 h32 1].
The solved matrix parameters are applied to the image I_B to be stitched to obtain the transformed image I'_B.
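A one-call OpenCV sketch of applying the solved H; the canvas size is a free choice, made large enough that I'_B and I_A can later be fused on one plane:

```python
import cv2

def register(img_b, H, canvas_w, canvas_h):
    # Warp I_B with the solved homography H into I_A's coordinate frame.
    return cv2.warpPerspective(img_b, H, (canvas_w, canvas_h))  # I'_B
```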
(5) Image fusion
Take the transformed image I'_B and the image I_A to be stitched as input; compute formula (1) at the pixels of I'_B and I_A to obtain the transformation distances d1 and d2, then obtain the α used for Alpha fusion and substitute it into formula (2). The fused image, i.e. the final image, is obtained according to the Alpha fusion algorithm. Formulas (1) and (2) are:
d_i(x,y) = distance from (x,y) to the boundary of the region {(x,y) | I_i(x,y) ≠ 0},    (1)
α(x,y) = d1(x,y) / (d1(x,y) + d2(x,y)),
I_i(x,y) = α(x,y)I_i1(x,y) + [1-α(x,y)]I_i2(x,y)    (2).
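A hedged OpenCV/NumPy sketch of this fusion step, under my reading of formula (1) as a distance-to-border weighting; it assumes 3-channel images and that I_A has already been placed on the same canvas as I'_B:

```python
import cv2
import numpy as np

def alpha_fuse(img_a, img_b_warped):
    def border_distance(img):
        # Distance of every valid (non-zero) pixel to the nearest border of
        # the valid region, i.e. formula (1) as read above.
        valid = (img.sum(axis=2) > 0).astype(np.uint8)
        return cv2.distanceTransform(valid, cv2.DIST_L2, 3).astype(np.float64)

    d1 = border_distance(img_a)
    d2 = border_distance(img_b_warped)
    denom = d1 + d2
    alpha = np.divide(d1, denom, out=np.zeros_like(d1), where=denom > 0)
    # Formula (2): per-channel alpha blend.
    fused = alpha[..., None] * img_a + (1.0 - alpha[..., None]) * img_b_warped
    return fused.astype(img_a.dtype)
```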
(6) quality evaluation-Peak Signal-to-noise ratio (PSNR)
The peak signal-to-noise ratio is one of the indexes for objective evaluation of image quality. It reflects how closely the processed image approximates the reference image, but its error-sensitivity analysis does not account for the visual characteristics of the human eye, so subjective visual perception must also be considered when comparing experimental results in order to evaluate image quality comprehensively and objectively. PSNR is defined as follows:
PSNR = 10 log10[(2^n - 1)^2 / MSE], MSE = (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} [Io(i,j) - I(i,j)]^2,
where M and N are the width and height of the image; n, generally taken as 8, is the number of bits per pixel; Io denotes the original image and I the stitched and fused image. The larger the PSNR value, the better the image stitching and fusion effect.
Algorithm simulation and performance comparison:
To ensure that the experimental results verify a general rule to some extent and to avoid chance outcomes, image stitching simulation and performance evaluation are carried out on 20 groups of images. The environment and parameter configuration of this simulation experiment are shown in table 2.
Table 2 Environment and parameter configuration
The 20 aerial images with a resolution of 7952 × 5304 are cut into 20 images to be stitched at an overlap rate of 75%, and the original images serve as reference images to measure the performance of the stitching algorithms. Under the environment and parameter configuration of table 2, the improved region-based feature extraction algorithm is applied to the SIFT-based, SURF-based and ORB-based image stitching algorithms respectively, and simulation experiments are carried out to verify its performance. The running results are shown in table 3.
TABLE 3 data presentation of improved feature extraction algorithm simulation results based on three algorithms
The results in table 3 show that the designed mismatch elimination algorithm effectively reduces the number of matched feature points, reduces mismatching, and improves the speed and precision of image stitching to a certain extent. The improved region-based feature extraction algorithm effectively reduces the number of extracted features, is applicable to various image stitching algorithms, and effectively reduces the time required for stitching while preserving stitching precision. For the SIFT-based stitching algorithm, whose feature descriptors are relatively complex, the improved algorithm is at least 2 times faster than the original algorithm across the twenty sets of simulation experiments; when a large number of pictures are stitched the speed advantage is especially significant, the total time of the improved stitching being less than 1/4 of that of the original algorithm. The improved algorithm also performs very well in SURF-based stitching simulation: the total stitching time is 1/2 of the original algorithm, while the average PSNR value is consistent with the original algorithm, so stitching precision is not reduced as speed improves. In the ORB-based simulation, even though the ORB feature extraction operator is already computationally cheap and fast, so that the original ORB-based stitching is much faster than the two previous algorithms, the improved region-based feature extraction algorithm still speeds it up further, saving about 20% of the time of the original algorithm over 20 pairs of stitched images without changing the average PSNR value much; that is, stitching is accelerated without noticeably affecting precision.
Using the improved ORB-based image stitching method, the invention stitches 12 aerial images with a resolution of 5964 × 5304; the resulting panorama is shown in fig. 7. The improved algorithm takes only 15.25 seconds to obtain a seam-free high-resolution panorama of 10120 × 7951, showing that the improved method performs well in aerial image stitching and balances speed and precision.
The improved algorithm of the invention loses slightly in precision, but the loss is not visually obvious. Meanwhile, the algorithm effectively improves the stitching speed, so it is very suitable for rapidly acquiring post-disaster panoramic images and is superior to existing algorithms.

Claims (7)

1. An improved region-based feature extraction method, characterized by comprising the following steps:
S1, use the FAST-9 algorithm to extract features from the images I_A and I_B to be stitched and obtain the coordinates of the feature points, then rank the feature points from best to worst with Harris corner detection and keep the better-performing corners;
S2, determine the direction of each feature point descriptor through the intensity centroid;
S3, obtain a binary feature string, i.e. the extracted feature information, through the improved BRISK algorithm.
2. The feature extraction method according to claim 1, characterized in that, when the FAST-9 algorithm extracts features from the images I_A and I_B to be stitched in S1, the value range of feature extraction is as follows:
the overlap rate between the images I_A and I_B to be stitched is γ, γ ∈ (0,1); a threshold δ for feature extraction is set, δ ∈ (0,γ); the images I_A and I_B to be stitched both have width w and height h; the overlap region of an image to be stitched lies on its top, bottom, left or right side, and the corresponding regions from which features need to be extracted are denoted Ω_t, Ω_b, Ω_l, Ω_r, with the specific value ranges:
Ω_t = {(x,y) | 0 ≤ x ≤ w, 0 ≤ y ≤ δh},
Ω_b = {(x,y) | 0 ≤ x ≤ w, (1-δ)h ≤ y ≤ h},
Ω_l = {(x,y) | 0 ≤ x ≤ δw, 0 ≤ y ≤ h},
Ω_r = {(x,y) | (1-δ)w ≤ x ≤ w, 0 ≤ y ≤ h}.
3. The feature extraction method according to claim 1, characterized in that the method of determining the direction of each feature point descriptor through the intensity centroid in S2 is as follows:
set the position of the feature point in the image to be stitched as O, and define the moments of the neighborhood B in which the feature point lies as:
m_pq = Σ_{(x,y)∈B} x^p y^q I(x,y),
where p,q ∈ {0,1} and I(x,y) is the pixel brightness; the moments define the brightness center of the neighborhood as:
C = (m10/m00, m01/m00);
the vector OC pointing from the feature point position O to the brightness center C is obtained, and the direction of the feature region is thus defined as:
θ = atan2(m01, m10),
where atan2 is the quadrant-aware version of arctan, i.e. the output is the angle between the vector OC and the positive direction of the X axis.
4. The feature extraction method according to claim 1, characterized in that the improved BRISK algorithm in S3 is as follows:
define the corresponding binary test at the smoothed pixel block p as:
τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise,
where p(x) is the brightness of the pixel block p at point x; finally an n-dimensional binary vector describing the feature is obtained:
f_n(p) = Σ_{1≤i≤n} 2^(i-1) τ(p; x_i, y_i).
5. An image stitching method based on the feature extraction method of claim 1, characterized in that the image stitching method comprises the following steps:
(1) image preprocessing: input the images I_A and I_B to be stitched and apply image rotation, image enhancement and smoothing preprocessing;
(2) ORB feature extraction: obtain the binary feature strings according to the feature extraction method of claim 1;
(3) mismatch elimination: obtain feature point pairs through the k-nearest-neighbor algorithm, then screen the feature points to be matched in I_A and I_B through the random sample consensus algorithm, eliminating a large number of mismatches; taking the Euclidean distance between feature descriptors as the main reference for feature registration, select the better-matching feature points by setting a threshold t: for each feature point in the image to be stitched, search the image to be matched for its nearest and second-nearest potential matching points, and when the nearest distance d1 and the second-nearest distance d2 satisfy the inequality d1/d2 < t, the nearest point is regarded as a correct feature matching point;
(4) image registration: after at least 4 pairable point pairs are obtained, solve the transformation model between the images to be registered through the following formula, and apply the solved matrix parameters to the image I_B to be stitched to obtain the transformed image I'_B:
(x', y', 1)^T = H (x, y, 1)^T, with H = [h11 h12 h13; h21 h22 h23; h31 h32 1],
where (x, y, 1)^T denotes the homogeneous coordinates of a feature point of the image to be stitched and (x', y', 1)^T denotes the homogeneous coordinates of the feature point registered with it;
(5) image fusion: take the transformed image I'_B and the image I_A to be stitched as input; compute formula (1) at the pixels of I'_B and I_A to obtain the transformation distances d1 and d2, then obtain the α used for Alpha fusion and substitute it into formula (2); the fused image, i.e. the final image, is obtained according to the Alpha fusion algorithm, where formulas (1) and (2) are:
d_i(x,y) = distance from (x,y) to the boundary of the region {(x,y) | I_i(x,y) ≠ 0},    (1)
α(x,y) = d1(x,y) / (d1(x,y) + d2(x,y)),
I_i(x,y) = α(x,y)I_i1(x,y) + [1-α(x,y)]I_i2(x,y)    (2).
6. A computer device, characterized by comprising a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, the processor executes the improved region-based feature extraction method according to any one of claims 1-4.
7. A computer device, characterized by comprising a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, the processor executes the image stitching method according to claim 5.
CN202110883189.3A 2021-08-02 2021-08-02 An improved feature extraction method and image stitching method based on the method Active CN113723465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110883189.3A CN113723465B (en) 2021-08-02 2021-08-02 An improved feature extraction method and image stitching method based on the method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110883189.3A CN113723465B (en) 2021-08-02 2021-08-02 An improved feature extraction method and image stitching method based on the method

Publications (2)

Publication Number Publication Date
CN113723465A true CN113723465A (en) 2021-11-30
CN113723465B CN113723465B (en) 2024-04-05

Family

ID=78674730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110883189.3A Active CN113723465B (en) 2021-08-02 2021-08-02 An improved feature extraction method and image stitching method based on the method

Country Status (1)

Country Link
CN (1) CN113723465B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117330035A (en) * 2023-09-28 2024-01-02 海南集思勘测规划设计有限公司 A land surveying and mapping method and system for dynamic remote sensing monitoring

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023086A (en) * 2016-07-06 2016-10-12 中国电子科技集团公司第二十八研究所 Aerial photography image and geographical data splicing method based on ORB feature matching
CN108961162A (en) * 2018-03-12 2018-12-07 北京林业大学 A kind of unmanned plane forest zone Aerial Images joining method and system
CN111080529A (en) * 2019-12-23 2020-04-28 大连理工大学 A Robust UAV Aerial Image Mosaic Method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023086A (en) * 2016-07-06 2016-10-12 中国电子科技集团公司第二十八研究所 Aerial photography image and geographical data splicing method based on ORB feature matching
CN108961162A (en) * 2018-03-12 2018-12-07 北京林业大学 A kind of unmanned plane forest zone Aerial Images joining method and system
CN111080529A (en) * 2019-12-23 2020-04-28 大连理工大学 A Robust UAV Aerial Image Mosaic Method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117330035A (en) * 2023-09-28 2024-01-02 海南集思勘测规划设计有限公司 A land surveying and mapping method and system for dynamic remote sensing monitoring

Also Published As

Publication number Publication date
CN113723465B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
Zhang et al. An image stitching algorithm based on histogram matching and SIFT algorithm
CN111915483B (en) Image stitching method, device, computer equipment and storage medium
US20130051626A1 (en) Method And Apparatus For Object Pose Estimation
CN106447601B (en) Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation
CN105809626A (en) Self-adaption light compensation video image splicing method
Mistry et al. Image stitching using Harris feature detection
Wang et al. A variational method for multiple-image blending
Zhang et al. Image stitching based on human visual system and SIFT algorithm
CN112634125A (en) Automatic face replacement method based on off-line face database
Yan et al. Deep learning on image stitching with multi-viewpoint images: A survey
CN110120013A (en) A kind of cloud method and device
Ruan et al. Image stitching algorithm based on SURF and wavelet transform
CN116132610A (en) Fully-mechanized mining face video stitching method and system
CN113723465A (en) Improved feature extraction method and image splicing method based on same
Xu et al. A two-stage progressive shadow removal network
Rong et al. Mosaicing of microscope images based on SURF
CN118135017A (en) Polarization synchronous positioning and mapping method for integrated double-branch superdivision network
CN115035281B (en) Rapid infrared panoramic image stitching method
CN117333659A (en) Multi-target detection method and system based on multi-camera and camera
CN113159169B (en) Image stitching method based on matching deformation and slitting optimization based on prior target feature points
CN114399423B (en) Image content removing method, system, medium, device and data processing terminal
Chand et al. Implementation of Panoramic Image Stitching using Python
CN112884649A (en) B-spline-based image stitching feature point extraction algorithm
Yao et al. An effective dual-fisheye lens stitching method based on feature points
CN117975546B (en) Fundus image feature point matching method based on improved feature descriptors and KNN search

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant