
CN101777129A - Image matching method based on feature detection - Google Patents


Info

Publication number
CN101777129A
Authority
CN
China
Prior art keywords
image
test image
feature
illumination
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200910241543A
Other languages
Chinese (zh)
Other versions
CN101777129B (en)
Inventor
谭铁牛 (Tan Tieniu)
黄凯奇 (Huang Kaiqi)
余轶南 (Yu Yinan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN2009102415431A, granted as CN101777129B
Publication of CN101777129A
Application granted
Publication of CN101777129B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract



The present invention is a feature-based image matching method with the following steps: the test image is matched against the reference image using a Harris-function-based feature detection algorithm and a nearest-neighbour matching algorithm, yielding a pose estimate of the test image relative to the reference image; the illumination of the target in the pose-estimated test image is then estimated from the reference image by computing the illumination change over the part of the test image contained in the reference image, and this illumination-change function is used to transform and correct the pose-estimated test image, giving an illumination-corrected test image; feature points are then detected and described on the illumination-corrected test image and the reference image, producing feature-point matches which, combined with the pose estimate of the test image, give the pose match of the illumination-corrected test image. Image matching is a low-level problem in computer vision and is of great significance to classic problems such as object tracking, scene modeling, image stitching, and image retrieval.


Description

An image matching method based on feature detection
Technical field
The invention belongs to the field of pattern recognition and relates to technologies such as image processing and computer vision, in particular to feature detection, image matching, image stitching, image retrieval, three-dimensional reconstruction, and augmented reality.
Background technology
Image matching is a fundamental problem in computer vision; an accurate and efficient image matching algorithm provides a solid low-level basis for solving other problems. With the development of technology and the gradual decline of hardware prices, cameras and video cameras have become among the most widely used devices in daily life. People's requirements for scene perception have shifted from two-dimensional to three-dimensional, i.e. perceiving the three-dimensional shape of objects and the 3D pose of scenes in the real world. Because real-world objects vary greatly, directly building three-dimensional models of actual objects or spatially calibrating scenes consumes a great deal of manpower and material resources. Modeling three-dimensional objects and reconstructing three-dimensional scenes has been one of the most actively studied research directions in recent years: objects and scenes are detected, recognized, and tracked in images captured by a camera, and their pose in three-dimensional space is estimated from the images. Although existing computerized three-dimensional reconstruction techniques are already widely used, several important problems remain worth investigating: accuracy, stability, and efficiency. Therefore, as a low-level problem in computer vision, research on image matching with respect to these three problems is particularly important. Addressing them, this invention develops an accurate, stable image matching algorithm that runs in real time, paving the way for practical applications.
An image contains points with good localization ability and discriminative power. People can easily pick out these representative points from an image and match them against another image. For a computer, however, detecting and matching feature points in an image is a very difficult problem.
The difficulties can generally be attributed to the following factors: illumination changes, viewpoint changes, blur introduced by the camera, changes caused by object pose, and so on. In recent years, methods addressing viewpoint change have attracted wide attention. Most existing image feature detection and matching methods operate on scale-space invariant feature points.
Summary of the invention
Existing methods based on image feature invariance are applicable when the difference between images is small. When the images differ greatly, and especially when the object pose changes substantially, their matching precision and stability are low. To solve this technical problem, the purpose of this invention is to provide a feature extraction and image matching algorithm based on image pose estimation.
To achieve this goal, the feature extraction and matching algorithm provided by the invention comprises three processes: pose estimation and transformation, illumination estimation and transformation, and image matching. The method comprises the following steps:
Step S1: match the test image against the reference image using the Harris-function-based feature detection algorithm (HLSIFD) and the nearest-neighbour matching algorithm, obtaining a pose estimate of the test image relative to the reference image;
Step S2: estimate, from the reference image, the illumination of the target in the pose-estimated test image; compute the illumination change over the part of the test image contained in the reference image, and apply this illumination-change function to transform and correct the pose-estimated test image, obtaining an illumination-corrected test image;
Step S3: perform feature point detection and description on the illumination-corrected test image and the reference image to obtain feature-point matches for the illumination-corrected test image; combined with the pose estimate of the test image, this gives the pose match of the illumination-corrected test image.
The concrete steps of the pose estimation are as follows:
Step S11: extract scale-invariant feature points from the reference image and the test image respectively, using the Harris-function-based feature detection algorithm;
Step S12: describe all feature points on the reference image and the test image with the scale-invariant feature description algorithm (SIFT): centred on each feature point, crop a square window of fixed size; for the image inside the window, compute each pixel's gradient, gradient direction, and gradient weight to obtain the description of each feature point; accumulate the descriptions into a multi-dimensional histogram, finally forming a multi-dimensional feature vector for each feature point;
Step S13: match the feature vectors of the feature points extracted from the reference image and the test image with the nearest-neighbour method;
Step S14: obtain the consensus set among all matched point pairs with the random sample consensus algorithm (RANSAC);
Step S15: compute the projection mapping matrix H1 between the reference image and the test image from the consensus set;
Step S16: apply a projective transformation to the test image using the projection mapping matrix H1, obtaining a coarse pose estimate of the test image.
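A minimal sketch of steps S11-S16 in Python with OpenCV. The patent's Harris-based HLSIFD detector has no public implementation, so OpenCV's SIFT detector stands in for it here; the RANSAC reprojection threshold and the ratio-test constant are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np

def estimate_pose(reference, test, ratio=0.7):
    """Steps S11-S16: detect, describe, match, RANSAC, warp."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(reference, None)  # S11-S12 on reference
    kp_t, des_t = sift.detectAndCompute(test, None)       # S11-S12 on test

    # S13: nearest-neighbour matching with the d1/d2 < t ratio test
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_t, des_r, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]

    # S14-S15: consensus set and projection mapping matrix H1 via RANSAC
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H1, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # S16: projectively transform the test image into the reference pose
    h, w = reference.shape[:2]
    warped = cv2.warpPerspective(test, H1, (w, h))
    return H1, warped
```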
The concrete steps of the illumination estimation are as follows:
Step S21: compute the grey-level histograms of the reference image and the test image;
Step S22: using the histogram specification method, compute the histogram transformation function L between the reference image and the test image;
Step S23: according to the transformation function L between the reference-image and test-image histograms, apply histogram specification to the test image, thereby performing illumination correction on the pose-corrected image.
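A minimal histogram-specification sketch for steps S21-S23, assuming 8-bit grayscale images. The transform L is built by matching cumulative grey-level histograms, the standard histogram-specification construction; the function name is illustrative, not from the patent.

```python
import numpy as np

def illumination_transform(reference, test):
    """Steps S21-S23: build the transform L and apply it to the test image."""
    # S21: grey-level histograms, accumulated into normalised CDFs
    cdf_r = np.cumsum(np.bincount(reference.ravel(), minlength=256)) / reference.size
    cdf_t = np.cumsum(np.bincount(test.ravel(), minlength=256)) / test.size

    # S22: transform function L -- for each test grey level j, the reference
    # grey level whose cumulative frequency best matches it
    L = np.searchsorted(cdf_r, cdf_t).clip(0, 255).astype(np.uint8)

    # S23: apply L as a lookup table to the pose-corrected test image
    return L[test]
```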
The concrete steps of the pose matching of the illumination-corrected test image are as follows:
Step S31: compute the scale-invariant feature points of the transformed, illumination-corrected test image;
Step S32: compute the feature vectors of the feature point descriptors;
Step S33: match these feature vectors against all feature points of the reference image with the nearest-neighbour method;
Step S34: find the consensus set among all matched pairs with the random sample consensus algorithm (RANSAC);
Step S35: compute the projection mapping matrix H2 between the transformed, illumination-corrected test image and the reference image from the consensus set;
Step S36: compute the projection mapping H between the illumination-corrected test image and the reference image: H = H2·H1, where H1 and H2 are the projection mapping matrices obtained above.
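Steps S31-S35 repeat the detection, matching, and RANSAC machinery on the corrected image, so a sketch can simply reuse the illustrative estimate_pose helper defined after step S16 above; step S36 is then a single matrix product. This assumes both H1 and H2 map test-image coordinates toward the reference image, as in the steps above.

```python
import numpy as np

def final_pose_match(reference, corrected, H1):
    """Steps S31-S36 on the illumination-corrected image."""
    H2, _ = estimate_pose(reference, corrected)  # S31-S35: detect, match, RANSAC
    H = H2 @ H1                                  # S36: overall mapping H = H2*H1
    return H / H[2, 2]                           # normalise the homography
```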
The described feature-based image matching method also comprises adding, after the pose estimation, an estimation of the illumination of the image.
With the method of the invention, image matching maintains high precision and stability even when both pose and illumination change considerably. Image matching is a low-level problem in computer vision; it generally serves as the underlying engine of other methods, supporting higher-level algorithms by providing high-quality matching results. Unlike other methods based on invariant feature extraction, the invention does not use invariant feature points directly as the basis for matching; it first estimates the pose and illumination relation between the images, corrects one of the images accordingly, and finally performs matching on the corrected image. The method is easy to implement and use, and can mainly be applied in the following areas:
(1) Surveillance and tracking systems based on feature tracking: helping a system track targets in a scene, obtaining the behavioural semantics of targets of interest, and thereby understanding events in the scene.
(2) Three-dimensional object modeling based on image feature point matching: used for morphological analysis and three-dimensional reconstruction of complex three-dimensional objects, of great value to virtual reality.
(3) Augmented reality systems based on feature tracking: used to obtain the three-dimensional spatial information of a scene by tracking static feature points in images, thereby augmenting the scene content with virtual objects.
Description of drawings
Fig. 1 shows the flow chart of the image matching algorithm based on image pose and illumination, comprising three parts: pose estimation and correction, illumination estimation and correction, and image matching.
Fig. 2 is a schematic diagram of the pose estimation and correction results.
Fig. 3 is a schematic diagram of the illumination estimation and correction results.
Fig. 4 shows an example of the image matching results.
Embodiment
The detailed problems involved in the method of the invention are described below with reference to the accompanying drawings. It should be noted that the described embodiments are intended only to facilitate understanding of the invention and do not limit it in any way.
The image matching algorithm based on image pose and illumination estimation improves the accuracy, stability, and real-time performance of image matching. Using the estimates of pose and illumination, the invention implements an image matching system; Fig. 1 shows the flow chart of the algorithm, which is divided into three parts: pose estimation and correction, illumination estimation and correction, and image matching:
The pose estimation and correction comprises the steps of: extracting scale-invariant feature points from the reference image and the first test image respectively; describing all the feature points and computing their feature vectors; matching, with the nearest-neighbour method, the feature vectors of the feature points extracted from the reference image and the first test image; obtaining the consensus set among all matched point pairs with the random sample consensus algorithm (RANSAC); computing the projection mapping matrix between the reference image and the first test image from the consensus set; and correcting the pose of the second test image via this projection mapping matrix.
The illumination estimation and correction comprises the steps of: computing the grey-level histogram of the part of the first matched test image contained in the reference image; using the histogram specification method, with the grey-level histogram of the region contained in the reference image as the standard, computing the histogram transformation function L between the reference image and the second test image; and, according to L, applying histogram specification to the second test image, thereby performing illumination correction on the pose-corrected image.
The image matching part comprises the steps of: computing the scale-invariant feature points of the corrected image; computing the feature vectors of the feature point descriptors; matching these feature vectors against all feature vectors of the reference image with the nearest-neighbour method; finding the consensus set among all matched pairs with the random sample consensus algorithm (RANSAC); computing the projection mapping matrix H2 between the corrected matching image and the reference image from the consensus set; and computing the projection mapping matrix H between the second test image and the reference image: H = H2·H1, where H1 and H2 are the projection mapping matrices above.
The minimum hardware configuration required by the method is a computer with a P4 3.0 GHz CPU and 512 MB of memory. On hardware of this level, the method is implemented in C++. The key steps of the method are described in detail one by one below; the basic steps are as already given, and their concrete form is as follows:
First, pose estimation and correction:
The pose relation between two images can be obtained from the spatial positions of the matching feature points between the reference image and the test image. A feature point is a point that can be stably detected and has discriminative power. Feature points are detected with the scale-invariant method SIFT. Scale-invariant feature point detection algorithms are widely used in image stitching, tracking, and similar algorithms, and offer high stability and adaptability. SIFT feature detection is performed on the reference image and the test image respectively; suppose the detected feature point coordinate sets are (U_r, U_t):
U_r = {u_r1, u_r2, u_r3, ..., u_rn}: the feature point coordinate set of the reference image, where u_rn is an element of the set and n is the number of reference image feature points;
U_t = {u_t1, u_t2, u_t3, ..., u_tm}: the feature point coordinate set of the test image, where u_tm is an element of the set and m is the number of test image feature points.
Each feature point is described with the SIFT descriptor, giving the feature vector sets (V_r, V_t) of the feature point descriptors:
V_r = {v_r1, v_r2, v_r3, ..., v_rn}: the feature vector set of the reference image's feature point descriptors, where v_rn is an element of the set and n, the number of feature vectors, equals the number of reference image feature points;
V_t = {v_t1, v_t2, v_t3, ..., v_tm}: the feature vector set of the test image's feature point descriptors, where v_tm is an element of the set and m, the number of feature vectors, equals the number of test image feature points.
Write the set of all feature point and feature vector pairs of the test image as {u_ti, v_ti}, where u_ti is any element of the test image feature point coordinate set and v_ti is its corresponding descriptor feature vector, i = 1, 2, 3, ..., m. For each such pair, compute the feature distance to the feature vector of each feature point descriptor in the reference image: d_ij = ||v_ti − v_rj||, where v_rj is the feature vector of any feature point descriptor in the reference image, j = 1, 2, 3, ..., n. A test image pair {u_ti, v_ti} and a reference image pair {u_rj, v_rj} are defined to match if and only if the ratio of the shortest feature distance d_1 = min over j of d_ij to the second-shortest feature distance d_2 is less than some constant t, i.e. d_1/d_2 < t, where 0 < t ≤ 1.
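The distance computation and ratio test above, written out in NumPy for clarity. The descriptor matrices correspond to the sets V_t (m×128) and V_r (n×128); t = 0.7 is an illustrative default, the patent only requires 0 < t ≤ 1.

```python
import numpy as np

def ratio_test_matches(V_t, V_r, t=0.7):
    """Return (i, j) index pairs with d1/d2 < t, as defined above."""
    # pairwise feature distances d_ij = ||v_ti - v_rj||
    d = np.linalg.norm(V_t[:, None, :] - V_r[None, :, :], axis=2)
    order = np.argsort(d, axis=1)          # reference points sorted by distance
    rows = np.arange(len(V_t))
    d1 = d[rows, order[:, 0]]              # shortest feature distance
    d2 = d[rows, order[:, 1]]              # second-shortest feature distance
    keep = d1 / d2 < t
    return list(zip(rows[keep], order[keep, 0]))
```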
The set of matched points obtained in this way is the matching pair set P of the images:
P = {(u_t,1, u_r,1), (u_t,2, u_r,2), (u_t,3, u_r,3), ..., (u_t,k, u_r,k)}, k pairs in total, where u_r,k ∈ U_r and u_t,k ∈ U_t. Assuming the correct relation between these matching feature points is a projective one, the spatial correspondence between the elements can be represented by a projection mapping matrix H, i.e.:
u_r,k = H·u_t,k. The correct matches among the matching pairs P are obtained by the random sample consensus algorithm (RANSAC), which also yields the correct projection mapping matrix H.
Using the image projective transformation technique, the test image I_t is projectively transformed to obtain the pose-corrected image I_w:
I_w(x, y) = I_t(H⁻¹·(x, y, 1)ᵀ), where I_t is the test image, (x, y) is any pixel coordinate in the image, and H⁻¹·(x, y, 1)ᵀ is the inverse projection mapping applied to the (homogeneous) pixel coordinate.
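In practice the warp is one OpenCV call; the naive loop below spells out the inverse-mapping semantics of the formula, filling each output pixel (x, y) from the test image at H1⁻¹·(x, y, 1)ᵀ, and is for illustration only.

```python
import cv2
import numpy as np

def warp_to_reference(test, H1, size):
    """Fast path: warpPerspective inverse-maps each output pixel internally."""
    w, h = size
    return cv2.warpPerspective(test, H1, (w, h), flags=cv2.INTER_LINEAR)

def warp_naive(test, H1, size):
    """Per-pixel view of I_w(x, y) = I_t(H1^-1 (x, y, 1)^T); slow, illustrative."""
    w, h = size
    Hinv = np.linalg.inv(H1)
    out = np.zeros((h, w) + test.shape[2:], dtype=test.dtype)
    for y in range(h):
        for x in range(w):
            p = Hinv @ np.array([x, y, 1.0])          # inverse-map the coordinate
            u, v = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
            if 0 <= u < test.shape[1] and 0 <= v < test.shape[0]:
                out[y, x] = test[v, u]                # nearest-neighbour sample
    return out
```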
Second, illumination estimation and correction:
The illumination information of an image can be reflected by the grey-level histogram of a local region. Through the pose estimation and correction, the obtained match set P = {(u_t,1, u_r,1), (u_t,2, u_r,2), (u_t,3, u_r,3), ..., (u_t,k, u_r,k)} and its neighbourhood are taken as the region commonly visible in the two images. Compute the grey-level histograms of the corresponding regions of the pose-corrected image I_w and the reference image, denoted h_t and h_r respectively. Following the histogram specification method, obtain the grey-level transformation relation i = f(j) between the images, where j and i are the grey values before and after the transformation. Applying this illumination transformation to I_w yields the illumination-corrected image I_{w,l}.
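A sketch of the region-restricted illumination estimate. The patent takes the match points and their neighbourhood as the commonly visible region without fixing its exact shape, so the convex hull of the inlier match points serves here as an assumed stand-in for that region; 8-bit grayscale images are assumed.

```python
import cv2
import numpy as np

def masked_cdf(img, pts):
    """Normalised grey-level CDF over the convex hull of the match points."""
    mask = np.zeros(img.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(pts.astype(np.int32)), 255)
    hist = cv2.calcHist([img], [0], mask, [256], [0, 256]).ravel()
    return np.cumsum(hist) / max(hist.sum(), 1.0)

def correct_illumination(reference, warped, pts_r, pts_w):
    """Build i = f(j) from the shared region and apply it to I_w."""
    cdf_r = masked_cdf(reference, pts_r)   # h_r over the shared region
    cdf_w = masked_cdf(warped, pts_w)      # h_t over the shared region
    f = np.searchsorted(cdf_r, cdf_w).clip(0, 255).astype(np.uint8)
    return f[warped]                       # the illumination-corrected I_{w,l}
```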
Third, image matching:
The pose- and illumination-corrected test image is matched against the reference image. Since the corrected image is similar to the reference image in both pose and illumination, a high matching rate and matching precision are easier to obtain.
Scale-invariant (HLSIFD) feature points are extracted and described with the SIFT descriptor. The feature points of the reference image and of the illumination-corrected image I_{w,l} are then matched, and the projection mapping H_{w,l} between these two images is computed.
The final relation sought by the image matching is then H = H_{w,l}·H_1, where H_1 is the projection mapping matrix from the pose estimation and H_{w,l} is the projection mapping between the illumination-corrected image and the reference image.
Fig. 2 shows the pose estimation and correction results.
In Fig. 2, the upper-left image is the reference image, the upper-right image is the test image, and the lower image is the test image after pose correction. The rectangle in the upper-right image marks the matching result; pose correction transforms it into the lower image, where the rectangle marks the matched target corrected to the pose of the reference image.
Fig. 3 shows the illumination estimation and correction results: the left image is the reference image, the middle image is the test image, and the right image is the result of illumination-correcting the middle image.
Fig. 4 shows the final matching result after illumination correction: the upper image of Fig. 4 is the reference image, and the lower image shows the matching result obtained after illumination correction. The rectangle in the lower image marks the target matched to the reference image, whose pose can then be corrected.
A specific embodiment is as follows:
The image feature extraction and matching algorithm based on image pose estimation comprises three processes: pose estimation and transformation, illumination estimation and transformation, and image matching. The steps are as follows.
The image pose estimation and correction step reduces the difficulty that pose change brings to image matching: through pose correction, the viewpoints of the images are approximately unified, so that a stable matching rate and matching precision can be obtained even under large viewpoint changes.
The pose estimation steps are as follows:
Step S11: extract scale-invariant (HLSIFD) feature points from the reference image and the test image respectively;
Step S12: describe all feature points with the scale-invariant feature description algorithm (SIFT) and compute their feature vectors;
Step S13: match the feature points extracted from the reference image and the first test image with the nearest-neighbour method;
Step S14: obtain the consensus set among all matched point pairs with the random sample consensus algorithm (RANSAC);
Step S15: compute the coarse projection mapping matrix H1 between the reference image and the test image from the consensus set;
Step S16: correct the pose of the test image via the projection mapping matrix H1.
Image illumination estimation and correction enable stable and accurate matching under large illumination changes. The illumination information can be represented by the relation between the grey-level histograms over the effective region matched in the first test image. The concrete steps are as follows:
Step S21: compute the grey-level histograms of the reference image and the pose-corrected test image, where the histogram of the reference image is taken over the effective region of the test image;
Step S22: using the histogram specification method, compute the histogram transformation function L between the reference image and the test image;
Step S23: according to the transformation function L of the image histograms, apply histogram specification to the pose-corrected image, thereby performing illumination correction on it.
The purpose of the pose and illumination correction is to reduce the difficulty of image matching; matching the corrected image yields a higher matching rate and precision. The concrete steps are as follows:
Step S31: compute the HLSIFD feature points of the pose- and illumination-corrected image;
Step S32: compute the SIFT feature vectors of the feature points;
Step S33: match the above feature vectors against all feature points of the reference image with the nearest-neighbour method;
Step S34: find the consensus set among all matched pairs with the random sample consensus algorithm (RANSAC);
Step S35: compute the projection mapping matrix H2 between the pose- and illumination-corrected image and the reference image from the consensus set;
Step S36: compute the projection mapping relation between the reference image and the test image: H = H2·H1.
In summary, the invention proposes a simple and effective image matching method based on image pose and illumination estimation. Extensive experiments on several international databases have verified the validity and stability of the method. The invention is easy to implement and stable in performance; it can improve the precision of image matching and provides a good foundation for higher-level computer vision methods such as image stitching and tracking.
The above is only a specific embodiment of the invention, but the protection scope of the invention is not limited thereto. Any person familiar with the art can, within the technical scope disclosed by the invention, conceive of transformations or replacements, which should all be encompassed within the scope of the invention. Therefore, the protection scope of the invention shall be subject to the protection scope of the claims.

Claims (5)

1. A feature-based image matching method, characterized in that the method comprises the following steps:
Step S1: match the test image against the reference image using the Harris-function-based feature detection algorithm (HLSIFD) and the nearest-neighbour matching algorithm, obtaining a pose estimate of the test image relative to the reference image;
Step S2: use the reference image to estimate the illumination of the target in the pose-estimated test image; compute the illumination change of the part of the test image contained in the reference image, and apply this illumination-change function to transform and correct the pose-estimated test image, obtaining an illumination-corrected test image;
Step S3: perform feature point detection and description on the illumination-corrected test image and the reference image, obtain the feature-point matches of the illumination-corrected test image, and, combined with the pose estimate of the test image, obtain the pose match of the illumination-corrected test image.
2. The feature-based image matching method according to claim 1, characterized in that the concrete steps of the pose estimation are as follows:
Step S11: extract scale-invariant feature points from the reference image and the test image respectively, through the Harris-function-based scale-invariant feature detection algorithm;
Step S12: perform scale-invariant feature description (SIFT) on all feature points of the reference image and the test image: centred on each feature point, crop a square window of fixed size; for the image inside the window, compute each pixel's gradient, gradient direction, and gradient weight to obtain the description of each feature point; accumulate the descriptions into a multi-dimensional histogram, finally forming the multi-dimensional feature vector of each feature point;
Step S13: match the feature vectors of the feature points extracted from the reference image and the test image with the nearest-neighbour method;
Step S14: obtain the consensus set among all matched point pairs with the random sample consensus algorithm (RANSAC);
Step S15: obtain the projection mapping matrix H1 between the reference image and the test image from the consensus set;
Step S16: apply a projective transformation to the test image through the projection mapping matrix H1, obtaining a coarse pose estimate of the test image.
3. The feature-based image matching method according to claim 1, characterized in that the concrete steps of the illumination estimation are as follows:
Step S21: compute the grey-level histograms of the reference image and the test image;
Step S22: using the histogram specification method, compute the histogram transformation function L between the reference image and the test image;
Step S23: according to the transformation function L of the reference-image and test-image histograms, apply histogram specification to the test image, thereby performing illumination correction on the pose-corrected image.
4. The feature-based image matching method according to claim 1, characterized in that the concrete steps of the pose matching of the illumination-corrected test image are as follows:
Step S31: compute the scale-invariant feature points of the transformed, illumination-corrected test image;
Step S32: compute the feature vectors of the feature point descriptors;
Step S33: match these feature vectors against all feature points of the reference image with the nearest-neighbour method;
Step S34: find the consensus set among all matched pairs with the random sample consensus algorithm (RANSAC);
Step S35: obtain the projection mapping matrix H2 between the transformed, illumination-corrected test image and the reference image from the consensus set;
Step S36: compute the projection mapping relation H between the illumination-corrected test image and the reference image: H = H2·H1, where H1 and H2 are the projection mapping matrices above.
5. The feature-based image matching method according to claim 1, further comprising adding, after the pose estimation, an estimation of the illumination of the image.
CN2009102415431A 2009-11-25 2009-11-25 A Method of Image Matching Based on Feature Detection Expired - Fee Related CN101777129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009102415431A CN101777129B (en) 2009-11-25 2009-11-25 A Method of Image Matching Based on Feature Detection


Publications (2)

Publication Number Publication Date
CN101777129A (en) 2010-07-14
CN101777129B CN101777129B (en) 2012-05-23

Family

ID=42513587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009102415431A Expired - Fee Related CN101777129B (en) 2009-11-25 2009-11-25 A Method of Image Matching Based on Feature Detection

Country Status (1)

Country Link
CN (1) CN101777129B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102486830A (en) * 2010-12-01 2012-06-06 无锡锦腾智能科技有限公司 Object micro texture identifying method based on spatial alternation consistency
CN103403739A (en) * 2011-01-25 2013-11-20 意大利电信股份公司 Method and system for comparing images
CN102959588A (en) * 2011-04-28 2013-03-06 中国科学院自动化研究所 Method for detecting tampering with color digital image based on chroma of image
CN102629330A (en) * 2012-02-29 2012-08-08 华南理工大学 Rapid and high-precision matching method of depth image and color image
CN103686194A (en) * 2012-09-05 2014-03-26 北京大学 Video denoising method and device based on non-local mean
CN103686194B (en) * 2012-09-05 2017-05-24 北京大学 Video denoising method and device based on non-local mean value
CN102980896B (en) * 2012-11-28 2015-10-14 西南交通大学 High ferro overhead contact line device auricle fracture detection method
CN102980896A (en) * 2012-11-28 2013-03-20 西南交通大学 Method for detecting breakage of lugs of high-speed rail contact net suspension device
CN103544492A (en) * 2013-08-06 2014-01-29 Tcl集团股份有限公司 Method and device for identifying targets on basis of geometric features of three-dimensional curved surfaces of depth images
CN103544492B (en) * 2013-08-06 2017-06-06 Tcl集团股份有限公司 Target identification method and device based on depth image three-dimension curved surface geometric properties
CN104680550A (en) * 2015-03-24 2015-06-03 江南大学 Method for detecting defect on surface of bearing by image feature points
CN106709500A (en) * 2015-11-13 2017-05-24 国网辽宁省电力有限公司检修分公司 A Method of Image Feature Matching
CN106709500B (en) * 2015-11-13 2021-12-03 国网辽宁省电力有限公司检修分公司 Image feature matching method
CN108073854A (en) * 2016-11-14 2018-05-25 中移(苏州)软件技术有限公司 A kind of detection method and device of scene inspection
CN109409283A (en) * 2018-10-24 2019-03-01 深圳市锦润防务科技有限公司 A kind of method, system and the storage medium of surface vessel tracking and monitoring
CN109409283B (en) * 2018-10-24 2022-04-05 深圳市锦润防务科技有限公司 Method, system and storage medium for tracking and monitoring sea surface ship
CN109553140A (en) * 2018-12-05 2019-04-02 江西书源科技有限公司 The long-range control method of household water-purifying machine
CN110084743A (en) * 2019-01-25 2019-08-02 电子科技大学 Image mosaic and localization method based on more air strips starting track constraint
CN110084743B (en) * 2019-01-25 2023-04-14 电子科技大学 Image mosaic and positioning method based on multi-strip initial track constraints
CN118115716A (en) * 2024-03-05 2024-05-31 北京大希科技有限公司 High-precision positioning method integrating vision and AR technology
CN118115716B (en) * 2024-03-05 2025-01-28 北京大希科技有限公司 A high-precision positioning method integrating vision and AR technology

Also Published As

Publication number Publication date
CN101777129B (en) 2012-05-23

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120523

Termination date: 20211125