Summary of the invention
Existing methods based on invariant image features are suitable only when the difference between images is small. When the difference is large, and especially when the pose of the object in the image changes greatly, their matching precision and stability are low. To solve this problem in the prior art, the purpose of the present invention is to provide a feature extraction and image matching algorithm based on image pose estimation.
To achieve this goal, the feature extraction and matching algorithm provided by the present invention comprises three processes: pose estimation and transformation, illumination estimation and transformation, and image matching. The method comprises the following steps:
Step S1: match the test image against the reference image by means of the Harris-function-based feature detection algorithm (HLSIFD) and a nearest-neighbor matching algorithm, obtaining a pose estimate of the test image relative to the reference image;
Step S2: using the reference image, estimate the illumination of the target in the pose-estimated test image: compute the illumination variation of the part of the test image that contains the target with respect to the reference image, then apply this illumination variation function to the pose-estimated test image to perform illumination transformation and correction, obtaining an illumination-corrected test image;
Step S3: perform feature point detection and description on the illumination-corrected test image and the reference image to obtain the feature point matches of the illumination-corrected test image, and combine them with the pose estimate of the test image to obtain the pose match of the illumination-corrected test image.
The concrete steps of the pose estimation are as follows:
Step S11: extract scale-invariant feature points from the reference image and the test image respectively, using the Harris-function-based feature detection algorithm;
Step S12: describe all feature points on the reference image and the test image with the scale-invariant feature description algorithm (SIFT): a square window of fixed size is cut out around each feature point; for the image inside the window, the gradient magnitude, the gradient direction and the gradient weight of each pixel are computed to obtain the description of each feature point; the descriptions are accumulated in a multidimensional histogram, finally forming a multidimensional feature vector for each feature point;
Step S13: match the feature vectors of the feature points extracted from the reference image and the test image with the nearest-neighbor method;
Step S14: obtain the consensus set of all matched point pairs with the random sample consensus algorithm (RANSAC);
Step S15: obtain the projection mapping matrix H1 between the reference image and the test image from the consensus set;
Step S16: apply a projective transformation to the test image by means of the projection mapping matrix H1, obtaining a coarse pose estimate of the test image.
The concrete steps of the illumination estimation are as follows:
Step S21: compute the gray-level histograms of the reference image and the test image;
Step S22: using the histogram specification method, calculate the histogram transformation function L between the reference image and the test image;
Step S23: apply histogram specification to the test image according to the transformation function L between the histograms of the reference image and the test image, thereby performing illumination correction on the pose-corrected image.
The concrete steps of the pose matching of the illumination-corrected test image are as follows:
Step S31: extract the scale-invariant feature points of the transformed, illumination-corrected test image;
Step S32: compute the feature vectors of the feature point descriptors;
Step S33: match these feature vectors against all feature points of the reference image with the nearest-neighbor method;
Step S34: find the consensus set of all matched pairs with the random sample consensus algorithm (RANSAC);
Step S35: obtain the projection mapping H2 between the transformed, illumination-corrected test image and the reference image from the consensus set;
Step S36: obtain the projection mapping H between the illumination-corrected test image and the reference image as H = H2·H1, where H1 is the projection mapping matrix from the pose estimation and H2 is the projection mapping matrix from step S35.
The described feature-based image matching method further comprises performing an illumination estimation of the image after the pose estimation.
With the method of the present invention, image matching maintains high precision and stability even when both the pose and the illumination change greatly. Image matching is a low-level problem in computer vision; it generally serves as a low-level engine that supports higher-level algorithms by providing them with high-quality matching results. Unlike other methods based on invariant feature extraction, the present invention does not directly use invariant feature points as the basis for matching; instead, it first estimates the pose and illumination relations between the images, corrects one of the images accordingly, and finally performs the matching on the corrected image. The method of the present invention is easy to implement and use, and can mainly be applied in the following areas:
(1) Surveillance and tracking systems based on feature tracking, helping the system track targets in a scene and obtain the behavioral semantics of targets of interest, so that events in the scene can be understood.
(2) Three-dimensional object modeling based on image feature point matching, used for morphological analysis and three-dimensional reconstruction of complex three-dimensional objects, which is of great value to virtual reality.
(3) Augmented reality systems based on feature tracking, used to obtain the three-dimensional spatial information of a scene by tracking static image feature points, and thereby enhance the scene content with virtual objects.
Embodiment
The detailed problems involved in the method of the present invention are described below with reference to the accompanying drawings. It should be noted that the described embodiment is only intended to facilitate the understanding of the present invention and does not limit it in any way.
The image matching algorithm based on image pose and illumination estimation improves the accuracy, stability and real-time performance of image matching. Using the estimation of pose and illumination, the present invention implements an image matching system; Fig. 1 shows the flowchart of the image matching algorithm based on image pose and illumination. The method is divided into three parts: pose estimation and correction, illumination estimation and correction, and image matching.
The pose estimation and correction part comprises the steps of: extracting scale-invariant feature points from the reference image and the first test image respectively; describing all feature points and computing their feature vectors; matching the feature vectors of the feature points extracted from the reference image and the first test image with the nearest-neighbor method; obtaining the consensus set of all matched point pairs with the random sample consensus algorithm (RANSAC); obtaining the projection mapping matrix between the reference image and the first test image from the consensus set; and correcting the pose of the second test image by means of this projection mapping matrix.
The illumination estimation and correction part comprises the steps of: computing the gray-level histogram of the part of the first matched test image that is contained in the reference image; using the histogram specification method, with the gray-level histogram contained in the reference image as the standard, calculating the histogram transformation function L between the reference image and the second test image; and, according to the transformation function L between the histograms of the reference image and the second test image, applying histogram specification to the second test image, thereby performing illumination correction on the pose-corrected image.
The image matching part comprises the steps of: extracting the scale-invariant feature points of the corrected image; computing the feature vectors of the feature point descriptors; matching these feature vectors against all feature vectors of the reference image with the nearest-neighbor method; finding the consensus set of all matched pairs with the random sample consensus algorithm (RANSAC); obtaining the projection mapping matrix H2 between the corrected matching image and the reference image from the consensus set; and obtaining the projection mapping matrix H between the second test image and the reference image as H = H2·H1, where H1 and H2 are the projection mapping matrices obtained in the preceding steps.
The minimal hardware configuration required by the method of the present invention is a computer with a P4 3.0 GHz CPU and 512 MB of memory. On hardware of this level, the method is implemented in the C++ programming language. The key steps involved in the method of the present invention are described in detail one by one below; the basic steps are the same as in the method described above, and their concrete form is as follows:
First, pose estimation and correction are performed:
The pose relation between images can be obtained from the spatial positions of corresponding feature points in the reference image and the test image. A feature point is simply a point that can be detected stably and has discriminative power. The scale-invariant feature point detection method SIFT is adopted for feature point detection. Scale-invariant feature point detection algorithms are currently widely used in image stitching, tracking and similar algorithms, and offer high stability and adaptability. SIFT feature detection is performed on the reference image and the test image respectively; suppose the detected feature point coordinate sets are (Ur, Ut):
Ur = {u_r1, u_r2, u_r3, ..., u_rn}, the feature point coordinate set of the reference image, where u_ri (i = 1, 2, 3, ..., n) is an element of this set and n is the number of feature points of the reference image;
Ut = {u_t1, u_t2, u_t3, ..., u_tm}, the feature point coordinate set of the test image, where u_tj (j = 1, 2, 3, ..., m) is an element of this set and m is the number of feature points of the test image.
Each feature point is described with the SIFT descriptor; the sets of feature vectors of the feature point descriptors are (Vr, Vt):
Vr = {v_r1, v_r2, v_r3, ..., v_rn}, the set of feature vectors of the feature point descriptors of the reference image, where v_ri (i = 1, 2, 3, ..., n) is an element of this set and n is the number of feature vectors, equal to the number of feature points of the reference image;
Vt = {v_t1, v_t2, v_t3, ..., v_tm}, the set of feature vectors of the feature point descriptors of the test image, where v_tj (j = 1, 2, 3, ..., m) is an element of this set and m is the number of feature vectors, equal to the number of feature points of the test image.
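By way of illustration only, the sets (Ur, Vr) and (Ut, Vt) can be produced with the SIFT implementation in OpenCV. The following is a minimal sketch assuming OpenCV 4.x (`opencv-python`) and hypothetical file names; it uses OpenCV's stock SIFT detector rather than the invention's HLSIFD detector:

```python
import cv2

def detect_and_describe(image):
    """Detect scale-invariant feature points and compute SIFT descriptors,
    producing the coordinate set U and feature vector set V defined above."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    U = [kp.pt for kp in keypoints]  # (x, y) coordinate of each feature point
    V = descriptors                  # one 128-dimensional SIFT vector per point
    return U, V

# "reference.png" and "test.png" are hypothetical file names
img_r = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img_t = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)
U_r, V_r = detect_and_describe(img_r)
U_t, V_t = detect_and_describe(img_t)
```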
For the test image, the set of all feature point and feature vector pairs is written {u_ti, v_ti}, where u_ti is an element of the test image feature point coordinate set and v_ti is its corresponding descriptor feature vector, i = 1, 2, 3, ..., m. For each such pair, the feature distance to the feature vector of each feature point descriptor in the reference image is computed:
d_i,j = ||v_ti − v_rj||,
where v_rj is the feature vector of any feature point descriptor in the reference image, j = 1, 2, 3, ..., n. A match between a test image pair {u_ti, v_ti} and a reference image pair {u_rj, v_rj} is defined as follows: the two match if and only if the ratio of the shortest feature distance d1 to the second-shortest feature distance d2 is less than a certain constant t, d1/d2 < t, where 0 < t ≤ 1.
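A minimal NumPy sketch of this nearest-neighbor ratio test, continuing from the sketch above; the value t = 0.8 is an assumed typical threshold, not one fixed by the text:

```python
import numpy as np

def ratio_test_match(V_t, V_r, t=0.8):
    """Match test descriptors to reference descriptors with the d1/d2 < t rule."""
    matches = []  # list of (test index i, reference index j)
    for i, v_t in enumerate(V_t):
        # d_i,j = ||v_ti - v_rj|| for every reference descriptor j
        d = np.linalg.norm(V_r - v_t, axis=1)
        j1, j2 = np.argsort(d)[:2]   # nearest and second-nearest neighbors
        if d[j1] / d[j2] < t:        # keep only distinctive matches
            matches.append((i, j1))
    return matches

matches = ratio_test_match(V_t, V_r)
```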
The set of matched points obtained in this way is the match set P of the images:
P = {(u_t1, u_r1), (u_t2, u_r2), (u_t3, u_r3), ..., (u_tk, u_rk)}, k pairs in total, where u_rk ∈ Ur and u_tk ∈ Ut. Assuming that the correct relation between these matched feature points is a projective relation, the spatial correspondence between the elements can be represented by a projection mapping matrix H, as:
u_rk = H·u_tk.
The correct matches within the match set P are selected by the random sample consensus algorithm (RANSAC), and the correct projection mapping matrix H (the matrix H1 of step S15) is computed.
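This step corresponds to OpenCV's RANSAC-based homography estimation. A sketch assuming the `matches` and coordinate sets from the previous sketches, with an assumed reprojection threshold of 3 pixels:

```python
import numpy as np
import cv2

# Matched coordinates from P, as k x 1 x 2 float arrays
pts_t = np.float32([U_t[i] for i, j in matches]).reshape(-1, 1, 2)
pts_r = np.float32([U_r[j] for i, j in matches]).reshape(-1, 1, 2)

# RANSAC selects the consensus set and estimates H1 such that u_r = H1 * u_t
H1, inlier_mask = cv2.findHomography(pts_t, pts_r, cv2.RANSAC, 3.0)
```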
Using image projective transformation, the test image It is projectively transformed to obtain the pose-corrected image Iw:
Iw(x, y) = It(H(x, y)^T),
where It is the test image, (x, y) is any pixel coordinate in the image, and H(x, y)^T denotes the inverse projection mapping applied to the pixel coordinate (x, y)^T.
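In practice this warp can be carried out with `cv2.warpPerspective`, which reads each output pixel through the inverse mapping exactly as the formula describes. A sketch continuing from the previous ones:

```python
import cv2

# Warp the test image into the reference image's frame: each output
# pixel (x, y) of I_w is sampled from I_t at the inverse-mapped coordinate
h, w = img_r.shape[:2]
img_w = cv2.warpPerspective(img_t, H1, (w, h))
```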
Second, illumination estimation and correction are performed:
The illumination information of an image can be reflected by the gray-level histogram of a local region. From the pose estimation and correction of the image, the obtained match set is P = {(u_t1, u_r1), (u_t2, u_r2), (u_t3, u_r3), ..., (u_tk, u_rk)}; this set and its neighborhood are taken as the region visible in both images. The gray-level histograms of the pose-corrected image Iw and of the corresponding region of the reference image are computed and denoted ht and hr respectively. According to the histogram specification method, the gray-level transformation i = f(j) between the images is obtained, where j and i are the gray values of the image before and after the transformation. The illumination transformation is then applied to Iw, giving the illumination-corrected image Iw,l.
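Histogram specification remaps the gray levels of Iw so that its cumulative distribution matches that of the reference. A minimal NumPy sketch of the transformation i = f(j) for 8-bit grayscale images; for simplicity it matches whole images rather than only the co-visible region described above:

```python
import numpy as np

def specify_histogram(src, ref):
    """Remap gray values of src so that its histogram matches ref's (i = f(j))."""
    # Normalized cumulative histograms (CDFs) of both 8-bit images
    cdf_src = np.cumsum(np.bincount(src.ravel(), minlength=256)) / src.size
    cdf_ref = np.cumsum(np.bincount(ref.ravel(), minlength=256)) / ref.size
    # f(j): for each source gray level j, the reference level with closest CDF
    f = np.uint8([np.argmin(np.abs(cdf_ref - cdf_src[j])) for j in range(256)])
    return f[src]  # apply the transformation as a lookup table

img_wl = specify_histogram(img_w, img_r)  # illumination-corrected image I_w,l
```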
Third, image matching is performed:
The pose- and illumination-corrected test image is matched against the reference image. Since the pose and illumination conditions of the corrected image are similar to those of the reference image, a higher matching rate and matching precision are easier to obtain.
Scale-invariant (HLSIFD) feature points are extracted and described with SIFT feature description. The feature points of the reference image and of the illumination-corrected image Iw,l are then matched, and the projection mapping Hw,l between these two images is computed.
The final relation between the images sought by the image matching is then:
H = Hw,l·H1,
where H1 is the coarse projection mapping matrix obtained in the pose estimation and Hw,l is the projection mapping matrix between the illumination-corrected image Iw,l and the reference image.
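A sketch of the final composition, also projecting the reference target's bounding rectangle into the original test image (as drawn in the figures described below); `H_wl` stands for Hw,l and is assumed to have been estimated from the matches between Iw,l and the reference image in the same way as `H1`:

```python
import numpy as np
import cv2

# Final mapping from the original test image to the reference image
H = H_wl @ H1  # u_r = H * u_t

# Map the reference target's rectangle into the test image using H^{-1}
h, w = img_r.shape[:2]
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
corners_in_test = cv2.perspectiveTransform(corners, np.linalg.inv(H))
```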
Fig. 2 shows the pose estimation result and the correction result. In Fig. 2, the upper left image is the reference image, the upper right image is the test image, and the lower image is the test image after pose correction. The rectangle in the upper right image marks the matching result; through the pose correction its pose is corrected into the lower image, where the rectangle marks the matched target corrected to the pose of the reference image.
Fig. 3 shows the illumination estimation result and the correction result. In Fig. 3, the left image is the reference image, the middle image is the test image, and the right image is the result of illumination correction of the middle image.
Fig. 4 shows the final image matching result after illumination correction. The upper image of Fig. 4 is the reference image; the lower image shows the matching result obtained after illumination correction. The rectangle in the lower image marks the target matched to the reference image, whose pose can then be corrected.
A specific embodiment is as follows:
The image feature extraction and matching algorithm based on image pose estimation comprises three processes: pose estimation and transformation, illumination estimation and transformation, and image matching. The steps are as follows:
The image pose estimation and correction step reduces the difficulty caused by pose changes in image matching. Through pose correction, the viewing angles of the images are brought into approximate agreement, so that the images can achieve a stable matching rate and matching precision even under large viewpoint changes.
The pose estimation steps are as follows:
Step S11: extract the scale-invariant (HLSIFD) feature points of the reference image and the test image respectively;
Step S12: describe all feature points with the scale-invariant feature description algorithm (SIFT) and compute their feature vectors;
Step S13: match the feature points extracted from the reference image and the first test image with the nearest-neighbor method;
Step S14: obtain the consensus set of all matched point pairs with the random sample consensus algorithm (RANSAC);
Step S15: obtain the coarse projection mapping matrix H1 between the reference image and the test image from the consensus set;
Step S16: correct the pose of the test image by means of the projection mapping matrix H1.
The image illumination estimation and correction step makes it possible to obtain stable and accurate matching under large illumination changes. The illumination information can be represented by the relation between the gray-level histograms of the effective region matched in the first test image. The concrete steps are as follows:
Step S21: compute the gray-level histograms of the reference image and the pose-corrected test image, where the gray-level histogram of the reference image is computed over the effective region of the test image;
Step S22: using the histogram specification method, calculate the histogram transformation function L between the reference image and the test image;
Step S23: apply histogram specification to the pose-corrected image according to the transformation function L of the image histograms, thereby performing illumination correction on the pose-corrected image.
The purpose of the pose and illumination correction of the image is to reduce the difficulty of image matching; matching the corrected images yields a higher matching rate and precision. The concrete steps are as follows:
Step S31: extract the HLSIFD feature points of the pose- and illumination-corrected image;
Step S32: compute the SIFT feature vectors of the feature points;
Step S33: match the above feature vectors against all feature points of the reference image with the nearest-neighbor method;
Step S34: find the consensus set of all matched pairs with the random sample consensus algorithm (RANSAC);
Step S35: obtain the projection mapping matrix H2 between the pose- and illumination-corrected image and the reference image from the consensus set;
Step S36: obtain the projection mapping relation between the reference image and the test image: H = H2·H1.
In summary, the present invention proposes a simple and effective image matching method based on image pose and illumination estimation. We have carried out a large number of experiments on several international databases to verify the validity and stability of the method. The present invention is easy to implement and stable in performance. It can improve the precision of image matching and provides a solid foundation for higher-level computer vision methods such as image stitching and tracking.
The above; only be the embodiment among the present invention; but protection scope of the present invention is not limited thereto; anyly be familiar with the people of this technology in the disclosed technical scope of the present invention; can understand conversion or the replacement expected; all should be encompassed in of the present invention comprising within the scope, therefore, protection scope of the present invention should be as the criterion with the protection domain of claims.