Summary of the invention
The object of the present invention is to provide a stereo matching algorithm based on an edge-adaptive window and weights, which adapts the support window size to the image edges, weights support points adaptively by their geometric distance, and measures similarity by color distance, so as to balance accuracy and runtime, effectively reduce matching noise, and improve matching precision in depth-discontinuity regions and low-texture regions.
The technical solution that achieves the object of the invention is:
A stereo matching algorithm based on an edge-adaptive window and weights, comprising the following steps:
Step 1: apply the Canny operator to the reference image to extract its edges;
Step 2: detect the image point by point and assign different neighborhood window sizes according to whether each point lies on an edge and how strong that edge is. Three neighborhood window sizes M, N, O (M > N > O) are chosen, and the detected image edges serve as the criterion: if the window center lies on a strong edge, the support window size is set to O; if the window center lies on a weak edge, the window size is set to N; otherwise the window size is set to M. Weights are then assigned to the neighborhood points according to their geometric distance to the window center, using the model shown in formula (1),
where (i, j) is the coordinate of a neighborhood point in the window, the weight depends on the geometric distance from this point to the window center, and a, s, and ω are weight regulation factors: a is the amplitude factor; ω is the exponential decay rate factor, and the larger ω is, the smoother the characteristic curve; s is the kurtosis factor, and the larger s is, the narrower the characteristic curve. Together, s and ω control the extent of the core region and its weight coefficients;
Step 3: calculate the color distance of every pair of corresponding elements in the neighborhood window matrices and constrain it with a truncation value; the expression for the color distance cw is shown in formula (2),
where (r1, g1, b1) and (r2, g2, b2) are the RGB three-channel brightness values of the two points, and (x1, y1), (x2, y2) are the coordinates of the corresponding elements;
Step 4: multiply each color distance by the corresponding distance weight in the neighborhood window matrix and accumulate the products; within the disparity range, take the shift that minimizes the accumulated sum as the optimal solution, i.e., the disparity of this point. Then jump back to step 2 to match the next point, until the entire image has been matched and the disparity map is obtained.
Compared with the prior art, the present invention has the following remarkable advantages:
Stereo matching algorithms that use a support window of fixed size and uniform reference values face a trade-off: a larger support window covers more brightness variation and therefore matches reliably in low-texture regions, but contains more erroneous information in occluded regions; a smaller window matches depth-discontinuity regions better, but is unsuitable for low-texture regions; moreover, each pixel in the window should carry a different reference value. The algorithm of the present invention dynamically chooses the support window size from the edge information of the image, uses the accumulated color distances as the similarity measure, and introduces into this similarity a weight model that satisfies the probability-curve characteristic, thereby making rational use of the matching information and obtaining a dense disparity map.
The present invention is described in further detail below in conjunction with the accompanying drawings.
Embodiment
As shown in Figure 2, the present invention is a stereo matching algorithm based on an edge-adaptive window and weights, comprising the following steps:
Step 1: apply the Canny operator to the reference image to extract its edges;
Step 2: detect the image point by point and assign different neighborhood window sizes according to whether each point lies on an edge and how strong that edge is. Three neighborhood window sizes M, N, O (M > N > O) are chosen, and the detected image edges serve as the criterion: if the window center lies on a strong edge, the support window size is set to O; if the window center lies on a weak edge, the window size is set to N; otherwise the window size is set to M. Weights are then assigned to the neighborhood points according to their geometric distance to the window center, using the model shown in formula (1),
where (i, j) is the coordinate of a neighborhood point in the window, the weight depends on the geometric distance from this point to the window center, and a, s, and ω are weight regulation factors: a is the amplitude factor; ω is the exponential decay rate factor, and the larger ω is, the smoother the characteristic curve; s is the kurtosis factor, and the larger s is, the narrower the characteristic curve. Together, s and ω control the extent of the core region and its weight coefficients, and the variation of the geometric distance weight fw satisfies the probability-curve characteristic, as shown in Figure 1;
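Step 2 above can be sketched in code. The following Python fragment is an illustrative sketch only: the patent text does not reproduce formula (1), so a generalized-exponential weight consistent with the described roles of a, ω, and s is assumed, and the strong/weak edge labels are assumed to come from a prior Canny pass (here they are given directly as a toy label).

```python
import math

# Illustrative sketch of step 2. Edge labels are assumed to come from a
# prior Canny pass (strong/weak from its double thresholds); formula (1)
# is not reproduced in the text, so a generalized-exponential form
# consistent with the described parameter roles is assumed:
#   a - amplitude factor, w - decay-rate factor (larger w -> smoother),
#   s - kurtosis factor (larger s -> narrower central peak).

STRONG, WEAK, NONE = 2, 1, 0            # hypothetical edge labels

def window_size(edge_label, M=15, N=9, O=7):
    """Support-window side length for one pixel (M > N > O)."""
    if edge_label == STRONG:
        return O                        # strong edge -> smallest window
    if edge_label == WEAK:
        return N                        # weak edge -> medium window
    return M                            # non-edge -> largest window

def fw(i, j, a=10.0, s=2.0, w=0.94):
    """Assumed geometric-distance weight of window offset (i, j)."""
    g = math.hypot(i, j)                # Euclidean distance to the center
    return a * math.exp(-(g ** s) / w)

print(window_size(STRONG), window_size(WEAK), window_size(NONE))  # 7 9 15
print(round(fw(0, 0), 3))               # 10.0: the peak at the center is a
print(round(fw(0, 1), 3))               # 3.452: the weight decays outward
```

With a = 10, s = 2, ω = 0.94 (the Tsukuba settings reported later), this assumed form is a narrow bell centered on the window center, which matches the described probability-curve behavior.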
Step 3: calculate the color distance of every pair of corresponding elements in the neighborhood window matrices and constrain it with a truncation value; the expression for the color distance cw is shown in formula (2),
where (r1, g1, b1) and (r2, g2, b2) are the RGB three-channel brightness values of the two points, and (x1, y1), (x2, y2) are the coordinates of the corresponding elements;
Step 4: multiply each color distance by the corresponding distance weight in the neighborhood window matrix and accumulate the products; within the disparity range, take the shift that minimizes the accumulated sum as the optimal solution, i.e., the disparity of this point. Then jump back to step 2 to match the next point, until the entire image has been matched and the disparity map is obtained.
Specifically, the truncation-value constraint in step 3 is as follows: an upper-limit truncated value ctw is introduced to make up for the deficiency of using the similarity alone. The upper-limit truncated value is selected as shown in formula (3), where T is the truncation threshold,
ctw=min{cw,T} (3)
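As a minimal sketch of formulas (2) and (3): the text does not reproduce formula (2), so an absolute (L1) difference over the RGB channels is assumed here for cw; formula (3) then clips it at T.

```python
# Minimal sketch of formulas (2)-(3). The exact expression of formula (2)
# is not reproduced in the text, so an absolute (L1) difference over the
# RGB channels is assumed for cw; formula (3) truncates cw at T.

def cw(p1, p2):
    """Assumed color distance between two (r, g, b) brightness triples."""
    (r1, g1, b1), (r2, g2, b2) = p1, p2
    return abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)

def ctw(p1, p2, T=5):
    """Truncated color distance of formula (3): ctw = min{cw, T}."""
    return min(cw(p1, p2), T)

print(cw((10, 20, 30), (11, 22, 30)))    # 3: small difference passes through
print(ctw((10, 20, 30), (50, 60, 70)))   # 5: a large outlier is clipped to T
```

The truncation caps the contribution of any single outlier pixel, which is how the upper limit makes up for the deficiency of using the raw similarity alone in occluded or noisy regions.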
Step 4 is specifically as follows: on the basis of dynamically choosing the support window size from the edge information of the image, the color distances are accumulated as the similarity, and the weight model satisfying the probability-curve characteristic is introduced into this similarity. First, the weighted color distances of the window elements are accumulated into SDC, which serves as the similarity measure, as shown in formula (4).
SDC(x,y,d)=sum{fw(i,j)×ctw[(x+i,y+j),(x+i+d,y+j)]} (4)
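Formula (4) can be sketched as follows. The fw and ctw used here are the assumed forms discussed above (formulas (1) and (2) are not reproduced in the text), and, exactly as written in (4), the disparity d shifts the first image coordinate.

```python
import math

# Sketch of formula (4): weighted, truncated color distances accumulated
# over the support window. fw and ctw use assumed forms, since formulas
# (1) and (2) are not reproduced in the text; as written in (4), the
# disparity d shifts the first image coordinate.

def fw(i, j, a=10.0, s=2.0, w=0.94):
    return a * math.exp(-(math.hypot(i, j) ** s) / w)

def ctw(p1, p2, T=5):
    return min(sum(abs(c1 - c2) for c1, c2 in zip(p1, p2)), T)

def sdc(left, right, x, y, d, half=1):
    """SDC(x, y, d) over a (2*half+1) x (2*half+1) support window."""
    return sum(fw(i, j) * ctw(left[x + i][y + j], right[x + i + d][y + j])
               for i in range(-half, half + 1)
               for j in range(-half, half + 1))

# Toy images: `right` equals `left` shifted by one along the first axis,
# so the cost vanishes exactly at the true disparity d = 1.
row = lambda v: [(v, v, v)] * 3
left  = [row(0), row(20), row(40), row(60)]
right = [row(0), row(0), row(20), row(40)]
print(sdc(left, right, 1, 1, 1))        # 0.0 at the true disparity
print(sdc(left, right, 1, 1, 0) > 0)    # True: a wrong disparity costs more
```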
Then the edge-based adaptive window algorithm, EAW, is introduced; in the combined EAW+SDC mode, region stereo matching that reconciles accuracy and runtime is realized.
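The winner-take-all selection of step 4 can be sketched independently of the cost details; any SDC-like cost function plugs into it. The toy quadratic cost below is a hypothetical stand-in with a known minimum, not the real accumulated cost.

```python
# Winner-take-all sketch of step 4: within the disparity range, the
# disparity with the smallest accumulated cost is taken as the optimum.
# The cost is abstracted as any SDC-like callable; the toy quadratic
# below is a hypothetical stand-in with a known minimum.

def wta_disparity(cost, d_min, d_max):
    """Return the disparity in [d_min, d_max] minimizing cost(d)."""
    return min(range(d_min, d_max + 1), key=cost)

toy_cost = lambda d: (d - 6) ** 2       # pretend the true disparity is 6
print(wta_disparity(toy_cost, 0, 15))   # 6, within the Tsukuba range 0-15
```

Repeating this per pixel, with the window size and weights chosen as in step 2, yields the dense disparity map.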
The effect of this patent can be further illustrated by the following results:
In order to test the performance of the algorithm of this patent and select the reference coefficients, a large number of embodiment experiments were carried out to analyze the algorithm. The embodiment environment is a notebook computer with an Intel Core 2 Duo T8100 CPU at 2.10 GHz and 2 GB of memory; the programming language is Matlab R2009a.
The SAD algorithm, the Yoon algorithm, and the algorithm of this patent were each used to perform stereo matching on the test images of the Middlebury stereo matching database. The Tsukuba test image is 384 × 288 pixels with a disparity range of 0 to 15, as shown in Fig. 5(a). The ground-truth disparity map, shown in Fig. 5(b), contains 8 disparity levels; it ignores the disparity levels in the background.
The optimal window size of the SAD algorithm was determined through the embodiments; the data, drawn from the embodiments and the Middlebury evaluation system, are shown in Figure 3 and indicate that the SAD algorithm achieves its lowest error with a window size of 15 × 15. The parameters required by the algorithm of this patent were obtained through embodiment evaluation, as shown in Figure 4.
(1) Using the SAD algorithm with window sizes of 9 × 9 and 15 × 15, the disparity is calculated and adaptively filtered to obtain the disparity maps in Fig. 5(c) and (d);
(2) Using the Yoon algorithm, with the embodiment parameters set entirely according to its published data, the disparity map shown in Fig. 5(e) is obtained;
(3) Using the algorithm of this patent, with a strong-edge window size of 7 × 7, a weak-edge window size of 9 × 9, a non-edge window size of 15 × 15, distance-weight factors a = 10, s = 2, ω = 0.94, and truncation value T = 5, the result is shown in Fig. 5(f). The statistics of mismatched points are shown in Fig. 5(g) and Fig. 5(h); in these figures, black dots are mismatched points, gray areas are occluded regions in which error points are not counted, and white areas are correct matches.
Qualitative analysis:
(1) After filtering, Fig. 5(c) still contains many noise points and mismatches, whereas Fig. 5(f) eliminates most of them and clearly shrinks the areas of those that remain, because the adaptive window method of this patent compensates for the information loss caused by weak texture;
(2) The contours in Fig. 5(e) are the best, while those in Fig. 5(c) and Fig. 5(d) are quite ragged; Fig. 5(f) shows some border fattening, but still a clear improvement. Since contours lie at the junction of disparity-discontinuity regions and occluded regions, this shows that the algorithm of this patent improves the matching effect in these regions;
(3) Details are better preserved in Fig. 5(f), showing that the algorithm of this patent reduces the loss of detail caused by enlarging the window.
Quantitative test:
In terms of accuracy, testing with the Middlebury online evaluation system shows that the matching error rate of the SAD algorithm is about 20%, while that of the algorithm of this patent is reduced to 6.7%, a clear improvement in the matching effect of every region. The evaluation results are shown in Table 1; the matching error rate is expressed as the ratio of the number of mismatched pixels to the total number of pixels of each region type. In the table, n-occ denotes the matching error rate of non-occluded regions, all denotes the error rate over the whole image, disc denotes the error rate of regions near depth discontinuities, and bad pixels denotes the overall matching error rate.
In terms of time, the SAD algorithm takes 6.6 s, while Yoon's adaptive weight algorithm (abbreviated Yoon AW) takes as long as 1152.5 s; the algorithm of this patent takes 7.5 s. The adaptive window method, together with replacing the SAD measure by the accumulated weighted color distance, offsets part of the computation added by the weighting and the multi-channel processing.
Table 1 Analysis of the embodiment results
While improving the accuracy, the algorithm of this patent keeps its computing speed close to that of the initial matching cost function SAD, and is therefore very competitive in balancing speed and accuracy. Tests on other images in the Middlebury database also yielded good matching results, as shown in Figure 6.
The results show that the algorithm of this patent can effectively reduce matching noise, improve the matching precision of edge regions and low-texture regions, and match quickly.
In order to verify the performance of the algorithm of this patent, the hardware platform required for binocular stereo vision experiments was built, and the algorithm of this patent was used to perform stereo matching on the captured images.
A stereo image pair was captured with the binocular stereo vision system that was built, as shown in Fig. 7(a) and Fig. 7(b); the image resolution is 2048 × 1536 and the disparity range is about 150 to 220 pixels. Stereo matching was performed with the algorithm of this patent, using the following parameters: window sizes of 31 × 31, 23 × 23, and 19 × 19, T = 100, ω = 0.94, and s = 1.3. The resulting disparity map is shown in Fig. 7(c).
Analysis of the above disparity map shows that the algorithm of this patent effectively separates the depth levels, with little noise and fairly distinct contours.
The results show that the algorithm of this patent can be effectively applied to the images captured by the embodiment system, with low matching noise and fast speed.
Theoretical analysis and comparison against the Middlebury database data and the embodiment data prove that this method has higher matching efficiency than the conventional stereo matching algorithms (SAD, SSD, NCC) and the adaptive weight method (Yoon AW).