GB2244621A - Machine vision stereo matching - Google Patents
Machine vision stereo matching
- Publication number
- GB2244621A GB2244621A GB9028124A GB9028124A GB2244621A GB 2244621 A GB2244621 A GB 2244621A GB 9028124 A GB9028124 A GB 9028124A GB 9028124 A GB9028124 A GB 9028124A GB 2244621 A GB2244621 A GB 2244621A
- Authority
- GB
- United Kingdom
- Prior art keywords
- image
- points
- point
- matching
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- G06T7/596—Depth or shape recovery from multiple images from stereo images from three or more stereo images
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/147—Details of sensors, e.g. sensor lenses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/12—Acquisition of 3D measurements of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Vascular Medicine (AREA)
- Databases & Information Systems (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Corresponding points 11 (U1, V1) and 15 (U2, V2) in at least two different images of a scene are matched by the use of epipolar lines. A point 11 is selected in one image and the position of the resulting epipolar line 16 is calculated using matrices defining parameters associated with the capturing of the image. The epipolar line 16 is used to identify points in the other image as potential matches to the selected point in the one image. Further constraints are applied to the identified points to find the one point in the other image that most likely matches the selected point. By using three images from different vantage points, intersecting epipolar lines giving a 'fix' can be obtained (Fig 2). <IMAGE>
Description
MACHINE VISION STEREO MATCHING
The invention relates to machine vision stereo matching, that is to say the matching of features between two different images of a scene.
The matching of features in images representing a scene viewed from at least two different viewing positions is a major problem in stereo vision in the art of machine vision systems. Hitherto, this problem has been approached by applying a set of constraints so that areas or features or both in two or more images are matched. This approach has met with varying degrees of success because there is no general solution to the problem and a set of constraints applied to one scene may not be appropriate to other scenes. Improperly applied constraints will lead to an interpretation of the images which does not correspond fully with the scene.
There is a reduction of one dimension when a three dimensional (3D) scene is captured in a two dimensional (2D) image. This dimensional reduction results in a loss of information about the scene because a point in the 2D image will correspond to a line in the 3D scene. Thus, the process of 2D image formation can be regarded as a many-to-one mapping.
That is to say, many points in the 3D space of the scene will map onto a single point in the 2D space of the image. Thus, the inverse operation of mapping 2D image space points onto 3D scene space points is a one-to-many mapping.
This loss of information can be and is avoided by using at least one further image of the scene taken from a different viewing position and then identifying corresponding points in the plural images that relate to the same point in the 3D space of the scene. In this way the process of mapping 2D image space points on to 3D scene space points is reduced to a one-to-one mapping. However, in adopting this approach a new problem is created, namely the problem of identifying corresponding points in the plural images; the correspondence problem. For any one pixel in one image there will be a large number of possibly corresponding pixels in the or each other image; in the extreme, every pixel in the other image is a possible match with the pixel in the first image, yet there is of course only one correct corresponding pixel.
The present invention provides a means by which the number of possible matches is reduced, thereby both reducing the processing time required to find a match and increasing the likelihood of a good match being made.
To obtain accurately triangulated 3D measurements from stereo images, a wide baseline (i.e. separation between the cameras) or, in the case of convergent stereo images, a wide viewing angle is required. The correspondence problem is more severe in wide baseline/angle stereo images because the changes between pairs of images can be very large, and this in turn makes matching of images more difficult.
These changes can be categorized into two types, geometric distortion and photometric distortion.
Geometric distortion is the change in the geometrical description of a matching entity between images.
Photometric distortion is caused by the change in orientation of a surface of an object in relation to the viewing point when a scene is viewed from a different viewing position. The degree of both types of distortion increases with increased baseline or viewing angle between the two viewing positions, although photometric distortion is less significant when a scene is evenly illuminated. In an embodiment of the invention to be described in greater detail hereinafter, a compromise is made so that the matching of images is possible and at the same time the recovered 3D measurements are accurate enough for subsequent use.
Most stereo matching algorithms recognize a common problem, namely that matching is only possible for those image points with distinctive image characteristics. Image points within homogeneous regions cannot be matched. In general, pixel or area based matching is more prone to image noise, whereas feature based matching suffers from both photometric and geometric distortions. The ambiguity of stereo matching can be decreased by increasing the sophistication of the image primitives used in matching.
The sophistication of matching primitives is, in ascending order:
i) pixel or area;
ii) edge pixels;
iii) linear or curved edge segments; and
iv) structural descriptions, i.e. shapes formed by edge segments.
Despite less matching ambiguity with higher level primitives, the pre-processing techniques commonly used to extract higher level primitives often produce artefacts which create additional problems to those that already exist.
The embodiment to be described uses edgels, that is edge pixels or elements, as the matching primitives. Current edge detection techniques are capable of producing edges to near sub-pixel accuracy and accordingly the effect of geometric distortion and photometric distortion is not severe in edgel matching. This approach reduces the problem of identifying corresponding pixels to that of resolving the ambiguities posed by matching edgels. The embodiment uses as few constraints as possible in order to make the stereo algorithm more general.
Matching constraints used in stereo can be classified into three groups: the geometrical constraints, which are derived from the imaging geometry; the photometric constraints, which are derived from image characteristics such as intensity or colour; and heuristics, such as the assumption of figural continuity.
In one aspect the present invention provides a method of matching corresponding points in at least two different images representing a scene as captured by a camera, each image having an associated camera matrix defining parameters related to the capturing of the image by the camera, the method comprising the steps of selecting a point in a first one of the images; calculating directly from the camera matrices of the first image and of a second one of the images the position of an epipolar line; and identifying in the second image points intersected by the epipolar line as potential matches to the selected point in the first image.
Once potential corresponding points have been selected or defined, additional constraints can be applied to the points to describe the nature of those points. The constraints may be for example any or all of the local edge orientation, the intensity normal and/or the surrounding colour of the or each image point. In this way, the similarity between each potential point and the point of interest can be examined to facilitate selection of the best matching point from the potential points.
According to another aspect of the invention there is provided a method of matching corresponding points in a plurality of images representing different aspects of a scene, the method comprising selecting points in one image, calculating an epipolar line in each other image for each selected point, identifying in the or each other image points which are potential matches to each of the selected points as points on or near the epipolar lines, applying to each identified point in each other image a weighting representing the likelihood of the identified points matching the selected points, and selecting as the best match to the selected points the combination of points which results in a minimal weighting summation.
According to a further aspect of the invention there is provided a method of matching image points in at least two different images of a scene, the method comprising selecting an image point in one image and candidate matching image points in another image and applying a shortest path algorithm to the candidate matching points thereby to identify a single point in the other image corresponding to the selected point in the one image.
In international patent application no. WO89/01850, now assigned to us, the teachings of which are incorporated herein by reference, there is disclosed an adaptive vision based controller in which, inter alia, an electronic camera acquires images representing a scene, which images can be processed to determine information related to the scene. The present invention can be incorporated into the system of this controller or indeed any other machine vision system.
Therefore, according to a further aspect of the invention there is provided an image system in which an electronic camera acquires images representing a scene, the system comprising defining means for defining matrices representing parameters associated with the acquisition of the images by the camera; image selecting means for selecting from the acquired images a first image and a second image; point selecting means for selecting a point in the first image; calculating means for calculating directly from the matrices associated respectively with the first and the second image the position of an epipolar line; and identifying means for identifying in the second image points intersected by the epipolar line as potential matches to the selected point in the first image.
The above and further features of the invention together with advantages thereof will become clearer from consideration of the following detailed description of exemplary embodiments of the invention given with reference to the accompanying drawings.
In the drawings:
Figure 1 is a representation of the geometry associated with two images of a scene;
Figure 2 is a representation of the geometry associated with three images of a scene;
Figure 3 is a graphical representation of an intensity contour;
Figure 4 shows masks suitable for extracting colours across an edge;
Figure 5 shows the construction of a layered network of potential matches;
Figure 6 illustrates how dummy layers can be added to the network of Figure 5; and
Figure 7 shows a matching algorithm.
Referring now to Figure 1 of the accompanying drawings, there is represented the geometry associated with stereo imaging. A point 10 in 3D space (x, y, z) is seen as a point 11 in a first image plane (U1, V1) when viewed from a first observation point O1. In other words, the image plane (U1, V1) is in 3D space at a position such that a line 12 between the observation point O1 and the point 10 will intersect the plane (U1, V1) at the point 11. When the point 10 is viewed from a second observation point 13, a line 14 will intersect a second image plane (U2, V2) at a point 15 in the second image plane. The first line 12 will be seen in the second image plane (U2, V2) as an epipolar line 16.
Thus, as will be appreciated from Figure 1, the space in the second image plane (U2, V2) in which a point corresponding to the point 11 will lie can be reduced to the epipolar line 16. The use of an epipolar line in the second image reduces significantly the number of points in the second image that might correspond to a point in the first image, and this can be put to good effect either to reduce the processing time taken to match points and identify features or to increase the accuracy or confidence of a match of points and features between images.
As shown in Figure 2 of the accompanying drawings, for more than two views of a scene all the notional lines 12, 14, 17 projected from the respective viewing positions (not shown) will intersect at the point 10 in the 3D space. The viewing positions can be selected so that at least two epipolar lines will be seen in at least one image. In the example shown in Figure 2, the epipolar line 16 is seen in the second image plane (U2, V2) and the epipolar line 16, together with an epipolar line 18 resulting from the notional line 14, is seen in a third image plane (U3, V3). The point 19 at which the epipolar lines 16, 18 intersect in the third image plane (U3, V3) corresponds to the point 10 in 3D space.
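By way of illustration, a minimal sketch of the three-view 'fix' is given below: assuming each epipolar line in the third image is available in homogeneous form a·u + b·v + c = 0 (a representation chosen here for convenience, not prescribed by the description), the intersection point 19 is obtained from the cross product of the two coefficient vectors.

```python
import numpy as np

def intersect_lines(l1, l2):
    """Intersect two image lines given in homogeneous form (a, b, c),
    i.e. a*u + b*v + c = 0, returning the (u, v) intersection point.
    Assumes the two lines are not parallel."""
    p = np.cross(l1, l2)        # homogeneous intersection point
    return p[:2] / p[2]         # inhomogeneous image coordinates

# Illustrative coefficients for epipolar lines 16 and 18 in plane (U3, V3)
line_16 = np.array([1.0, -2.0, 5.0])
line_18 = np.array([0.5, 1.0, -7.0])
print(intersect_lines(line_16, line_18))   # candidate for point 19
```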
It is a simple matter to calculate the position of epipolar lines in the second and subsequent images, the problem being merely one of image geometry. The calculation of epipolar lines per se is considered to be well within the scope of the notional skilled man, but for the sake of completeness a brief description of the calculation will be given hereinafter. It should however be noted that such calculations cannot be made until such time as the camera or other means by which images are acquired has been calibrated. One preferred method of calibration is that developed by Bowman and Forrest and disclosed in "Transformation Calibration of a Camera Mounted on a Robot", Image and Vision Computing, Vol. 5, No. 4, 1987, the teachings of which are incorporated herein by reference.
From Figure 1 it is known that a point 10, x = (x, y, z), in 3D space coordinates can be projected onto an image plane (U1, V1), resulting in an image point u = (u1, v1). The transformation between 3D space and image space is a matrix T which represents the transformation as performed by the camera, where:

(x, y, z, 1) T = (su, sv, 0, s), or xT = u.

The point 10, denoted P, will be projected onto the two image planes, forming an image point (u1, v1) in the first or left hand image and an image point (u2, v2) in the second or right hand image. This can be expressed as:

(x, y, z, 1) Tl = (s1 u1, s1 v1, 0, s1) for the left hand view, and

(x, y, z, 1) Tr = (s2 u2, s2 v2, 0, s2) for the right hand view,

where Tl and Tr are the transformation matrices for the left hand and right hand views respectively, i.e. the camera matrices. The elements in Tl and Tr are as follows:

Tl = | Tl11 Tl12 0 Tl14 |        Tr = | Tr11 Tr12 0 Tr14 |
     | Tl21 Tl22 0 Tl24 |             | Tr21 Tr22 0 Tr24 |
     | Tl31 Tl32 0 Tl34 |             | Tr31 Tr32 0 Tr34 |
     | Tl41 Tl42 0 Tl44 |             | Tr41 Tr42 0 Tr44 |

The line 12, which passes through O1 and P, can be expressed parametrically as

(x, y, z) = (X1, Y1, Z1) + λ (Nx, Ny, Nz),

where (X1, Y1, Z1, 1) Tl = (0, 0, 0, 0) defines the first observation point O1, and

Nx = (Tl21 - u1 Tl24)(Tl32 - v1 Tl34) - (Tl31 - u1 Tl34)(Tl22 - v1 Tl24),

Ny = (Tl31 - u1 Tl34)(Tl12 - v1 Tl14) - (Tl11 - u1 Tl14)(Tl32 - v1 Tl34), and

Nz = (Tl11 - u1 Tl14)(Tl22 - v1 Tl24) - (Tl21 - u1 Tl24)(Tl12 - v1 Tl14).

The epipolar line 16 can therefore be obtained by a direct transformation, namely

(X1 - λ Nx, Y1 - λ Ny, Z1 - λ Nz, 1) Tr = (s2 u, s2 v, 0, s2).

The projection of the first observation point O1 gives the epipole (ue, ve) in the second image, and this is the point from which all the epipolar lines extend in the second image. The position of this epipole (ue, ve) in the second image is found by letting λ = 0 in the above equation. The equation for the epipolar line can then be expressed as

(v - ve) / (u - ue) = (vλ - ve) / (uλ - ue),

where uλ and vλ are obtained by letting λ equal an integer constant.
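The following NumPy sketch follows the above derivation directly, assuming the camera matrices are supplied as 4x4 arrays Tl and Tr in the row-vector convention used above; the function names are illustrative only and do not form part of the described system.

```python
import numpy as np

def optical_centre(T):
    """Solve (X, Y, Z, 1) T = (0, 0, 0, 0) for the observation point.
    Only columns 0, 1 and 3 carry information because the third column
    of the camera matrix is zero."""
    cols = [0, 1, 3]
    A = T[:3, cols].T            # coefficients of X, Y, Z
    b = -T[3, cols]              # constants contributed by the trailing 1
    return np.linalg.solve(A, b)

def project(T, X):
    """Project a 3D point X through camera matrix T, giving (u, v)."""
    s = np.append(X, 1.0) @ T    # (su, sv, 0, s)
    return s[:2] / s[3]

def epipolar_line(Tl, Tr, u1, v1, lam=1.0):
    """Return two points on the epipolar line in the second image for the
    image point (u1, v1) in the first image: the epipole (lambda = 0) and a
    second point obtained with a non-zero lambda, which fixes the line."""
    O1 = optical_centre(Tl)
    # Direction (Nx, Ny, Nz) of the ray through O1 and the scene point,
    # equivalent to the 2x2 determinants given above.
    a = Tl[:3, 0] - u1 * Tl[:3, 3]
    b = Tl[:3, 1] - v1 * Tl[:3, 3]
    N = np.cross(a, b)
    return project(Tr, O1), project(Tr, O1 - lam * N)
```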
Once epipolar lines have been defined in the or each other image, the images can be processed to identify features by applying various constraints to the images. In other words, the processing is arranged to look for similarities between a point in the first image and points selected by the identification of epipolar lines in the second and any subsequent images.
As mentioned above, in this embodiment edge pixels (edgels) are used as the image primitive to be matched. Edges are therefore first identified in the images. The preferred method of edge detection is that developed by Marr and Hildreth as modified in the system disclosed in our abovementioned international patent application, though other known methods could instead be used. Once edgels have been thus identified an orientation constraint is applied to each edgel to facilitate identification of corresponding edgels.
For an edgel in a segment, the orientation of the edgel has a slightly different meaning to that of a line segment. The present embodiment uses local edge orientation to refer to the tangent to a local edge segment at an edgel (u, v). Each edgel has two directional first derivatives of intensity, Gu and Gv, which are the components of the gradient G(u, v) of the intensity I at the edgel:

G(u, v) = ∂I(u, v)/∂u + ∂I(u, v)/∂v.

Gu and Gv are obtained from a gradient operator such as the so-called Sobel or Canny operators.

The local edge orientation of an edgel is expressed by tan⁻¹(Gv/Gu). Since the local edge orientation is susceptible to both geometric distortion and image noise, this constraint is used to filter the obvious false matches by restricting the orientation difference between the matching edgels to a limit of ±π/4. This limit can be varied according to the viewing angle. The constraint is then applied between images 1 and 2 as follows:

0 < tan Δθ = (Gv1 Gu2 - Gv2 Gu1) / (Gu1 Gu2 + Gv1 Gv2) < 1,

where Gv and Gu are the vertical and horizontal gradients of the image intensity at an edgel.
Thus, one way in which the system can look for similarities between images is to examine the gradient of an edgel in one image and to compare that gradient to the respective gradients of edgels identified by the epipolar line in another image.
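As a concrete sketch of that comparison, the orientation test can be written directly in terms of the gradients; the function below is illustrative and simply rejects candidate pairs whose local edge orientations differ by more than the ±π/4 limit discussed above.

```python
def orientation_compatible(Gu1, Gv1, Gu2, Gv2):
    """Local edge orientation constraint between an edgel in image 1 and a
    candidate edgel in image 2.  Gu, Gv are the horizontal and vertical
    intensity gradients (e.g. from a Sobel operator).  The pair is kept only
    if the gradients point the same general way and |tan(dtheta)| < 1,
    i.e. the orientation difference is within +/- pi/4."""
    num = Gv1 * Gu2 - Gv2 * Gu1      # proportional to sin(dtheta)
    den = Gu1 * Gu2 + Gv1 * Gv2      # proportional to cos(dtheta)
    return den > 0 and abs(num) < den
```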
Another constraint that can be applied is to examine the intensity normal of edgels. Figure 3 of the accompanying drawings shows an intensity profile K of an image in the image plane. For a point S in the image plane, where S = f(u, v, w) and w = g(x, y), there will be an intensity normal vector Bn, where Bn = ai + bj + ck. The position of the point S is given by a position vector r, where r = ui + vj + wk, and from this Bn can be calculated.

The gradient in u for the position r is given by

∂r/∂u = i + (∂w/∂u) k

and the gradient in v is given by

∂r/∂v = j + (∂w/∂v) k.

Therefore,

Bn = (∂r/∂u × ∂r/∂v) / |∂r/∂u × ∂r/∂v|.

It should be noted that the processing can be simplified where the above discussed gradient constraint is applied, because ∂w/∂u and ∂w/∂v are in fact the above discussed Gu and Gv respectively.
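Under that simplification, the intensity normal can be computed from the same gradients; a minimal sketch (again assuming Gu and Gv have already been obtained from a gradient operator) is:

```python
import numpy as np

def intensity_normal(Gu, Gv):
    """Unit intensity normal Bn at an edgel, treating the intensity surface
    as r = (u, v, w) with dw/du = Gu and dw/dv = Gv."""
    ru = np.array([1.0, 0.0, Gu])    # dr/du = i + (dw/du) k
    rv = np.array([0.0, 1.0, Gv])    # dr/dv = j + (dw/dv) k
    n = np.cross(ru, rv)             # = (-Gu, -Gv, 1)
    return n / np.linalg.norm(n)
```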
An edgel may also be characterized by a sharp transition of colour as well as, or instead of, intensity. The colours on each side of a detected edge are calculated using specially designed orientation masks as shown in Figure 4 of the accompanying drawings. The function of the masks is to obtain a representative colour value on each side of an edge, and four orientation masks 20, 21, 22, 23 are derived from, for example, a weighted half-Gaussian kernel. At least four orientation masks are required to cover the eventuality of the detected edge having an orientation approximately horizontal as shown by edge 24, vertical as shown by edge 25, inclined left to right as shown by edge 26, or inclined right to left as shown by edge 27. For edges having different orientations to those shown, additional masks can be used or the mask corresponding to the approximate orientation can be selected. The centre, i.e. the maximum weighting, of the kernel is placed approximately 4 pixels away from the edgel in the image and therefore the weights decrease nearer to the edge. The masks are applied symmetrically about the edge and are applied repeatedly for each edgel in the edge.
In this way a representation of the colours on each side of the edge is built up.
The luminance or intensity L of a pixel/edgel is calculated from L = R + G + B, where R, G and B are the three primary colour components of the pixel. R, G and B can be normalised to r, g and b as follows:

r = R/L, g = G/L and b = B/L.

Only r and g need to be calculated, because r + g + b = 1 and b can therefore be calculated readily from r and g. A colour vector c is then formed from r and g, and thus the colour on each side of the edge is represented by a colour vector in normalised colour coordinates. Vector subtraction is then used to compare the colour similarity between an edgel in one image and potential candidate edgels in the other image. A difference limit E is used to accept potential matches, the difference limit E being defined as

|ca - cb| < E,

where ca and cb are the colour vectors being compared.
In order to handle an occlusion situation, that is a feature in one image being obscured by another feature in the other image, one should not impose colour matches on both sides of the edge. The requirement therefore is that at least one side of potential matching edgels should have the same colour.
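A sketch of this colour test follows; the pairing of 'left' and 'right' sides between the two images and the value of the difference limit are assumptions made for illustration only.

```python
import numpy as np

def colour_vector(R, G, B):
    """Normalised colour coordinates (r, g) for a representative RGB value
    taken from an orientation mask on one side of an edge; b = 1 - r - g."""
    L = R + G + B
    return np.array([R / L, G / L])

def sides_match(c_a, c_b, limit):
    """True if two colour vectors differ by less than the limit."""
    return np.linalg.norm(c_a - c_b) < limit

def colour_compatible(left_a, right_a, left_b, right_b, limit=0.05):
    """Colour constraint between edgel a in one image and candidate b in the
    other, given colour vectors for each side of the edge.  Only one side is
    required to match, so that occluded edges are not rejected outright."""
    return (sides_match(left_a, left_b, limit) or
            sides_match(right_a, right_b, limit))
```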
If edgel matching is limited to a local comparison, ambiguities can arise. Such ambiguities can be reduced by using the edge segment, along which a disparity continuity constraint usually applies. This constraint comes from the assumption that a continuous edge in 3D space gives rise to a continuous edge in the image. This implies that the disparity value varies smoothly along a continuous edge. This is the coherence principle proposed by Prazdny in his paper "Detection of Binocular Disparities", Biological Cybernetics 52: 93-99, 1985. It should be noted that a continuous edge in an image may not necessarily come from just one edge in 3D space; therefore, there may be discontinuities of disparity along a continuous edge in the image. Particular attention should therefore be paid to this point when the disparity continuity constraint is applied to edgel matching.
Once an edge has been identified in an image and for each edgel in that image potential corresponding edgels in another image have been selected, a final selection must be made to identify corresponding edgels in the two images on a one to one basis. In other words, an edgel in the second or any other image must be uniquely defined as that corresponding to an edgel in the first image.
Stereo matching techniques will usually produce more than one acceptable match, even when all of the constraints described hereinabove are applied to an undecided edgel. The requirement for matching uniqueness dictates that only one must be selected from these acceptable matches, and the correspondence problem can thus be considered as a combinatorial optimization problem. A potential matching candidate will satisfy the following constraints:

1. The matched pixels or matching entities will all lie on the respective epipolar lines or, in the case of more than two views, at the intersections of the respective epipolar lines.

2. There will be a similarity of image attributes between all the matched entities.

3. A continuous 2D edge segment will usually correspond to a continuous edge segment, whether it is a physical edge or an edge segment resulting from a shadow. The disparity continuity constraint can then be applied.
It is possible to formulate a measure which reflects the quality or goodness of match of different potential candidates based on the above three criteria and this measure is used to seek a combination of the potential candidates which will give the 'best' measure.
The coherence principle requires that disparities of neighbouring elements corresponding to the same 3D object be similar, i.e. neighbouring image points corresponding to the same object should have nearly the same disparities. The Markov Random Field (MRF) image model, proposed by Geman and Geman in their paper "Stochastic Relaxation, Gibbs Distributions and the Bayesian Restoration of Images", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-6, pp. 721-741, November 1984, is therefore also applicable in stereo matching. Using the MRF model, matching adjacent edgels on an edge segment is an interdependent process, as the disparities between these edgels must be similar. The MRF model quite often leads to the use of a simulated annealing algorithm, which usually takes a large amount of time to arrive at near optimal matching configurations.
Simulated annealing will only be suitable if the solution space is very large and where a deterministic algorithm would also run into time and space problems.
If more than two views are used, the application of the similarity and the epipolar line constraints will give a very confined solution space where a deterministic optimization algorithm can be used in preference to a stochastic one.
Although the MRF is best understood as a Markov chain with transition probabilities for each edgel, the goodness measure can be applied in such a way that the goodness of match of an edgel depends not only on the combinations of potential candidates for itself but also on those of its neighbours. In this way, edgel-segment matching, which is a compromise between matching ambiguity and the generality of the matching algorithm, can be used to preserve the advantages of edgel matching while reducing the matching ambiguity using the disparity continuity constraint. This measure is then the transition probability in a different guise and, in the framework of optimization, this measure of goodness of match is known as the cost function.
The above discussed local edge orientation and intensity normal at a corresponding edgel vary from image to image due to the change of viewing position.
The above discussed colour match constraint requires calculation of the colour value on each side of an edge using masks with a certain size of support, and it therefore suffers from foreshortening effects due to a wide baseline or wide viewing angle. Therefore, these similarity constraints can neither be minimised nor maximised. They can, however, determine whether a potential match should be considered at all. The cost function C in canonical form is then formulated in terms of a similarity term S = f(Δc, N, tan θ), where S = 1 if the similarity constraints are satisfied and S = 0 otherwise, and where

Δc, N and tan θ are respectively the above discussed colour vector difference, the intensity normal and the local edge orientation,

n is the total number of views,

dj is the sum of distances of the potential matches from the epipolar line (j = 2) or from the intersections of the epipolar lines (j > 2) for the i th and (i+1) th edgels in a segment,

Dj is the change of disparities between neighbouring edgels (the i th and the (i+1) th edgels in a segment), and

β is a weighting factor.
It should be noted that the cost reflects the goodness of match of two neighbouring edgels. If the similarity constraints are applied first to filter the obvious false matches, the cost function is simplified: the optimization constraints are then reduced to the distances from the epipolar lines and the intersections of epipolar lines, and to the change of disparities between neighbouring edgels, only. We want the combination of matches for the edgels along a segment that gives the minimum total cost; solving this minimization gives a solution that is globally optimal as well as locally optimal.
This will be better understood by reference to Figure 5 of the accompanying drawings. For a continuous edge segment containing l edgels, each edgel i will find pi potential candidates, and a layered network of l layers can be constructed with pi nodes in the i th layer of the network. Cost paths 30, 31, 32 can then be generated for each node across the layers of the network. The paths are unidirectional from the i th layer to the (i+1) th layer, and paths within the same layer are forbidden.

The paths 30, 31, 32 connecting two nodes are associated with a weight which reflects the cost of choosing each node as a match for the respective edgels. This cost is obtained using the above discussed simplified cost function. The complexity of the network is indicated by the number of nodes and the number of paths connecting the nodes.
The total number of nodes is the sum of pi over the l layers, and the number of arcs is the sum of the products pi pi+1 over the l-1 pairs of adjacent layers.
The aim is therefore to find a route which will give the minimum cost from the first edgel (the i = 1 layer) to the last edgel (the i = l layer) of the segment, i.e. from a node in the first layer to a node in the last layer. In other words, once the network has been created, finding the solution is a shortest path problem.
Various shortest path algorithms have been developed and are well known, and any efficient shortest path algorithm can be used that finds the shortest path from a node to another node, or from a node to all other nodes. It has been shown that the so-called linked double ended queue (L-deque) data structure and the peculiar label updating policy designed by Pape, disclosed as Algorithm 562 in ACM Transactions on Mathematical Software, Vol. 6, p. 450, 1980, is one of the fastest on layered graphs and is also one of the easiest to implement. The algorithm is well known and no further explanation is required, but the reasons why this algorithm is chosen for stereo matching are as follows. The memory requirement of the L-deque algorithm is 4n + 2m, where n is the number of nodes and m is the number of arcs. The computational cost is approximately linear with the number of layers in the graph. There is no threshold value to be adjusted, as there is in some of the other algorithms for layered graphs. Also, we have found by experimentation that the algorithm is inherently capable of handling disparity discontinuities in edge segments resulting from depth discontinuities in a scene. This enables a continuous segment to be broken up, or several short segments to be joined together arbitrarily, for the purpose of processing, though a minimum of about 10 edgels is required to ensure correct correspondence.

The L-deque algorithm calculates the shortest path from a chosen node to all other nodes in the network. The algorithm would therefore have to be run p1 times to obtain the shortest of the p1 shortest paths, at a computational cost of p1 * t, where t is the time required to calculate the shortest path from a node in the first layer to the nodes in the last layer. To avoid this, as represented in Figure 6 of the accompanying drawings, a dummy 0 th layer and a dummy (l+1) th layer, with one node in each layer, can be introduced into the network. Introducing these dummy layers increases the network complexity by two nodes and by p1 + pl arcs, but reduces the number of shortest path calculations to one. This, in general, results in a considerable reduction of the computational cost.
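A compact sketch of this optimization is given below. It builds the layered network implicitly from per-transition costs and finds the minimum-cost route with a plain dynamic-programming sweep rather than Pape's L-deque algorithm; on a layered graph both return the same route, and the dummy first and last layers are implied by starting all first-layer labels at zero and taking the cheapest last-layer label. The cost arrays themselves are assumed to have been computed from the epipolar distances and disparity changes discussed above.

```python
import numpy as np

def match_segment(costs):
    """Choose one candidate per edgel along a segment.

    costs[i] is a (p_i, p_{i+1}) array of transition costs between the
    candidates of edgel i and those of edgel i+1."""
    label = np.zeros(costs[0].shape[0])       # best path cost to each node
    back = []                                 # predecessor choice per layer
    for w in costs:
        total = label[:, None] + w            # path cost via each predecessor
        back.append(np.argmin(total, axis=0))
        label = np.min(total, axis=0)
    route = [int(np.argmin(label))]           # cheapest node in last layer
    for b in reversed(back):                  # trace the route back
        route.append(int(b[route[-1]]))
    return list(reversed(route))              # candidate index per edgel

# Example: 3 edgels with 2, 3 and 2 candidates respectively
costs = [np.array([[1.0, 5.0, 2.0],
                   [4.0, 1.0, 3.0]]),
         np.array([[2.0, 6.0],
                   [0.5, 2.0],
                   [3.0, 1.0]])]
print(match_segment(costs))                   # -> [1, 1, 0]
```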
Once corresponding edgels have been identified between images, edgels are then grouped together by an 8-connectedness criterion using an iterative edge labelling algorithm. In this way, segments of edges are obtained and a matching algorithm is then used to obtain a sparse 3D map of the scene. The matching algorithm is shown in Figure 7 of the accompanying drawings and is self explanatory.
During the matching of a segment or a subsegment, only information about the matching edgels is needed. Therefore, the algorithm can be run on a network of processors in parallel and no communication is necessary between processors. A group of edgels forming a matching segment can come from a single continuous edge segment, several concatenated edge segments, or a section of a long edge segment. The computation time of the algorithm is approximately linear with the length of the matching segment. The lengths of all the matching segments can be made the same for each processor so that the time taken for matching on all processors is approximately similar.
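Because each segment is matched from its own edgels alone, the segments can simply be farmed out to separate processes; the sketch below uses Python's standard process pool purely as an illustration of this independence, with match_segment standing for the per-segment routine sketched earlier.

```python
from concurrent.futures import ProcessPoolExecutor

def match_segments_parallel(segments, workers=4):
    """Match each edge segment independently; no communication between
    processes is needed.  The pool size is arbitrary."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(match_segment, segments))
```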
Having thus described the present invention by reference to a preferred embodiment it is to be well understood that the embodiment in question is exemplary only and that modifications and variations such as will occur to those possessed of appropriate knowledge and skills may be made without departure from the spirit and scope of the invention as set forth in the appended claims and equivalents thereof.
Claims (23)
1. A method of matching corresponding points in at least two different images representing a scene as captured by a camera, each image having an associated camera matrix defining parameters related to the capturing of the image by the camera, the method comprising the steps of selecting a point in a first one of the images; calculating directly from the camera matrices of the first image and of a second one of the images the position of an epipolar line; and identifying in the second image points intersected by the epipolar line as potential matches to the selected point in the first image.
2. A method as claimed in claim 1, further comprising the steps of calculating for a third image from appropriate camera matrices an epipolar line corresponding to the selected point in the first image and an epipolar line corresponding to the epipolar line calculated in the second image, and identifying as a likely potential match to the selected point in the first image the point at which the two epipolar lines intersect in the third image.
3. A method as claimed in claim 1 or 2, further comprising the step of applying further constraints to the points identified as potential matches in order to facilitate the identification of potential matches in the or each other image to the selected point in the first image.
4. A method as claimed in claim 3, wherein the further constraints include comparing at least one of the local edge orientation, the intensity normal and/or the surrounding colour of both the selected point and the potential matches in the or each other image.
5. A method as claimed in any preceding claim, further comprising the step of selecting in the or each image from the potential matches a point as the best match to the selected point in the first image by applying to each potential match a cost function and using a shortest path algorithm to identify the best match.
6. A method as claimed in any preceding claim, wherein the selected point is chosen as an edgel detected in the first image.
7. A method as claimed in any preceding claim, wherein only edgels detected in the second image are identified as points intersected by the epipolar line.
8. A method as claimed in any preceding claim, wherein the camera matrix is calculated for each captured image after an initial calibration of the camera.
9. A method of matching corresponding points in a plurality of images representing different aspects of a scene, the method comprising selecting points in one image, calculating an epipolar line in each other image for each selected point, identifying in the or each other image points which are potential matches to each of the selected points as points on or near the epipolar lines, applying to each identified point in each other image a weighting representing the likelihood of the identified points matching the selected points, and selecting as the best match to the selected points the combination of points which results in a minimal weighting summation.
10. A method of matching image points in at least two different images of a scene, the method comprising selecting an image point in one image and candidate matching image points in another image and applying a shortest path algorithm to the candidate matching points thereby to identify a single point in the other image corresponding to the selected point in the one image.
11. A method as claimed in claim 10, wherein candidate points in the other image are identified as edge elements lying on or near to an epipolar line calculated from an image capture matrix defining parameters associated with the capturing of the other image.
12. A method as claimed in claim 10 or 11 wherein candidate points are selected by comparing characteristics of the point in the one image with characteristics of points in the other image.
13. A method as claimed in claim 12, wherein the compared characteristics include the colour vector of the selected point in the one image and the colour vectors of points in the other image.
14. A method as claimed in claim 13, wherein the colour vector is determined by applying an orientation mask to image points in the vicinity of the point in each image under comparison.
15. A method as claimed in claim 12 or 13 or 14 wherein the compared characteristics include the intensity normal of the selected point in the one image and the intensity normal of points in the other image.
16. A method as claimed in claim 15, wherein the intensity normal is calculated from a gradient operator.
17. A method as claimed in any of claims 12 to 16 as dependent on claim 11, wherein the compared characteristics include the local edge orientation in the vicinity of the point in one image and in the vicinity of points in the other image.
18. A method as claimed in any of claims 10 to 17, wherein the shortest path algorithm applies to each candidate matching point a weighting representing the likelihood of the candidate point matching the selected point and selecting as the best match to the selected point the point which results in a minimal weighting.
19. A method as claimed in claim 18, wherein the weighting that is applied is dependent on the compared characteristics as claimed in any of claims 12 to 17.
20. An image system in which an electronic camera acquires images representing a scene, the system comprising defining means for defining matrices representing parameters associated with the acquisition of the images by the camera; image selecting means for selecting from the acquired images a first image and a second image; point selecting means for selecting a point in the first image; calculating means for calculating directly from the matrices associated respectively with the first and the second image the position of an epipolar line; and identifying means for identifying in the second image points intersected by the epipolar line as potential matches to the selected point in the first image.
21. An image system as claimed in claim 20, further comprising comparing means for comparing information relating to features of the selected point in the first image with like features of the identified points in the second image.
22. A method substantially as herein described with reference to the accompanying drawings.
23. A system substantially as herein described with reference to the accompanying drawings.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU79899/91A AU7989991A (en) | 1990-05-29 | 1991-05-29 | Machine vision stereo matching |
US07/966,181 US5432712A (en) | 1990-05-29 | 1991-05-29 | Machine vision stereo matching |
EP91910391A EP0551274A1 (en) | 1990-05-29 | 1991-05-29 | Determination of mutually corresponding features in associated two-dimensional partial images of a three-dimensional scene |
PCT/GB1991/000856 WO1991019265A1 (en) | 1990-05-29 | 1991-05-29 | Machine vision stereo matching |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB909011922A GB9011922D0 (en) | 1990-05-29 | 1990-05-29 | Machine vision stereo matching |
Publications (2)
Publication Number | Publication Date |
---|---|
GB9028124D0 GB9028124D0 (en) | 1991-02-13 |
GB2244621A true GB2244621A (en) | 1991-12-04 |
Family
ID=10676701
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB909011922A Pending GB9011922D0 (en) | 1990-05-29 | 1990-05-29 | Machine vision stereo matching |
GB9028124A Withdrawn GB2244621A (en) | 1990-05-29 | 1990-12-28 | Machine vision stereo matching |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB909011922A Pending GB9011922D0 (en) | 1990-05-29 | 1990-05-29 | Machine vision stereo matching |
Country Status (1)
Country | Link |
---|---|
GB (2) | GB9011922D0 (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0680014A2 (en) * | 1994-04-25 | 1995-11-02 | Canon Kabushiki Kaisha | Image processing method and apparatus |
EP0680019A2 (en) * | 1994-04-19 | 1995-11-02 | Canon Kabushiki Kaisha | Image processing method and apparatus |
EP0684585A2 (en) * | 1994-04-22 | 1995-11-29 | Canon Kabushiki Kaisha | Image forming method and apparatus |
EP0689167A1 (en) * | 1994-06-24 | 1995-12-27 | Canon Kabushiki Kaisha | Image processing apparatus and method |
EP0707287A3 (en) * | 1994-10-13 | 1996-07-03 | Canon Kk | Image data processing apparatus and image reproduction apparatus |
EP0707288A3 (en) * | 1994-10-14 | 1996-07-03 | Canon Kk | Image processing method and apparatus |
WO1996035102A1 (en) * | 1995-05-04 | 1996-11-07 | Siemens Aktiengesellschaft | Process for selecting an image from a collection of images for the photogrammetric calculation of the spatial co-ordinates of a point on an object |
WO1999063306A1 (en) * | 1998-05-29 | 1999-12-09 | Central Research Laboratories Limited | Method and apparatus for automated measurement |
EP1065628A1 (en) * | 1999-06-21 | 2001-01-03 | Institut für Neurosimulation und Bildtechnologien GmbH | Optical 3-D measurement with several approximation points |
DE19621612C2 (en) * | 1996-05-31 | 2001-03-01 | C Vis Comp Vision Und Automati | Device for monitoring a section of track in a train station |
US6516099B1 (en) | 1997-08-05 | 2003-02-04 | Canon Kabushiki Kaisha | Image processing apparatus |
US6647146B1 (en) | 1997-08-05 | 2003-11-11 | Canon Kabushiki Kaisha | Image processing apparatus |
US6668082B1 (en) | 1997-08-05 | 2003-12-23 | Canon Kabushiki Kaisha | Image processing apparatus |
WO2004012147A1 (en) * | 2002-07-30 | 2004-02-05 | Mitsubishi Denki Kabushiki Kaisha | Method for linking edges in stereo images into chains |
GB2483434A (en) * | 2010-08-31 | 2012-03-14 | Sony Corp | Detecting stereoscopic disparity by comparison with subset of pixel change points |
EP2810247B1 (en) * | 2012-01-31 | 2018-04-25 | Sony Mobile Communications Inc. | Method and electronic device for creating a combined image |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1363207A (en) * | 1971-12-08 | 1974-08-14 | Bendix Corp | Stereoplotting apparatus for correlating image points disposed along epipolar lines |
GB1473017A (en) * | 1974-04-12 | 1977-05-11 | Bendix Corp | Single photo epipolar scan instrument |
GB2129640A (en) * | 1982-10-28 | 1984-05-16 | Audim Sa | Binocular vision correlators |
GB2147762A (en) * | 1983-10-10 | 1985-05-15 | Audim Sa | An artifical binocular vision system |
EP0158984A2 (en) * | 1984-04-13 | 1985-10-23 | Hitachi, Ltd. | Method for determining apair of corresponding points between images |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0680019A3 (en) * | 1994-04-19 | 1996-01-17 | Canon Kk | Image processing method and apparatus. |
EP0680019A2 (en) * | 1994-04-19 | 1995-11-02 | Canon Kabushiki Kaisha | Image processing method and apparatus |
US6233004B1 (en) | 1994-04-19 | 2001-05-15 | Canon Kabushiki Kaisha | Image processing method and apparatus |
US6628820B2 (en) | 1994-04-22 | 2003-09-30 | Canon Kabushiki Kaisha | Image forming method and apparatus |
EP0684585A3 (en) * | 1994-04-22 | 1996-04-17 | Canon Kk | Image forming method and apparatus. |
US6263100B1 (en) | 1994-04-22 | 2001-07-17 | Canon Kabushiki Kaisha | Image processing method and apparatus for generating an image from the viewpoint of an observer on the basis of images obtained from a plurality of viewpoints |
EP0684585A2 (en) * | 1994-04-22 | 1995-11-29 | Canon Kabushiki Kaisha | Image forming method and apparatus |
EP0680014A2 (en) * | 1994-04-25 | 1995-11-02 | Canon Kabushiki Kaisha | Image processing method and apparatus |
EP0680014A3 (en) * | 1994-04-25 | 1996-12-04 | Canon Kk | Image processing method and apparatus. |
US5937105A (en) * | 1994-04-25 | 1999-08-10 | Canon Kabushiki Kaisha | Image processing method and apparatus |
US6023276A (en) * | 1994-06-24 | 2000-02-08 | Canon Kabushiki Kaisha | Image processing apparatus and method for forming a three-dimensional display |
EP0689167A1 (en) * | 1994-06-24 | 1995-12-27 | Canon Kabushiki Kaisha | Image processing apparatus and method |
US5764236A (en) * | 1994-10-13 | 1998-06-09 | Canon Kabushiki Kaisha | Image data processing apparatus and image reproduction apparatus |
EP0707287A3 (en) * | 1994-10-13 | 1996-07-03 | Canon Kk | Image data processing apparatus and image reproduction apparatus |
US6608622B1 (en) | 1994-10-14 | 2003-08-19 | Canon Kabushiki Kaisha | Multi-viewpoint image processing method and apparatus |
EP0707288A3 (en) * | 1994-10-14 | 1996-07-03 | Canon Kk | Image processing method and apparatus |
WO1996035102A1 (en) * | 1995-05-04 | 1996-11-07 | Siemens Aktiengesellschaft | Process for selecting an image from a collection of images for the photogrammetric calculation of the spatial co-ordinates of a point on an object |
DE19621612C2 (en) * | 1996-05-31 | 2001-03-01 | C Vis Comp Vision Und Automati | Device for monitoring a section of track in a train station |
US6516099B1 (en) | 1997-08-05 | 2003-02-04 | Canon Kabushiki Kaisha | Image processing apparatus |
US6647146B1 (en) | 1997-08-05 | 2003-11-11 | Canon Kabushiki Kaisha | Image processing apparatus |
US6668082B1 (en) | 1997-08-05 | 2003-12-23 | Canon Kabushiki Kaisha | Image processing apparatus |
WO1999063306A1 (en) * | 1998-05-29 | 1999-12-09 | Central Research Laboratories Limited | Method and apparatus for automated measurement |
EP1065628A1 (en) * | 1999-06-21 | 2001-01-03 | Institut für Neurosimulation und Bildtechnologien GmbH | Optical 3-D measurement with several approximation points |
WO2004012147A1 (en) * | 2002-07-30 | 2004-02-05 | Mitsubishi Denki Kabushiki Kaisha | Method for linking edges in stereo images into chains |
GB2483434A (en) * | 2010-08-31 | 2012-03-14 | Sony Corp | Detecting stereoscopic disparity by comparison with subset of pixel change points |
US8611641B2 (en) | 2010-08-31 | 2013-12-17 | Sony Corporation | Method and apparatus for detecting disparity |
EP2810247B1 (en) * | 2012-01-31 | 2018-04-25 | Sony Mobile Communications Inc. | Method and electronic device for creating a combined image |
Also Published As
Publication number | Publication date |
---|---|
GB9011922D0 (en) | 1990-07-18 |
GB9028124D0 (en) | 1991-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5432712A (en) | Machine vision stereo matching | |
US5818959A (en) | Method of producing a three-dimensional image from two-dimensional images | |
US20200126289A1 (en) | Method and system for creating a virtual 3d model | |
US7711182B2 (en) | Method and system for sensing 3D shapes of objects with specular and hybrid specular-diffuse surfaces | |
US8385630B2 (en) | System and method of processing stereo images | |
US12118668B2 (en) | Scene representation using image processing | |
EP0526938A2 (en) | Method and apparatus for determining the distance between an image and an object | |
GB2244621A (en) | Machine vision stereo matching | |
US20150049937A1 (en) | Method and apparatus for processing images | |
O'Byrne et al. | A stereo‐matching technique for recovering 3D information from underwater inspection imagery | |
Lane et al. | Tutorial: Overview of stereo matching research | |
JPH04130587A (en) | Three-dimensional picture evaluation device | |
Kutulakos | Shape from the light field boundary | |
EP4372687A1 (en) | Training method for object pose estimation | |
JP2807137B2 (en) | 3D shape detection method | |
Hatzitheodorou et al. | Stereo matching using optic flow | |
JP3275252B2 (en) | Three-dimensional information input method and three-dimensional information input device using the same | |
Kovacs et al. | Edge detection in discretized range images | |
Blake | Parallel computation in low-level vision | |
Chen et al. | An integrated approach to stereo matching, surface reconstruction and depth segmentation using consistent smoothness assumptions | |
Baha et al. | Neural disparity map estimation from stereo image | |
Zabulis et al. | Efficient, precise, and accurate utilization of the uniqueness constraint in multi-view stereo | |
Huang et al. | Stereo vision using a microcanonical mean field annealing neural network | |
Schütz | Geometric point matching of free-form 3D objects | |
Tran et al. | A simple model generation system for computer graphics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |