EP1934936A2 - A method of segmenting an image - Google Patents
A method of segmenting an image
Info
- Publication number
- EP1934936A2 (application EP06806125A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- area
- boundary
- digital data
- disocclusion
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/755—Deformable models or variational models, e.g. snakes or active contours
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Definitions
- the present invention relates to a method of segmenting an image including a background region and an area of interest.
- Inpainting is a technique that originates from the retouching of paintings, where one wants to recreate lost or damaged structures in a legible way.
- Digital inpainting uses spatial or frequency information to restore partially damaged/removed images.
- Geodesic active contours introduce a parameterisation independent formulation. All these models deal only with contours, not with the regions they separate.
- contour based active contour methods can provide a segmentation of the circle. Although sensitive to the initialisation, region based active contours using statistical parameters of intensity/colour distribution for each region will fail to isolate the circle in the flag from the rest of the image, since the two regions have the same statistics.
- in Figure 1B, the pseudo-classification image, the difficulty comes from the barely visible edge information.
- the present inventors have recognised that it would be desirable to provide an alternative method of segmentation, using region based active contours with image inpainting.
- a method of segmenting an image having a background region and an area of interest, wherein the area of interest occludes part of the background region, comprising the steps of: taking a starting set of digital data representative of the image; (1) estimating a boundary of the area of interest;
- the result representative of the degree of disocclusion is additionally determined by the size of the area within the estimated boundary.
- the result representative of the degree of disocclusion is biased in a positive way in response to an increase in the difference between the starting set of digital data and the inpainted set of digital data and a decrease in the area within the estimated boundary.
- the step of moving the position of the estimated boundary comprises using information derived from the result representative of the degree of disocclusion to determine how the position of the estimated boundary should be moved.
- the step of moving the position of the estimated boundary comprises moving portions of the boundary in subsequent iterations in a direction determined by the factor that yields improvement in the result representative of the degree of disocclusion.
- the step of moving the position of the estimated boundary in subsequent iterations comprises moving a portion of the boundary to increase the area within the boundary in response to an increase in the result representative of the degree of disocclusion resulting from an increase in the difference between the starting set of digital data and the inpainted set of digital data.
- the step of moving the position of the estimated boundary in subsequent iterations comprises moving a portion of the boundary to increase the area within the boundary in response to an increase in the result representative of the degree of disocclusion resulting from an increase in the previous iteration of the difference between the starting set of digital data and the inpainted set of digital data.
- the result representative of the degree of disocclusion is a generalised statistical moment of the difference between the starting set of digital data and the inpainted set of digital data on the area of interest.
- the present invention also extends to a method of computing the shape and size of a calcification of a blood vessel by processing an image of at least a part of the blood vessel containing said calcification, which method comprises: taking a starting set of digital data representative of an image of at least part of a blood vessel containing an area of calcification, said area of calcification being set against a background area;
- the present invention further extends to a method of reconstructing the background of an image having an area of interest in the foreground, comprising: taking a starting set of digital data representative of the image;
- the present invention also extends to a pre-programmed computational means and an instruction set for performing the above described invention.
- Figures 1A and 1B show two examples where segmentation is required for inpainting;
- Figures 2A and 2B show an example of the segmentation required in the present invention;
- Figure 3 shows the results of a first experiment where the four square blocks on white are the background
- Figure 4 shows the results of a second experiment where an image in the foreground is isolated from the background image to enable segmentation;
- Figure 5 shows the image of the second experiment being defined from a first starting contour;
- Figure 6 shows the various stages of the present invention starting with an image having a background region with an occluding area and ending with an image where the occluding area has been segmented;
- Figures 7A to 7I illustrate an example of the present invention when used to determine the shape and size of an area of calcification in a blood vessel;
- Figures 8A to 8I show the example of Figures 7A to 7I, illustrating the progression of the area within the boundary;
- Figures 9A to 91 show the example of Figures 7A and 71 illustrating the background image when the area within the boundary is inpainted;
- Figures 10A to 10D illustrate an example of the present invention starting with a small misplaced initial contour;
- Figures 11A to 11F illustrate an example of the present invention starting with an initially manually placed contour derived from the detection of regions of calcification in an x-ray;
- Figure 13 illustrates the present invention when used to remove an image of a bus from a background image.
- the present invention introduces a novel methodology within the framework of active contours. This belongs to the region based family of active contours, but instead of considering that the image domain is partitioned into two (or several regions), the present invention consists in considering the following problem:
- an area is segmented by differentiating between an area of foreground having different characteristics to the background and the area of background that surrounds the foreground.
- the image is segmented by differentiation between the foreground and the background as a whole - as if the foreground did not exist.
- the foreground that requires segmentation may be considered to be an area of interest that occludes or is set against the background.
- An assumption is made that an image consists of a background region 2 and an area of interest 4, where the area of interest occludes part of the background region 2.
- An initial estimate is made of the location of the boundary 6 of the area of interest located in the background region.
- the background region is assigned a positive function and the area of interest is assigned a negative function.
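This positive/negative assignment is the usual level-set convention for representing an evolving boundary. A minimal sketch under that assumption (the function name and the circular initial boundary are illustrative, not taken from the patent):

```python
import numpy as np

def init_level_set(shape, center, radius):
    """Signed function over the image grid: positive on the background,
    negative inside the estimated area of interest (here a circle)."""
    ys, xs = np.indices(shape)
    dist = np.hypot(ys - center[0], xs - center[1])
    return dist - radius  # > 0 outside the boundary, < 0 inside

phi = init_level_set((64, 64), center=(32, 32), radius=10.0)
area_of_interest = phi < 0  # current estimate of the occluding area
```

The zero level set of `phi` is the estimated boundary; moving the boundary then amounts to updating `phi`.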
- Steps 4 and 5 are repeated such that the assumed area of interest is once again inpainted using information from the background region and the intensity values of the original image and the inpainted image are compared to derive a disocclusion quality measure.
- Steps 4 to 7 are repeated until the disocclusion quality measure reaches a maximum, thus fulfilling the convergence criteria described below. At this time, the assumed boundary should correlate with the actual boundary of the area of interest.
- the image may then be segmented into two separate and distinct images (as shown in Figure 2A) as required.
- disocclusion is used to describe the removal of an area of interest occluding part of the background region.
- the disocclusion quality measure gives a numerical indication of how successful the boundary estimation and therefore the inpainting has been.
- the present invention is described with reference to "an image". It will be apparent that "taking an image" involves starting with an image previously acquired by some imaging means, for example, an x-ray image.
- the starting point of the present invention is digital information representing an image.
- D be the image domain
- u_b the background image defined on D.
- u_0 the observed image, a composition of (u_b, u_f).
- the foreground region should be relatively small.
- the true region Ω should be an extremal point of J(Ω) and we therefore search for a necessary condition for the functional J(Ω) to achieve an extremum.
- the next step is to consider the case of noisy additive signals u_b + u_f + η, η being some noise, as it would be in the case of, for example, X-rays or reflection.
- the resulting J(Ω) can be seen as a "generalised moment" of u_0 − u_b on Ω, a moment that we want to maximise.
- J(Ω) is biased by the two following properties:
- the disocclusion quality measure seeks to optimise itself by finding the maximum difference in intensity between the original image and the inpainted image within the minimum possible area.
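A compact way to express such a measure (a sketch only: the exact functional is not fully legible in this text, so the quadratic difference and the area exponent `p` are assumptions guided by the experiments described below):

```python
import numpy as np

def disocclusion_measure(original, inpainted, mask, p=0.55):
    """Grows with the intensity difference between the original and
    inpainted data inside the mask, and shrinks as the masked area
    grows (area penaliser exponent p, cf. the experiments)."""
    area = mask.sum()
    if area == 0:
        return 0.0
    squared_diff = ((original - inpainted) ** 2)[mask].sum()
    return squared_diff / area ** p
</n```

With this form, a tight mask around a genuinely occluding object scores higher than an oversized mask producing the same intensity difference.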
- the present invention uses the computation of J′(Ω) using shape derivative tools from Aubert et al. in "Image segmentation using active contours: calculus of variations or shape gradients?" (SIAM Journal of Applied Mathematics, 63(6):2128-2154, 2003).
- Γ the boundary of Ω, N the inward normal to Γ.
- the Gâteaux derivative of J(Ω) in the direction of V is, using the chain rule:
- the next step is to apply the inpainting operation and compute the terms related to the shape derivative.
- F_uu, denoting the partial derivative of F_u with respect to its first variable, is made explicit below as a linear elliptic operator, while F_Ω denotes the shape derivative of F_u and can be computed as:
- the Lagrangian L being defined by either (3) or (2), one gets:
- the goal is to maximise the "disocclusion criterion".
- a first order space convex scheme is used to solve this equation.
- the time step δt is chosen at each iteration as 1/‖F‖∞.
- the discretisation used for the inpainting I(u_0, Ω) follows closely the one proposed by Chan and Shen in "Mathematical models for local non-texture inpainting" (SIAM Journal of Applied Mathematics, 62(3):1019-1043, 2001).
- the digital inpainting is computed using a Full Multigrid Scheme, with one V-cycle at each resolution where each level of the V-cycle implements a Full Approximation Scheme (FAS) (following the description given in "A Multigrid tutorial” (SIAM)).
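The multigrid TV inpainting itself is involved; as a much simpler stand-in that conveys the idea (filling the masked area smoothly from the surrounding background), one can iterate a neighbour-averaging (harmonic) scheme. This is an illustrative substitute, not the patent's actual solver:

```python
import numpy as np

def harmonic_inpaint(image, mask, iters=500):
    """Fill the masked region by repeatedly averaging the four
    neighbours (Jacobi iteration on Laplace's equation). Pixels
    outside the mask are kept fixed as boundary data."""
    u = image.astype(float).copy()
    u[mask] = u[~mask].mean()  # crude initial guess
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]  # update only the inpainted pixels
    return u
```

Note that `np.roll` wraps around the image edges, so this sketch assumes the mask does not touch the border; the multigrid FAS scheme cited above serves to accelerate exactly this kind of relaxation.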
- the Grid transfer operations use the ones proposed in "Variational Optical Flow
- the initial estimate for the boundary to eventually define the contour of the area of interest is in the shape of a giraffe.
- the invention relies on inpainting an area within the estimated boundary using information from the remainder of the background. Therefore, in this example, information derived from the area outside of the giraffe is used to inpaint the area inside the giraffe. Following this, the disocclusion quality measure is derived based upon the difference in intensity values between the original image and the inpainted image with reference to the size of the area within the giraffe.
- the boundary is deemed to be split into portions. For each portion a decision is made based on information derived from the disocclusion quality measure. For example, we will first consider the left leg 12 of the giraffe. It is clear from the image that this left leg does not cover the area of interest. Therefore, a decision will be made to have a first attempt at moving the portion of boundary at the bottom of the left leg so that the overall area of the giraffe increases. A move of this nature will result in the same difference of intensity values between the original image and the inpainted image as in the previous iteration. As the area within the boundary has increased, the overall disocclusion quality measure will have decreased. Therefore, in the next iteration, it will be apparent that this particular portion of the boundary should be moved to result in a decrease in the overall area of the giraffe.
- This iterative process will continue for this portion of the boundary until a move of this portion of boundary results in no increase in the difference of intensity values between the original image and the inpainted image, i.e. this portion of the boundary now falls outside the area of interest, and therefore the overall result will be a decrease of the disocclusion quality measure.
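The decision process just described can be sketched as a greedy loop: toggle pixels in a narrow band around the current boundary and keep only the toggles that improve the disocclusion measure. All helper names and the toy operators here are illustrative assumptions, not the patent's actual gradient-driven evolution:

```python
import numpy as np

def dilate(mask):
    # 4-neighbour binary dilation via shifts (assumes mask away from borders)
    return (mask | np.roll(mask, 1, 0) | np.roll(mask, -1, 0) |
            np.roll(mask, 1, 1) | np.roll(mask, -1, 1))

def toy_inpaint(img, mask):
    out = img.astype(float).copy()
    out[mask] = img[~mask].mean()  # fill the occlusion with the background mean
    return out

def toy_measure(orig, inp, mask, p=0.55):
    a = mask.sum()
    return 0.0 if a == 0 else ((orig - inp) ** 2)[mask].sum() / a ** p

def evolve_mask(original, mask, n_passes=5):
    """Greedy boundary evolution: a pixel toggle near the boundary is
    kept only if it increases the disocclusion measure."""
    mask = mask.copy()
    best = toy_measure(original, toy_inpaint(original, mask), mask)
    for _ in range(n_passes):
        band = dilate(mask) & dilate(~mask)  # narrow band around the boundary
        for y, x in np.argwhere(band):
            trial = mask.copy()
            trial[y, x] = ~trial[y, x]
            score = toy_measure(original, toy_inpaint(original, trial), trial)
            if score > best:  # keep only improving moves
                mask, best = trial, score
    return mask
```

On a toy image with a bright 2×2 occlusion, a one-pixel seed grows to cover the occlusion exactly, mirroring the leg-of-the-giraffe reasoning above.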
- the estimated boundary is gradually adapted to fit the contour of the area of interest as shown in Figure 5 (middle image).
- Results are given below from experiments performed on synthetic as well as real data. They present an increasing level of complexity in the image content.
- experiment 1 is based on an image of four black squares on a white background.
- the two areas to be segmented are the white background, following its reconstruction, and the foreground of four black squares.
- the first row shows a snapshot of the contour evolution at iterations 1, 7, 15 and 30. From this it can be seen that as the iterations continue, the contours more clearly define the black squares in the foreground.
- the second row shows the corresponding domain and the last row shows the inpainting result.
- the circles in row 1 provide the starting point for finding the contours of the areas of segmentation.
- the contours are then iteratively adapted until the foreground areas have been depicted.
- the final row shows the result of inpainting within the segmentation.
- experiment 2 is based on a dark non convex object on a light background with 30% added Gaussian noise.
- ⁇ - 0.1, p 0.55.
- the first row shows a snapshot of the contour evolution at iterations 1, 10, 20 and 30. From this it can be seen that gradually, the mathematical formulae result in the contour clearly defining the foreground shape.
- the second row shows the corresponding domains, and the last row the inpaintings.
- experiment 3 uses the same image as experiment 2. It illustrates some stability with respect to the initial contour, as well as the role of the area penaliser exponent p in L .
- the ⁇ weight in the inpainting was set of 0.1.
- the initial contour is in the shape of a giraffe, but this is gradually adapted to fit the shape of the foreground object.
- experiment 4 shows a "pseudo classification" image.
- This image could represent an image typically found in medical imaging, for example, of an area of calcification.
- the first row shows the original object and the pseudo classification.
- the two next rows show snapshots of contour evolution at different iterations, for two different initialisations.
- the second initialisation, while "more creative", is also closer to the true contour and allows for a much faster convergence (260 iterations in the first case, 50 in the second).
- Figures 7A to 7I, 8A to 8I and 9A to 9I show various iterations of the contour evolution, starting with an initial random estimate of the boundary of the area of calcification. As can be seen, as the iterations progress, the boundary is adapted to fit the contour of the area of calcification. This sort of technique would be useful for locating and assessing calcifications found in blood vessels, for example, the aorta.
- Figures 8A to 8I show corresponding images of the areas within the boundary.
- Figures 9A to 9I show the images of the background with the area within the boundary inpainted using information from the remainder of the background. As can be seen in Figure 9I, the area of calcification is no longer visible.
- Figures 10A to 10D illustrate the robustness of the segmentation algorithm described above.
- the initial boundary provided at the beginning of the algorithm is relatively small and misplaced in comparison to the area of interest.
- this initial small curve evolves into a meaningful boundary that can subsequently be used for segmentation.
- Figures 11A to 11F illustrate the results of a segmentation initialised by a calcification detection method.
- the initial contours shown in Figures 11B and 11E are applied manually following the detection of areas of calcification using the calcification detection method described by de Bruijne in "Shape Particle Guided Tissue Classification". These contours are used as the initial boundaries in the above defined algorithm, which is subsequently executed to result in the boundary locating the edge of the area of interest, in this case an area of calcification.
- the disocclusion quality measure continues increasing until the contour fits the area to be segmented - based on the calculations made following inpainting of the foreground region.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
Abstract
A method of segmenting an image having a background region and an area of interest, wherein the area of interest is surrounded by part of the background region, which method comprises: taking a starting set of digital data representative of the image; (1) estimating a boundary of the area of interest; (2) inpainting the area of interest within the boundary using information from the remainder of the background region; (3) computing the difference between the starting set of digital data and the inpainted set of digital data to obtain a result representative of the degree of disocclusion of the background region; (4) moving the position of the estimated boundary to produce a further generation estimated boundary; (5) inpainting the area within the further generation estimated boundary; and (6) computing the difference between the starting set of digital data and the inpainted set of digital data; continuing iterations of steps (4), (5) and (6) until a maximum degree of disocclusion is obtained; and using information of the boundary of the area of interest at the time the maximum degree of disocclusion is obtained to segment the image.
Description
A METHOD OF SEGMENTING AN IMAGE
The present invention relates to a method of segmenting an image including a background region and an area of interest.
Inpainting is a technique that originates from the retouching of paintings, where one wants to recreate lost or damaged structures in a legible way. Digital inpainting uses spatial or frequency information to restore partially damaged/removed images.
Various inpainting techniques are known that enable image restoration, in particular for photographs, videos and films.
Active contours, or snakes, have been used extensively in computer vision and medical imaging, for the purpose of segmentation, in order to overcome the locality problem of edge detectors. They are curves with built-in regularity properties and preferences for edges in an image. Although very simple to implement, they suffer from initialisation problems that necessitate reparameterisation.
Several approaches have previously been suggested in order to overcome initialisation problems, including balloon forces and gradient vector flows. Geodesic active contours introduce a parameterisation independent formulation. All these models deal only with contours, not with the regions they separate.
Based on a simplification of the Mumford-Shah segmentation described in "Communications on Pure and Applied Mathematics (42:577-684, 1989)", Chan and Vese in "Active contours without edges" (IEEE Transactions on Image Processing) proposed a region based algorithm that leads to a contour evolution coupled with the estimation of the mean values in the regions delimited by this contour. More complex statistical descriptors have also been proposed instead of the mean. In a series of papers, Paragios and Deriche in "Geodesic active regions: a new paradigm to deal with frame partition problems in computer vision" (Journal on Visual Communication and Image Representation, 13(1/2):249-268, March/June 2002) proposed a new paradigm called Geodesic Active Regions where both contour based and region based terms are used. Many variations on these ideas have been proposed to tackle a variety of situations encountered in computer vision and medical imaging.
Usual region based active contours assume that the image is divided into several semantically meaningful regions and attempt to differentiate them by dynamically recovering statistically optimal parameters for each region. For example, a prior art method is illustrated in Figure 1, where Figure 1A represents the flag of the Greenland territory and Figure 1B shows an object added on a smooth background via addition of intensities, similar to the presence of transparent layers, known as "pseudo-classifications".
Referring to Figure 1A, contour based active contour methods can provide a segmentation of the circle. Although sensitive to the initialisation, region based active contours using statistical parameters of intensity/colour distribution for each region will fail to isolate the circle in the flag from the rest of the image, since the two regions have the same statistics. Referring to Figure 1B, the pseudo-classification image, the difficulty comes from the barely visible edge information.
The present inventors have recognised that it would be desirable to provide an alternative method of segmentation, using region based active contours with image inpainting.
According to a first aspect of the present invention, there is provided a method of segmenting an image having a background region and an area of interest, wherein the area of interest occludes part of the background region, comprising the steps of: taking a starting set of digital data representative of the image; (1) estimating a boundary of the area of interest;
(2) inpainting the area of interest within the boundary using information from the remainder of the background region;
(3) computing the difference between the starting set of digital data and the inpainted set of digital data to obtain a result representative of the degree of disocclusion of the background region;
(4) moving the position of the estimated boundary to produce a further generation estimated boundary;
(5) inpainting the area within the further generation estimated boundary; and
(6) computing the difference between the starting set of digital data and the inpainted set of digital data; continuing iterations of steps (4), (5) and (6) until a maximum degree of disocclusion is obtained, using information of the boundary of the area of interest at the time the maximum degree of disocclusion is obtained to segment the image.
It will be appreciated that a maximum degree of disocclusion would result in optimum information about the boundary of the area of interest. However, a useful segmentation can also be obtained if steps (4), (5) and (6) are repeated until the degree of disocclusion approaches a maximum.
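Steps (1) to (6) can be summarised as the following loop (a sketch only: `estimate_boundary`, `inpaint`, `disocclusion` and `move_boundary` are hypothetical stand-ins for the operators described in this document):

```python
def segment_by_disocclusion(u0, estimate_boundary, inpaint, disocclusion,
                            move_boundary, max_iter=100):
    """Iterate steps (4)-(6) until the disocclusion measure stops
    increasing, i.e. a (possibly approximate) maximum is reached."""
    boundary = estimate_boundary(u0)                            # step (1)
    best = disocclusion(u0, inpaint(u0, boundary), boundary)    # steps (2)-(3)
    for _ in range(max_iter):
        candidate = move_boundary(boundary, u0)                 # step (4)
        score = disocclusion(u0, inpaint(u0, candidate), candidate)  # (5)-(6)
        if score <= best:   # no further improvement: stop and keep boundary
            break
        boundary, best = candidate, score
    return boundary  # boundary at the (approximate) maximum disocclusion
```

The boundary representation is left abstract here; in practice it would be an active-contour/level-set representation as described in this document.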
Preferably, the result representative of the degree of disocclusion is additionally determined by the size of the area within the estimated boundary.
In an embodiment, the result representative of the degree of disocclusion is biased in a positive way in response to an increase in the difference between the starting set of digital data and the inpainted set of digital data and a decrease in the area within the estimated boundary.
Preferably, the step of moving the position of the estimated boundary comprises using information derived from the result representative of the degree of disocclusion to determine how the position of the estimated boundary should be moved.
More preferably, the step of moving the position of the estimated boundary comprises moving portions of the boundary in subsequent iterations in a direction determined by the factor that yields improvement in the result representative of the degree of disocclusion.
In an embodiment, the step of moving the position of the estimated boundary in subsequent iterations comprises moving a portion of the boundary to increase the area within the boundary in response to an increase in the result representative of the degree of disocclusion resulting from an increase in the difference between the starting set of digital data and the inpainted set of digital data.
In an alternative embodiment, the step of moving the position of the estimated boundary in subsequent iterations comprises moving a portion of the boundary to increase the area within the boundary in response to an increase in the result representative of the degree of disocclusion resulting from an increase in the previous iteration of the difference between the starting set of digital data and the inpainted set of digital data.
Preferably, the result representative of the degree of disocclusion is a generalised statistical moment of the difference between the starting set of digital data and the inpainted set of digital data on the area of interest.
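One plausible explicit form of such a moment, consistent with the quadratic difference and the area penaliser exponent $p$ used in the experiments (the precise functional is not fully legible in this text, so this is an illustrative reconstruction):

```latex
J(\Omega) \;=\; \frac{1}{|\Omega|^{\,p}} \int_{\Omega} \bigl( u_0(x) - u_{b,\Omega}(x) \bigr)^{2} \, dx
```

where $u_0$ is the starting data, $u_{b,\Omega}$ the background inpainted over the estimated region $\Omega$, $|\Omega|$ its area, and $0 < p < 1$ biases the measure toward small regions; the boundary is moved so as to maximise $J(\Omega)$.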
The present invention also extends to a method of computing the shape and size of a calcification of a blood vessel by processing an image of at least a part of the blood vessel containing said calcification, which method comprises: taking a starting set of digital data representative of an image of at least part of a blood vessel containing an area of calcification, said area of calcification being set against a background area;
(1) estimating a boundary of the area of calcification;
(2) inpainting the area of calcification within the estimated boundary using information from the remainder of the background area;
(3) computing the difference between the starting set of digital data and the inpainted set of digital data to obtain a result representative of the degree of disocclusion of the background region;
(4) moving the position of the estimated boundary to produce a further generation estimated boundary;
(5) inpainting the area within the further generation estimated boundary; and
(6) computing the difference between the starting set of digital data and the inpainted set of digital data; continuing iterations of steps (4), (5) and (6) until a maximum degree of disocclusion is obtained, using information of the boundary of the calcification at the time of the maximum degree of disocclusion to compute the shape and size of the calcification.
The present invention further extends to a method of reconstructing the background of an image having an area of interest in the foreground, comprising: taking a starting set of digital data representative of the image;
(1) estimating a boundary of the area of interest of the foreground;
(2) inpainting the area of interest within the boundary using information from the remainder of the background region;
(3) computing the difference between the starting set of digital data and the inpainted set of digital data to obtain a result representative of the degree of disocclusion of the background region;
(4) moving the position of the estimated boundary to produce a further generation estimated boundary;
(5) inpainting the area within the further generation estimated boundary; and
(6) computing the difference between the starting set of digital data and the inpainted set of digital data; continuing iterations of steps (4), (5) and (6) until a maximum degree of disocclusion is obtained.
The present invention also extends to a pre-programmed computational means and an instruction set for performing the above described invention.
Embodiments of the present invention will hereinafter be described, by way of example, with reference to the accompanying drawings, in which:
Figures 1A and 1B show two examples where segmentation is required for inpainting;
Figures 2A and 2B show an example of the segmentation required in the present invention;
Figure 3 shows the results of a first experiment on an image of four black squares on a white background;
Figure 4 shows the results of a second experiment where an image in the foreground is isolated from the background image to enable segmentation;
Figure 5 shows the image of the second experiment being defined from a first starting contour;
Figure 6 shows the various stages of the present invention starting with an image having a background region with an occluding area and ending with an image where the occluding area has been segmented;
Figures 7A to 7I illustrate an example of the present invention when used to determine the shape and size of an area of calcification in a blood vessel;
Figures 8A to 8I show the example of Figures 7A to 7I illustrating the progression of the area within the boundary;
Figures 9A to 9I show the example of Figures 7A to 7I illustrating the background image when the area within the boundary is inpainted;
Figures 10A to 10D illustrate an example of the present invention starting with a small misplaced initial contour;
Figures 11A to 11F illustrate an example of the present invention starting with an initially manually placed contour derived from the detection of regions of calcification in an x-ray;
Figure 12 illustrates the present invention when applied to the Greenland flag; and
Figure 13 illustrates the present invention when used to remove an image of a bus from a background image.
To summarise, the present invention introduces a novel methodology within the framework of active contours. This belongs to the region based family of active contours but, instead of considering that the image domain is partitioned into two (or several) regions, the present invention consists in considering the following problem:
If the region Ω containing the object of interest is known, and if enough information is present on the type of background, it may be possible to reconstruct the occluded background image. If a positive answer to this question is assumed, then it can be asked whether a given region Ω contains an object occluding the background in the following way:
If Ω delimits the area occupied by a foreground object, then there should be a sufficiently large difference between the observed image and the reconstructed one within this region. This naturally leads to a variational problem: if Ω is a given region of the image plane D, u0 the observed image, u(Ω) = I(u0, Ω) the reconstructed background, and if this background/foreground difference is called J(Ω), we may look for the true region as an extremum of J(Ω). The goal of this work is to propose such a measure, first in general terms, and then a specific instance based on a relatively simple variational inpainting formulation, which essentially corresponds to TV inpainting as disclosed by Chan and Shen in "Mathematical models for local nontexture inpainting" (SIAM Journal of Applied Mathematics, 62(3):1019-1043, 2002), and a simple measure of discrepancy between two images, based on pixel value differences. From it, a corresponding active contour algorithm is derived.
The aim is to separate the background from the foreground. In the traditional segmentation described above, an area is segmented by differentiating between an area of foreground having different characteristics to the background and the area of background that surrounds the foreground. By contrast, in the present invention, the image is segmented by differentiating the foreground from the background as a whole, as if the foreground did not exist.
It will be understood that the foreground that requires segmentation may be considered to be an area of interest that occludes or is set against the background.
The present invention will be described below in the context of the mathematics used to enable the various functions to be found. However, to summarise, the following steps, as shown in Figure 2, are taken:
1. An assumption is made that an image consists of a background region 2 and an area of interest 4, where the area of interest occludes part of the background region.
2. An initial estimate is made of the location of the boundary 6 of the area of interest located in the background region.
3. The background region is assigned a positive function and the area of interest is assigned a negative function.
4. Information from the assumed background region is used to inpaint the assumed area of interest i.e. the foreground region.
5. The intensity values of the original image, prior to inpainting, are compared with the intensity values of the inpainted image to derive a disocclusion quality measure.
6. The assumed boundary of the area of interest is adjusted.
7. Steps 4 and 5 are repeated such that the assumed area of interest is once again inpainted using information from the background region, and the intensity values of the original image and the inpainted image are compared to derive a disocclusion quality measure.
8. Steps 4 to 7 are repeated until the disocclusion quality measure reaches a maximum, thus fulfilling the convergence criteria described below. At this time, the assumed boundary should correlate with the actual boundary of the area of interest.
9. The image may then be segmented into two separate and distinct images (as shown in Figure 2A) as required.
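The iterative procedure of steps 1 to 9 may be illustrated with the following sketch in Python. It is a deliberately crude, non-limiting illustration: the `inpaint` function here simply fills the assumed area of interest with the mean of the assumed background rather than using the variational inpainting described below, the greedy single-pixel boundary adjustment stands in for the contour evolution, and the function names and the parameter values p = 0.5, q = 1.0 are illustrative assumptions only.

```python
import numpy as np

def inpaint(image, omega):
    """Crude stand-in for the inpainting operator I(u0, Omega):
    fills the assumed area of interest with the mean of the assumed
    background (the invention uses variational inpainting instead)."""
    out = image.astype(float).copy()
    out[omega] = out[~omega].mean()
    return out

def disocclusion_measure(image, omega, p=0.5, q=1.0):
    """Step 5: compare original and inpainted intensities on the
    assumed area of interest, penalised by the size of that area."""
    if omega.sum() == 0:
        return 0.0
    diff = np.abs(image - inpaint(image, omega)) ** q
    return float(diff[omega].sum() / omega.sum() ** p)

def segment(image, omega, n_iter=300, seed=0):
    """Steps 6 to 8 as a greedy stand-in for the contour evolution:
    single-pixel boundary adjustments are kept only when they
    increase the disocclusion quality measure."""
    rng = np.random.default_rng(seed)
    best = disocclusion_measure(image, omega)
    for _ in range(n_iter):
        trial = omega.copy()
        i = rng.integers(0, image.shape[0])
        j = rng.integers(0, image.shape[1])
        trial[i, j] = not trial[i, j]          # adjust the assumed boundary
        score = disocclusion_measure(image, trial)
        if score > best:                       # keep only improvements
            omega, best = trial, score
    return omega, best
```

Because adjustments are only accepted when the measure increases, the returned score is never below the score of the initial boundary estimate, mirroring the convergence behaviour described in step 8.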
In the present invention, the term "disocclusion" is used to describe the removal of an area of interest occluding part of the background region. The disocclusion quality measure gives a numerical indication of how successful the boundary estimation and therefore the inpainting has been.
The present invention is described with reference to "an image". It will be apparent that "taking an image" involves starting with an image previously acquired by some imaging means, for example an x-ray image. The starting point of the present invention is digital information representing an image.
Described below are background disocclusion ideas and corresponding variational formulae that lead to an active contour evolution equation.
Let D be the image domain and ub the background image defined on D.
Let Ω ⊂ D be an unknown region and uf the foreground image with support Ω. One supposes that the observed image u0 is obtained through a combination operation

u0 = C(ub, uf).

The basic problem is to be able to determine, from u0, the region Ω and information on ub and uf. We now assume that if Ω is known, we can confidently estimate ub using an inpainting or disocclusion operation ub = I(u0, Ω). The following two assumptions are made:
1. u0 and ub should differ significantly inside the foreground region Ω;
2. the foreground region should be relatively small.
These two assumptions are used to form a "disocclusion quality measure" that attributes a numerical score to a given region Ω ⊂ D belonging to an admissible set P of regions of D:

J(Ω) = ∫_D L(I(u0, Ω), u0, Ω) dx    (1)

where the "Lagrangian" L : R × R × P → R, (x, y, A) ↦ L(x, y, A), incorporates measures of the difference between x and y as well as a measure of the size of its last argument A.
The true region Ω should be an extremal point of J(Ω), and we therefore search for a necessary condition for the functional J(Ω) to achieve an extremum. The next step is to consider the case of noisy additive signals, u0 = ub + uf + η, η being some noise, as it would be in the case of, for example, X-rays or reflection.
With this additive model, an elementary way to measure the discrepancy between u0 and ub in the region of inpainting consists in summing up a function of pixelwise intensity differences, while a measure of the region can be given by its area, or a function of it. A simple form for the function L incorporating these measures is:

L(x, y, A) = |x − y|^q / |A|^p    (2)

with |A| denoting the area of the set A, and p > 0, q = 1 or 2; we therefore want to maximise J(Ω). Another option is:
In both options, the resulting J(Ω) can be seen as a "generalised moment" of u0 − ub on Ω, a moment that we want to maximise.
As set out in the above equations, J(Ω) is biased by the two following properties:
1. The difference between the intensity values of the original image and the inpainted image, u0 − ub: as the difference in intensity values increases, the resultant value for the disocclusion quality measure also increases; and
2. The area |A| that is inpainted: as the area increases, the disocclusion quality measure decreases.
Therefore, the disocclusion quality measure seeks to optimise itself by finding the maximum difference in intensity between the original image and the inpainted image within the minimum possible area.
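The effect of these two biases can be checked numerically, assuming a Lagrangian of the general form indicated above (intensity difference raised to the power q, divided by the inpainted area raised to the power p; the constants used here are illustrative only):

```python
import numpy as np

def measure(residual, area, p=0.5, q=1.0):
    """Generalised moment of the residual on the inpainted region:
    sum of |u0 - ub|^q over the region, divided by area^p
    (a form consistent with the description of equation (2))."""
    return float((np.abs(residual) ** q).sum() / area ** p)

# Property 1: larger intensity differences, same area -> larger measure.
same_area_small = measure(np.full(9, 1.0), 9)   # 9 / 9**0.5  = 3.0
same_area_large = measure(np.full(9, 2.0), 9)   # 18 / 9**0.5 = 6.0

# Property 2: same total difference spread over a larger area -> smaller
# measure (e.g. growing the boundary into pixels that add no difference).
padded = np.zeros(16)
padded[:9] = 1.0
larger_area = measure(padded, 16)               # 9 / 16**0.5 = 2.25
```

The second case is exactly the situation described later for the giraffe's left leg: enlarging the boundary without gaining any intensity difference lowers the measure, signalling that the move should be reversed.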
The present invention uses the computation from Aubert et al in "Image Segmentation using active contours: Calculus of variations or shape gradients?" (SIAM Journal of Applied Mathematics, 63(6):2128-2154, 2003) of J′(Ω) using shape derivative tools. Let Γ be the boundary of Ω and N the inward normal to Γ; the Gateaux derivative of J(Ω) in the direction of V is, using the chain rule:

⟨J′(Ω), V⟩ = ∫_D Lx(I(u0, Ω), u0, Ω) Is(u0, Ω, V) dx + ∫_D Ls(I(u0, Ω), u0, Ω, V) dx − ∫_Γ L(I(u0, Ω), u0, Ω) (V · N) dσ(x).    (4)
In order to obtain a contour evolution equation, it is necessary to calculate the first term, involving the shape derivative Is(u0, Ω, V) of the inpainting with respect to the inpainting domain, as well as the second term, involving the shape derivative of the Lagrangian L(x, y, A).
The next step is to apply the inpainting operation and compute the shape derivative related terms.
The present invention has been described using a relatively simple inpainting formulation. However, it will be appreciated that the same method will apply to various other inpainting methods, for example Elastica Inpainting (as described in Chan and Shen).
The inpainting formulation used in the present invention, u = I(u0, Ω), is defined as:

u = argmin_v F(v, u0, Ω)

with

F(v, u0, Ω) = (λ/2) ∫_{D\Ω} (v − u0)² dx + (1/r) ∫_D (|∇v|² + ε²)^{r/2} dx    (5)

where 1 ≤ r < 2. For r = 1, ε = 0, this corresponds to the total variation inpainting of Chan and Shen. Assuming nevertheless that ε > 0 when 1 ≤ r < 2, this makes the solution of the minimisation unique (standard arguments show the uniqueness of the solution in the case r = 2, and a formal justification in the settings for r and ε as used in the present invention is presented below) and allows the inpainting operator (u0, Ω) ↦ I(u0, Ω) to be defined.
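A minimal sketch of the minimisation of energy (5) by explicit gradient descent on its Euler-Lagrange equation is given below. The step size, the value ε = 1 and the homogeneous boundary handling are illustrative choices; the implementation described later in this document instead solves the same equation with a Full Multigrid Scheme.

```python
import numpy as np

def tv_inpaint(u0, omega, lam=1.0, r=1.0, eps=1.0, tau=0.2, n_iter=500):
    """Explicit gradient descent on the energy of equation (5).
    omega is True on the region to inpaint; the fidelity term acts
    only on D \\ omega. Illustrative sketch only."""
    u = u0.astype(float).copy()
    fidelity = ~omega
    for _ in range(n_iter):
        # forward differences for the gradient
        gx = np.zeros_like(u)
        gy = np.zeros_like(u)
        gx[:, :-1] = u[:, 1:] - u[:, :-1]
        gy[:-1, :] = u[1:, :] - u[:-1, :]
        # diffusivity (|grad u|^2 + eps^2)^((r-2)/2)
        c = (gx ** 2 + gy ** 2 + eps ** 2) ** ((r - 2.0) / 2.0)
        px, py = c * gx, c * gy
        # backward-difference divergence (adjoint of the forward gradient)
        div = px.copy()
        div[:, 1:] -= px[:, :-1]
        div += py
        div[1:, :] -= py[:-1, :]
        # descend the gradient lam * chi_{D\Omega} (u - u0) - div(c grad u)
        u -= tau * (lam * fidelity * (u - u0) - div)
    return u
```

Applied to a constant image with a dark hole, the scheme propagates the surrounding intensity into the hole, which is the behaviour expected of the inpainting operator I(u0, Ω).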
The choice of a simple inpainting technique relies on the assumption of a high level of regularity for the partially unknown background, and this is an important restriction on the domain of validity of the final algorithm.
The derivation of the inpainting related shape derivative is now performed.
If v is a test function on D, define f(τ) = F(u + τv, u0, Ω). Since u is assumed to be a minimiser for F, the first variation of F in the direction v, f′(0), must vanish, and it provides the optimality condition

0 = ∫_D (λ χ_{D\Ω} (u − u0) v + (|∇u|² + ε²)^{(r−2)/2} ∇u · ∇v) dx    (6)

where χ_{D\Ω} denotes the characteristic function of D\Ω. The inpainting PDE is the Euler-Lagrange equation

Fu(u, u0, Ω) = 0    (9)

extracted from equation (7), which allows u = I(u0, Ω) to be defined implicitly by

Fu(I(u0, Ω), u0, Ω) = 0.
Assuming that Fu is differentiable with respect to its first and last variables, this expression allows us(V) to be computed via the implicit function theorem:

us(V) = −(Fuu(u, u0, Ω))^{−1} Fus(u, u0, Ω, V)    (10)

Fuu, denoting the partial differential of Fu with respect to its first variable, is explicited below as a linear elliptic operator, while Fus denotes the shape derivative of Fu and can be computed as:
Fus(u, u0, Ω, V) = λ (u − u0) (V · N) δ_Γ

δ_Γ being the Dirac distribution along Γ. The first term of the Gateaux derivative (4) can be written:

−λ ∫_D χ_Ω Lx(u, u0, Ω) T((u − u0)(V · N) δ_Γ) dx

where u = I(u0, Ω), T = (Fuu(u, u0, Ω))^{−1} and χ_Ω is the characteristic function of Ω. Using the formal adjoint T* of T, this term can be rewritten as:

−λ ∫_D T*(χ_Ω Lx(u, u0, Ω)) (u − u0)(V · N) δ_Γ dx

and finally as the boundary integral:

−λ ∫_Γ T*(χ_Ω Lx(u, u0, Ω)) (u − u0)(V · N) dσ(x).
An expression can now be provided for the operator T = Fuu^{−1}. Setting g(τ) = ⟨Fu(u + τw, u0, Ω), v⟩ for a test function w on D, the operator Fuu is obtained from ⟨Fuu(u, u0, Ω)(w), v⟩ = g′(0). A straightforward computation using for instance expression (6) gives:

⟨Fuu(u, u0, Ω)(w), v⟩ = ∫_D (λ χ_{D\Ω} w v + A[∇u] ∇w · ∇v) dx    (11)

= ∫_D (λ χ_{D\Ω} w − div(A[∇u] ∇w)) v dx    (12)

the last equality by integration by parts and adequate boundary conditions, and where A[∇u] is the 2×2 symmetric matrix

A[∇u] = (|∇u|² + ε²)^{(r−2)/2} (Id + (r − 2) ∇u ⊗ ∇u / (|∇u|² + ε²))    (13)

which, under the assumptions made on r and ε, is positive and definite. It can also be noted that the quadratic form ⟨Fuu(u, u0, Ω)(v), v⟩ is f″(0), the second variation of F in the direction of v, and is positive, as can be seen from equation (11). This means that the inpainting energy is strictly convex, explaining the uniqueness of the minimiser. Expression (12) provides the expression of Fuu(u, u0, Ω) as the differential operator

Fuu(u, u0, Ω)(w) = λ χ_{D\Ω} w − div(A[∇u] ∇w).

Positive definiteness also implies here that Fuu is formally invertible, and from equation (11) and the symmetry of A[∇u], it is self-adjoint:

⟨Fuu(u, u0, Ω)(w), v⟩ = ⟨w, Fuu(u, u0, Ω)(v)⟩.

The operator T = Fuu^{−1}, which is consequently also self-adjoint, T = T*, is given by v ↦ T(v), where T(v) is defined as the unique solution w of

Fuu(u, u0, Ω)(w) = v.    (14)
The next step is to write the curve evolution equation. To do this, it is necessary to compute the second term in equation (4). The shape derivative in the direction of V of the area |Ω| is given by −∫_Γ (V · N) dσ(x) and, with the Lagrangian L being defined by either (3) or (2), one gets:

(p/|Ω|) J(Ω) ∫_Γ (V · N) dσ(x).

Collecting all these calculations, we find that the Gateaux derivative ⟨J′(Ω), V⟩ is given by ∫_Γ F (V · N) dσ(x), where F is

F = (p/|Ω|) J(Ω) − λ (u − u0) T(χ_Ω Lx(u, u0, Ω)) + L(u, u0, Ω).    (15)
The goal is to maximise the "disocclusion criterion".
This leads to the following evolution equation: ∂Γ/∂t = FN. This curve evolution is rewritten in the level set framework of Osher and Sethian, "Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science" (Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 1999), as φt = F|∇φ|, where φ implicitly represents Γ(t) and Ω(t) via Γ(t) = {φ(x, t) = 0} and Ω(t) = {φ(x, t) < 0}.
A first order in space convex scheme is used to solve this equation:

φ^{n+1} = φ^n + Δt (max(F^n, 0) ∇⁺φ^n + min(F^n, 0) ∇⁻φ^n)

where ∇⁺ and ∇⁻ denote the standard upwind approximations of |∇φ|.
This leads to the following algorithm:
1. Choose an original contour Γ⁰, compute φ⁰ as the signed distance function φ⁰(x) = dist_{Γ⁰}(x) and Ω⁰ = {x, H(−φ⁰(x)) = 1}, where H is the 1-dimensional Heaviside function.
2. For n = 0 to N − 1, or until a convergence condition has been fulfilled:
(a) compute the inpainting u^n = I(u0, Ω^n), the term T(χ_Ω Lx(u^n, u0, Ω^n)) and the disocclusion measure J(Ω^n);
(b) compute F^n from the previous calculations;
(c) compute φ^{n+1} with the above scheme;
(d) extract Γ^{n+1} and Ω^{n+1} from it;
(e) reinitialise φ^{n+1} as a signed distance φ^{n+1}(x) = dist_{Γ^{n+1}}(x).
3. Return Γ^N and Ω^N.
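Step 2(c), the level set update with the first order upwind scheme, may be sketched as follows. The speed F, which in the invention depends on the inpainting and the disocclusion measure, is supplied externally here, and the time step is taken as 1/‖F‖∞ at each iteration; this is an illustrative sketch, not the full implementation.

```python
import numpy as np

def upwind_norms(phi):
    """First order upwind approximations of |grad phi| (Osher-Sethian)."""
    dxm = np.zeros_like(phi); dxp = np.zeros_like(phi)
    dym = np.zeros_like(phi); dyp = np.zeros_like(phi)
    dxm[:, 1:] = phi[:, 1:] - phi[:, :-1]    # backward differences
    dxp[:, :-1] = phi[:, 1:] - phi[:, :-1]   # forward differences
    dym[1:, :] = phi[1:, :] - phi[:-1, :]
    dyp[:-1, :] = phi[1:, :] - phi[:-1, :]
    grad_plus = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2
                        + np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
    grad_minus = np.sqrt(np.minimum(dxm, 0) ** 2 + np.maximum(dxp, 0) ** 2
                         + np.minimum(dym, 0) ** 2 + np.maximum(dyp, 0) ** 2)
    return grad_plus, grad_minus

def evolve(phi, F, n_iter=10):
    """phi^{n+1} = phi^n + dt (max(F,0) grad+ + min(F,0) grad-),
    with dt = 1 / ||F||_inf at each iteration."""
    for _ in range(n_iter):
        dt = 1.0 / max(np.abs(F).max(), 1e-12)
        gp, gm = upwind_norms(phi)
        phi = phi + dt * (np.maximum(F, 0) * gp + np.minimum(F, 0) * gm)
    return phi
```

With a signed distance function φ and a uniformly positive speed F, the region Ω = {φ < 0} shrinks monotonically, which is the expected behaviour of the scheme on this convention.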
The time step Δt is chosen at each iteration as 1/‖F^n‖∞. The discretisation used for the inpainting I(u0, Ω^n) follows closely the one proposed by Chan and Shen in "Mathematical models for local nontexture inpainting" (SIAM Journal of Applied Mathematics, 62(3):1019-1043, 2002). The digital inpainting is computed using a Full Multigrid Scheme, with one V-cycle at each resolution, where each level of the V-cycle implements a Full Approximation Scheme (FAS) (following the description given in "A Multigrid Tutorial" (SIAM)). The grid transfer operations use the ones proposed in "Variational Optical Flow Computation in Real Time" (IEEE Transactions on Image Processing, 14(5):608-615, 2005), except for the downsizing of the digital characteristic function χ_{D\Ω}, where nearest neighbour interpolation is used.
The computation of T(χ_Ω Lx(u^n, u0, Ω^n)) as the solution w of the PDE

Fuu(u^n, u0, Ω^n)(w) = χ_Ω Lx(u^n, u0, Ω^n)    (16)

requires some care. Indeed, apart from the trivial case r = 2 in (13), where A[∇u^n] reduces to the 2×2 identity matrix, the divergence term will normally contain spatial cross-derivatives and thus needs special attention for the discretisation. The linear system resulting from equation (16) is again solved with a Full Multigrid Scheme, each V-cycle implementing a linear multilevel solver.
Finally, the Heaviside function used is implemented with almost none of the regularisation found in most region based algorithms. Too much regularisation would bias the inpainting.
An example of the iterative process used to adapt the contour from the initial estimate to fit the contour of the area to be inpainted is described below with reference to the example shown in Figure 5, where an area of interest 8 is shown as the foreground to a slightly noisy background 10.
The initial estimate for the boundary to eventually define the contour of the area of interest is in the shape of a giraffe. As set out above, the invention relies on inpainting an area within the estimated boundary using information from the remainder of the background. Therefore, in this example, information derived from the area outside of the giraffe is used to inpaint the area inside the giraffe. Following this, the disocclusion quality measure is derived based upon the difference in intensity values between the original image and the inpainted image with reference to the size of the area within the giraffe.
The boundary is deemed to be split into portions. For each portion a decision is made based on information derived from the disocclusion quality measure. For example, we will first consider the left leg 12 of the giraffe. It is clear from the image that this left leg does not cover the area of interest. Therefore, a decision will be made to have a first attempt at moving the portion of boundary at the bottom of the left leg so that the overall area of the giraffe increases. A move of this nature will result in the same difference of intensity values between the original image and the inpainted image as in the previous iteration. As the area within the boundary has increased, the overall disocclusion quality measure will have decreased. Therefore, in the next iteration, it will be apparent that this particular portion of the boundary should be moved to result in a decrease in the overall area of the giraffe.
Conversely, if we consider the left side of the neck 14 of the giraffe, it is clear from the image that this boundary falls within the area of interest. Moving this portion of the boundary to result in an increase in area of the giraffe will result in a greater difference between the intensity values of the original image and the inpainted image. This will, in part, be offset by the increase in area of
the inpainted section. However, the overall result will either be that the disocclusion quality measure will increase, or that it will stay the same. Therefore, in the next iteration, it will be apparent that this portion of the boundary should be moved to result in a further increase in the overall area of the giraffe. This iterative process will continue for this portion of the boundary until a move of this portion of boundary results in no increase in the difference of intensity values between the original image and the inpainted image, i.e. this portion of the boundary now falls outside the area of interest, and therefore the overall result will be a decrease of the disocclusion quality measure.
By this iterative process, the estimated boundary is gradually adapted to fit the contour of the area of interest as shown in Figure 5 (middle image).
Results are given below from experiments performed on synthetic as well as real data. They present an increasing level of complexity in the image content. The method of inpainting used in these experiments is the regularised Total Variation one, i.e. r = 1 in the energy, and the function L is the one given by (2), with q = 1.0.
Experiment 1 As shown in Figure 3, experiment 1 is based on an image of four black squares on a white background. Thus, based on the assumption of the present invention, the two areas to be segmented are the white background, following its reconstruction, and the foreground of four black squares. The parameters are λ = 1, p = 0.25. The first row shows a snapshot of the contour evolution at iterations 1, 7, 15 and 30. The circles in row 1 provide the starting point for finding the contours of the areas of segmentation; from the snapshots it can be seen that as the iterations continue, the contours are iteratively adapted and more clearly define the black squares in the foreground. The second row shows the corresponding domain and the last row shows the result of inpainting within the segmentation.
Experiment 2 As shown in Figure 4, experiment 2 is based on a dark non convex object on a light background with 30% added Gaussian noise. In this experiment, λ = 0.1, p = 0.55. This is illustrated in the right part of Figure 4. The first row shows a snapshot of the contour evolution at iterations 1, 10, 20 and 30. From this it can be seen that, gradually, the mathematical formulae result in the contour clearly defining the foreground shape. The second row shows the corresponding domains and the last one the inpaintings.
Experiment 3 As shown in Figure 5, experiment 3 uses the same image as experiment 2. It illustrates some stability with respect to the initial contour, as well as the role of the area penaliser exponent p in L. Two experiments were performed, the first with a value of p = 0.55, as in the previous case, and the second with a value of p = 0.60. In both cases, the λ weight in the inpainting was set to 0.1. In the first case, a correct segmentation has been achieved, while in the second, the right part of the horizontal ellipse could not be fully recovered. As can be seen from Figure 5, the initial contour is in the shape of a giraffe, but this is gradually adapted to fit the shape of the foreground object.
Experiment 4 Shown in Figure 6, experiment 4 shows a "pseudo calcification" image. This image could represent an image typically found in medical imaging, for example, of an area of calcification. This image is made of a smooth background (a 4th order polynomial in the image coordinates), with an object added (by adding intensity values). While the intensities in the background are in the range [0,1], the original intensity of the added object is constant, equal to 0.065. From the experimentation, it was possible to produce some good quality measures using the function L given by (3); however, it proved difficult to control. So the same function was used as for the other examples, with p = 0.5. In order to prevent the inpainter/denoiser from smoothing too much of the background, a value of λ = 8 was chosen in (5). As illustrated in Figure 6, the first row shows the original object and the pseudo calcification. The two next rows show snapshots of contour evolution at different iterations, for two different initialisations. The second initialisation, while "more creative", is also closer to the true contour and allows for a much faster convergence (260 iterations in the first case, 50 in the second).
A further example of use of the present invention in determining the shape of a calcification is shown in Figures 7A-7I, 8A-8I and 9A-9I. Figures 7A-7I show various iterations of the contour evolution starting with an initial random estimate of the boundary of the area of calcification. As can be seen, as the iterations progress, the boundary is adapted to fit the contour of the area of calcification. This sort of technique would be useful for locating and assessing calcifications found in blood vessels, for example, an aorta. Figures 8A-8I show corresponding images of the areas within the boundary. Figures 9A-9I show the images of the background with the area within the boundary inpainted using information from the remainder of the background. As can be seen in Figure 9I, the area of calcification is no longer visible.
In accordance with the algorithm and the factors that contribute to determining the disocclusion quality measure, it can be seen from Figure 7I that the boundary does not exceed the actual area of calcification. In this respect, the mathematics used to calculate this disocclusion quality measure is biased against increasing the area of the boundary without the increase resulting in an increase in the difference in intensity between the original image and the inpainted image.
Figures 10A to 10D illustrate the robustness of the segmentation algorithm described above. As can be seen from Figure 10B, the initial boundary provided at the beginning of the algorithm is relatively small and misplaced in comparison to the area of interest. However, using the algorithm above, this initial small curve evolves into a meaningful boundary that can subsequently be used for segmentation.
Experiment 5 Figures 11A to 11F illustrate the results of a segmentation initialised by a calcification detection method. In this experiment the initial contours shown in Figures 11B and 11E are applied manually following the detection of areas of calcification using the calcification detection method described by de Bruijne in "Shape Particle Guided Tissue Classification". These contours are used as the initial boundaries in the above defined algorithm, which is subsequently executed to result in the boundary locating the edge of the area of interest, in this case an area of calcification.
Experiment 6 This experiment uses the Greenland flag with added Gaussian noise, with a standard deviation of 50% of the intensity range of the image. The function L is used with p = 0.85 this time. The contour differentiates the circular part of the flag from the remainder of the background. The two areas of segmentation are the circle and the background rectangle, ignoring the circle. The inpainting takes information from the remainder of the surroundings to inpaint the area of segmentation within the circle. Results are shown in Figure 12, where the evolution of the disocclusion measure is also plotted. As can be
seen, the disocclusion quality measure continues increasing until the contour fits the area to be segmented - based on the calculations made following inpainting of the foreground region.
Experiment 7 This last experiment presented shows a different use of the present invention, for example when used in photography, or special effects. A rather complex input image is used, with the third frame from the Ettlinger-Tor sequence, frequently used in optical flow estimation. As can be seen from Figure 13, the image is of a road with a bus. In this case, the road is the background image and the bus is the foreground image. The results, although perhaps of lesser quality than may be obtained with state of the art algorithms, show that the described invention can produce meaningful results, even when the background smoothness assumption is largely violated as in the present case. The parameters used here were λ = 0.5 and p = 2^−7, a very small value. It can be seen that the background is gradually reconstructed to remove the bus from the image. It is apparent that this technique could therefore be used in various applications, for example image restoration or special effects.
Thus it can be seen from these experiments that the contour evolution equation derived is effective when used to segment images having a background with a foreground image present.
It will be appreciated that modifications to or variations of the embodiments described and illustrated may be made within the scope of this application as set out in the appended claims.
Claims
1. A method of segmenting an image having a background region and an area of interest, wherein the area of interest is surrounded by part of the background region, which method comprises: taking a starting set of digital data representative of the image;
(1) estimating a boundary of the area of interest;
(2) inpainting the area of interest within the boundary using information from the remainder of the background region;
(3) computing the difference between the starting set of digital data and the inpainted set of digital data to obtain a result representative of the degree of disocclusion of the background region;
(4) moving the position of the estimated boundary to produce a further generation estimated boundary;
(5) inpainting the area within the further generation estimated boundary; and
(6) computing the difference between the starting set of digital data and the inpainted set of digital data; continuing iterations of steps (4), (5) and (6) until a maximum degree of disocclusion is obtained, using information of the boundary of the area of interest at the time the maximum degree of disocclusion is obtained to segment the image.
2. A method as claimed in claim 1 , wherein the result representative of the degree of disocclusion is additionally determined by the size of the area within the estimated boundary.
3. A method as claimed in claim 2, wherein the result representative of the degree of disocclusion is biased in a positive way in response to an increase in the difference between the starting set of digital data and the inpainted set of digital data and a decrease in the area within the estimated boundary.
4. A method as claimed in claim 3, wherein the step of moving the position of the estimated boundary comprises using information derived from the result representative of the degree of disocclusion to determine how the position of the estimated boundary should be moved.
5. A method as claimed in claim 3 or claim 4, wherein the step of moving the position of the estimated boundary comprises moving portions of the boundary in subsequent iterations in a direction determined by the factor that yields improvement in the result representative of the degree of disocclusion.
6. A method as claimed in claim 5, wherein the step of moving the position of the estimated boundary in subsequent iterations comprises moving a portion of the boundary to increase the area within the boundary in response to an increase in the result representative of the degree of disocclusion resulting from an increase in the previous iteration of the difference between the starting set of digital data and the inpainted set of digital data.
7. A method as claimed in claim 5, wherein the step of moving the position of the estimated boundary in subsequent iterations comprises moving a portion of the boundary to decrease the area within the boundary in response to a decrease in the result representative of the degree of disocclusion resulting from either an increase in the size of the area within the boundary or a decrease in the difference between the starting set of digital data and the inpainted set of digital data.
8. A method as claimed in any one of claims 2 to 7, wherein the result representative of the degree of disocclusion is a generalised moment of the difference between the starting set of digital data and the inpainted set of digital data on the area of interest.
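Claims 2, 3 and 8 together suggest a disocclusion score that rises with the data difference and falls as the area grows. The following is a minimal sketch of one such generalised moment; the function name, the p-th power and the exact area normalisation are illustrative assumptions, not the patent's formulation:

```python
def disocclusion_score(original, inpainted, mask, p=2):
    """Generalised p-th moment of |original - inpainted| over the masked
    area, divided by the area size so that (per claims 2-3) the score is
    biased positively by a larger data difference and a smaller area."""
    diff = sum(abs(o - r) ** p
               for o, r, m in zip(original, inpainted, mask) if m)
    area = sum(1 for m in mask if m)
    return diff / area if area else 0.0
```

With `p=1` this reduces to the mean absolute residual over the area; higher moments weight large residuals (strongly disoccluded pixels) more heavily.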
9. A method of computing the shape and size of a calcification of a blood vessel by processing an image of at least a part of the blood vessel containing said calcification, which method comprises: taking a starting set of digital data representative of an image of at least part of a blood vessel containing an area of calcification, said area of calcification being set against a background area;
(1) estimating a boundary of the area of calcification;
(2) inpainting the area of calcification within the estimated boundary using information from the remainder of the background area;
(3) computing the difference between the starting set of digital data and the inpainted set of digital data to obtain a result representative of the degree of disocclusion of the background region;
(4) moving the position of the estimated boundary to produce a further generation estimated boundary;
(5) inpainting the area within the further generation estimated boundary; and
(6) computing the difference between the starting set of digital data and the inpainted set of digital data;
continuing iterations of steps (4), (5) and (6) until a maximum degree of disocclusion is obtained, using information of the boundary of the calcification at the time of the maximum degree of disocclusion to compute the shape and size of the calcification.
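Steps (1)–(6) can be sketched on a one-dimensional signal, with an interval `[lo, hi)` standing in for the two-dimensional boundary. The nearest-neighbour inpainting, the candidate boundary moves and the stopping rule are all simplifying assumptions for illustration:

```python
def segment(signal, lo, hi, max_iter=50):
    """Iterate steps (4)-(6): try small boundary moves, keep the one that
    raises the disocclusion score, and stop at a (local) maximum."""

    def inpaint(mask):
        # Steps (2)/(5): fill masked samples from the nearest unmasked
        # neighbours on each side (a crude stand-in for real inpainting).
        out = list(signal)
        for i in range(len(signal)):
            if mask[i]:
                left = next((signal[j] for j in range(i - 1, -1, -1)
                             if not mask[j]), None)
                right = next((signal[j] for j in range(i + 1, len(signal))
                              if not mask[j]), None)
                vals = [v for v in (left, right) if v is not None]
                out[i] = sum(vals) / len(vals) if vals else 0.0
        return out

    def score(a, b):
        # Steps (3)/(6): data difference over the area, normalised by the
        # area size so large areas are penalised, as in claims 2-3.
        mask = [a <= i < b for i in range(len(signal))]
        rec = inpaint(mask)
        return sum(abs(s - r) for s, r in zip(signal, rec)) / (b - a)

    best = score(lo, hi)
    for _ in range(max_iter):
        moves = [(lo - 1, hi), (lo + 1, hi), (lo, hi - 1), (lo, hi + 1)]
        cands = [(score(a, b), a, b) for a, b in moves
                 if 0 <= a < b <= len(signal)]
        if not cands:
            break
        s, a, b = max(cands)
        if s <= best:          # no move improves disocclusion: stop
            break
        best, lo, hi = s, a, b
    return lo, hi
```

Starting from either an oversized or an undersized interval, the loop converges on the boundary at which removing the area and reconstructing it from the background leaves the largest per-pixel residual, i.e. the maximum degree of disocclusion.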
10. A method of reconstructing the background of an image having an area of interest in the foreground, comprising: taking a starting set of digital data representative of the image;
(1) estimating a boundary of the area of interest of the foreground;
(2) inpainting the area of interest within the boundary using information from the remainder of the background region;
(3) computing the difference between the starting set of digital data and the inpainted set of digital data to obtain a result representative of the degree of disocclusion of the background region;
(4) moving the position of the estimated boundary to produce a further generation estimated boundary;
(5) inpainting the area within the further generation estimated boundary; and
(6) computing the difference between the starting set of digital data and the inpainted set of digital data;
continuing iterations of steps (4), (5) and (6) until a maximum degree of disocclusion is obtained.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0520566A GB0520566D0 (en) | 2005-10-10 | 2005-10-10 | A method of segmenting an image |
GB0520948A GB0520948D0 (en) | 2005-10-10 | 2005-10-14 | A method of segmenting an image |
PCT/EP2006/009748 WO2007042251A2 (en) | 2005-10-10 | 2006-10-09 | A method of segmenting an image |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1934936A2 true EP1934936A2 (en) | 2008-06-25 |
Family
ID=37943160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06806125A Withdrawn EP1934936A2 (en) | 2005-10-10 | 2006-10-09 | A method of segmenting an image |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP1934936A2 (en) |
WO (1) | WO2007042251A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110827286A (en) * | 2018-08-08 | 2020-02-21 | 菜鸟智能物流控股有限公司 | Geographic region segmentation method and device based on road network and electronic equipment |
Families Citing this family (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010038195A2 (en) * | 2008-09-30 | 2010-04-08 | Lodox Systems (Proprietary) Limited | Method and system for removing butting or stitching artifacts from images |
KR101502362B1 (en) | 2008-10-10 | 2015-03-13 | 삼성전자주식회사 | Image processing apparatus and method |
KR20120014876A (en) * | 2010-08-10 | 2012-02-20 | 삼성전자주식회사 | Image processing apparatus and method |
US8824793B2 (en) | 2012-03-02 | 2014-09-02 | Adobe Systems Incorporated | Methods and apparatus for applying a bokeh effect to images |
US9380222B2 (en) | 2012-12-04 | 2016-06-28 | Symbol Technologies, Llc | Transmission of images for inventory monitoring |
US10352689B2 (en) | 2016-01-28 | 2019-07-16 | Symbol Technologies, Llc | Methods and systems for high precision locationing with depth values |
US11042161B2 (en) | 2016-11-16 | 2021-06-22 | Symbol Technologies, Llc | Navigation control method and apparatus in a mobile automation system |
US10663590B2 (en) | 2017-05-01 | 2020-05-26 | Symbol Technologies, Llc | Device and method for merging lidar data |
US10505057B2 (en) | 2017-05-01 | 2019-12-10 | Symbol Technologies, Llc | Device and method for operating cameras and light sources wherein parasitic reflections from a paired light source are not reflected into the paired camera |
US11093896B2 (en) | 2017-05-01 | 2021-08-17 | Symbol Technologies, Llc | Product status detection system |
US10591918B2 (en) | 2017-05-01 | 2020-03-17 | Symbol Technologies, Llc | Fixed segmented lattice planning for a mobile automation apparatus |
US11367092B2 (en) | 2017-05-01 | 2022-06-21 | Symbol Technologies, Llc | Method and apparatus for extracting and processing price text from an image set |
WO2018204308A1 (en) | 2017-05-01 | 2018-11-08 | Symbol Technologies, Llc | Method and apparatus for object status detection |
US11449059B2 (en) | 2017-05-01 | 2022-09-20 | Symbol Technologies, Llc | Obstacle detection for a mobile automation apparatus |
US10949798B2 (en) | 2017-05-01 | 2021-03-16 | Symbol Technologies, Llc | Multimodal localization and mapping for a mobile automation apparatus |
US10726273B2 (en) | 2017-05-01 | 2020-07-28 | Symbol Technologies, Llc | Method and apparatus for shelf feature and object placement detection from shelf images |
WO2018201423A1 (en) | 2017-05-05 | 2018-11-08 | Symbol Technologies, Llc | Method and apparatus for detecting and interpreting price label text |
US10521914B2 (en) | 2017-09-07 | 2019-12-31 | Symbol Technologies, Llc | Multi-sensor object recognition system and method |
US10572763B2 (en) | 2017-09-07 | 2020-02-25 | Symbol Technologies, Llc | Method and apparatus for support surface edge detection |
US10521961B2 (en) | 2017-12-10 | 2019-12-31 | International Business Machines Corporation | Establishing a region of interest for a graphical user interface for finding and depicting individuals |
US10338768B1 (en) * | 2017-12-10 | 2019-07-02 | International Business Machines Corporation | Graphical user interface for finding and depicting individuals |
US10809078B2 (en) | 2018-04-05 | 2020-10-20 | Symbol Technologies, Llc | Method, system and apparatus for dynamic path generation |
US10823572B2 (en) | 2018-04-05 | 2020-11-03 | Symbol Technologies, Llc | Method, system and apparatus for generating navigational data |
US11327504B2 (en) | 2018-04-05 | 2022-05-10 | Symbol Technologies, Llc | Method, system and apparatus for mobile automation apparatus localization |
US10740911B2 (en) | 2018-04-05 | 2020-08-11 | Symbol Technologies, Llc | Method, system and apparatus for correcting translucency artifacts in data representing a support structure |
US10832436B2 (en) | 2018-04-05 | 2020-11-10 | Symbol Technologies, Llc | Method, system and apparatus for recovering label positions |
US11506483B2 (en) | 2018-10-05 | 2022-11-22 | Zebra Technologies Corporation | Method, system and apparatus for support structure depth determination |
US11010920B2 (en) | 2018-10-05 | 2021-05-18 | Zebra Technologies Corporation | Method, system and apparatus for object detection in point clouds |
US11003188B2 (en) | 2018-11-13 | 2021-05-11 | Zebra Technologies Corporation | Method, system and apparatus for obstacle handling in navigational path generation |
US11090811B2 (en) | 2018-11-13 | 2021-08-17 | Zebra Technologies Corporation | Method and apparatus for labeling of support structures |
US11416000B2 (en) | 2018-12-07 | 2022-08-16 | Zebra Technologies Corporation | Method and apparatus for navigational ray tracing |
US11079240B2 (en) | 2018-12-07 | 2021-08-03 | Zebra Technologies Corporation | Method, system and apparatus for adaptive particle filter localization |
US11100303B2 (en) | 2018-12-10 | 2021-08-24 | Zebra Technologies Corporation | Method, system and apparatus for auxiliary label detection and association |
US11015938B2 (en) | 2018-12-12 | 2021-05-25 | Zebra Technologies Corporation | Method, system and apparatus for navigational assistance |
US10731970B2 (en) | 2018-12-13 | 2020-08-04 | Zebra Technologies Corporation | Method, system and apparatus for support structure detection |
CA3028708A1 (en) | 2018-12-28 | 2020-06-28 | Zih Corp. | Method, system and apparatus for dynamic loop closure in mapping trajectories |
US11960286B2 (en) | 2019-06-03 | 2024-04-16 | Zebra Technologies Corporation | Method, system and apparatus for dynamic task sequencing |
US11662739B2 (en) | 2019-06-03 | 2023-05-30 | Zebra Technologies Corporation | Method, system and apparatus for adaptive ceiling-based localization |
US11402846B2 (en) | 2019-06-03 | 2022-08-02 | Zebra Technologies Corporation | Method, system and apparatus for mitigating data capture light leakage |
US11341663B2 (en) | 2019-06-03 | 2022-05-24 | Zebra Technologies Corporation | Method, system and apparatus for detecting support structure obstructions |
US11151743B2 (en) | 2019-06-03 | 2021-10-19 | Zebra Technologies Corporation | Method, system and apparatus for end of aisle detection |
US11200677B2 (en) | 2019-06-03 | 2021-12-14 | Zebra Technologies Corporation | Method, system and apparatus for shelf edge detection |
US11080566B2 (en) | 2019-06-03 | 2021-08-03 | Zebra Technologies Corporation | Method, system and apparatus for gap detection in support structures with peg regions |
US11507103B2 (en) | 2019-12-04 | 2022-11-22 | Zebra Technologies Corporation | Method, system and apparatus for localization-based historical obstacle handling |
US11107238B2 (en) | 2019-12-13 | 2021-08-31 | Zebra Technologies Corporation | Method, system and apparatus for detecting item facings |
US11822333B2 (en) | 2020-03-30 | 2023-11-21 | Zebra Technologies Corporation | Method, system and apparatus for data capture illumination control |
US11450024B2 (en) | 2020-07-17 | 2022-09-20 | Zebra Technologies Corporation | Mixed depth object detection |
US11593915B2 (en) | 2020-10-21 | 2023-02-28 | Zebra Technologies Corporation | Parallax-tolerant panoramic image generation |
US11392891B2 (en) | 2020-11-03 | 2022-07-19 | Zebra Technologies Corporation | Item placement detection and optimization in material handling systems |
US11847832B2 (en) | 2020-11-11 | 2023-12-19 | Zebra Technologies Corporation | Object classification for autonomous navigation systems |
US11954882B2 (en) | 2021-06-17 | 2024-04-09 | Zebra Technologies Corporation | Feature-based georegistration for mobile computing devices |
CN119477953A (en) * | 2025-01-15 | 2025-02-18 | 苏州大学 | Image segmentation method and system based on multiplicative bias field modeling |
2006
- 2006-10-09 EP EP06806125A patent/EP1934936A2/en not_active Withdrawn
- 2006-10-09 WO PCT/EP2006/009748 patent/WO2007042251A2/en active Application Filing
Non-Patent Citations (1)
Title |
---|
See references of WO2007042251A2 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110827286A (en) * | 2018-08-08 | 2020-02-21 | 菜鸟智能物流控股有限公司 | Geographic region segmentation method and device based on road network and electronic equipment |
CN110827286B (en) * | 2018-08-08 | 2023-05-16 | 菜鸟智能物流控股有限公司 | Geographic region segmentation method and device based on road network and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2007042251A3 (en) | 2007-11-08 |
WO2007042251A2 (en) | 2007-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1934936A2 (en) | A method of segmenting an image | |
Ham et al. | Robust image filtering using joint static and dynamic guidance | |
Huhle et al. | Robust non-local denoising of colored depth data | |
Xu et al. | A segmentation based variational model for accurate optical flow estimation | |
Stuhmer et al. | Tree shape priors with connectivity constraints using convex relaxation on general graphs | |
Lo et al. | Joint trilateral filtering for depth map super-resolution | |
Li et al. | Optical flow estimation using laplacian mesh energy | |
Wang et al. | Effective stereo matching using reliable points based graph cut | |
Musialski et al. | Symmetry-Based Façade Repair. | |
Keren et al. | Denoising color images using regularization and “correlation terms” | |
Ghosh et al. | Robust simultaneous registration and segmentation with sparse error reconstruction | |
Zhang et al. | A hybrid image segmentation approach using watershed transform and FCM | |
Yang et al. | Hierarchical joint bilateral filtering for depth post-processing | |
Lombaert et al. | Fast 4D segmentation of large datasets using graph cuts | |
Slabaugh et al. | A variational approach to the evolution of radial basis functions for image segmentation | |
JP2009512040A (en) | How to segment an image | |
Ding et al. | Optimum inpainting for depth map based on $L_0$ total variation |
Tseng et al. | Maximum-a-posteriori estimation for global spatial coherence recovery based on matting Laplacian | |
Yu et al. | Intensity-guided depth upsampling using edge sparsity and weighted $L_0$ gradient minimization |
CN110751658A (en) | Matting method based on mutual information and point spread function | |
Rivera et al. | Two-level MRF Models for Image Restoration and Segmentation |
Medina et al. | Level set algorithms comparison for multi-slice ct left ventricle segmentation | |
Fukuda et al. | Graph cuts by using local texture features of wavelet coefficient for image segmentation | |
Duggan et al. | A simple boundary reinforcement technique for segmentation without prior | |
Digne et al. | Neighborhood Filters and the Recovery of 3D Information. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| 17P | Request for examination filed | Effective date: 20080328 |
| AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): DE FR GB |
| 17Q | First examination report despatched | Effective date: 20080715 |
| DAX | Request for extension of the European patent (deleted) | |
| RBV | Designated contracting states (corrected) | Designated state(s): DE FR GB |
| STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20081126 |