US20120257265A1 - One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions - Google Patents
- Publication number
- US20120257265A1 (application Ser. No. 13/447,244)
- Authority
- US
- United States
- Prior art keywords
- representation
- image
- visible light
- infrared
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10008—Still image; Photographic image from scanner, fax or copier
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
Abstract
A scanning device includes a scanning mechanism and a processing mechanism. The scanning mechanism scans an image fixed on a medium to generate a digital infrared representation of the image and a digital visible light representation of the image. The processing mechanism substantially reduces effects of noise and distortions within the digital visible light representation of the image in one pass. The processing mechanism at least decorrelates visible light aspects from the infrared representation of the image and employs a one-pass filter that uses both the infrared and the visible light representations of the image.
Description
- The present patent application is a divisional patent application of the currently pending patent application of the same name, filed on Jan. 16, 2007, and assigned application Ser. No. 11/623,697.
- Scanning devices include standalone scanners, as well as “all-in-one” (AIO) devices that include scanning mechanisms. Scanning devices typically optically scan an image fixed on a medium, such as photographic film, to generate a digital representation of the image. The digital representation may then be manipulated by a computing device, such as by being emailed to desired recipients, uploaded to a web site, and so on.
- A drawback to scanning images fixed on media into corresponding digital representations of the images is that dust, scratches, defects, and other distortions on the images as fixed on the media may be reproduced within the corresponding digital representations of the images. Another drawback is that random noise and distortions of various types may be introduced into the corresponding digital representations of the images. Current solutions to removing all such noise and distortions from digital representations of images can be less than ideal.
- FIG. 1 is a diagram depicting a process by which distortions are removed from a digital representation of an image, according to an embodiment of the invention.
- FIGS. 2A, 2B, 2C, and 2D are diagrams depicting exemplary performance of the process of FIG. 1, according to an embodiment of the invention.
- FIG. 3 is a flowchart of a method for decorrelating a visible light representation of an image from an infrared representation of the image, according to an embodiment of the invention.
- FIG. 4 is a diagram of a representative linear model that may be employed in the method of FIG. 3, according to an embodiment of the invention.
- FIG. 5 is a flowchart of a method for at least substantially reducing or removing distortions from a digital representation of an image by using a credibility-weighted bilateral filter, according to an embodiment of the invention.
- FIG. 6 is a diagram of a representative three-by-three neighborhood of pixels, which may be employed in relation to the method of FIG. 5, according to an embodiment of the invention.
- FIG. 7 is a flowchart of a method for determining a credibility value of a pixel of an image, which can be employed in relation to the method of FIG. 5, according to an embodiment of the invention.
- FIG. 8 is a diagram of a standard Gaussian distribution in relation to which the credibility value of a pixel of an image can be determined in the method of FIG. 7, according to an embodiment of the invention.
- FIG. 9 is a block diagram of a rudimentary scanning device, according to an embodiment of the invention.
- FIG. 1 illustratively depicts a process 100 by which distortions can be removed from a digital visible light representation 110 of an image 104, according to an embodiment of the invention. Such distortions can include dust, scratches, artifacts, defects, noise bursts, and other distortions that may be present. For instance, the terminology "noise burst" refers to a pixel or a group of pixels within an image that has a noise signal which is particularly strong relative to the overall noise level of the image itself, and greater than the local contrast of the image.
- The image 104 may be originally fixed on an imaging medium 102, such as film, like a film negative or a slide, or another type of imaging medium. The image 104 as fixed on the medium 102 may be optically scanned, as indicated by arrows 106 and 108, to result in a digital visible light representation 110 and a digital infrared representation 112, respectively, of the image 104.
- The digital visible light representation 110 of the image 104 is a digital representation in that it is data electronically or digitally representing the image 104. The representation 110 is a visible light representation in that it at least substantially represents how the image 104 is perceived under visible white light, such as by the human visual system (HVS). Such visible white light can include all the wavelengths of light within the visible light spectrum. Thus, for instance, when the digital visible light representation 110 of the image 104 is displayed on a display device, the HVS perceives the representation 110 of the image 104 as at least substantially identical to the image 104 as fixed on the imaging medium 102.
- The image 104 as fixed on the medium 102 may be a color image, a grayscale image, or a black-and-white image. The digital visible light representation 110 of the image 104 may likewise represent the image 104 in color, grayscale, or black-and-white, but not necessarily in correspondence with how the image 104 is fixed on the medium 102. For example, the image 104 may be fixed as a color image on the medium 102, and the digital visible light representation 110 thereof may likewise be in color. Alternatively, however, the representation 110 of the image 104 may be in grayscale or in black-and-white, even if the image 104 is itself fixed on the medium 102 in color, for instance.
- The digital infrared representation 112 of the image 104 is likewise a digital representation in that it is data electronically or digitally representing the image 104. The representation 112 is an infrared representation in that it at least substantially represents how the image 104 is perceived under infrared light. The infrared representation 112 of the image 104 has been found to generally include primarily just the scratches, dust, and other defects within the visible light representation 110 of the image 104, as may be found on the image 104 as fixed on the medium 102, or which may result from the scanning process represented by the arrows 106 and 108. However, due to the scanning process indicated by the arrow 108, the digital infrared representation 112 of the image 104 can also commonly include a small portion of the visible light aspects of the image 104, commonly known as "crosstalk."
- FIGS. 2A and 2B show representative examples of the digital visible light representation 110 and the digital infrared representation 112 of the image 104, respectively, according to an embodiment of the invention. The digital visible light representation 110 of the image 104 in FIG. 2A is in grayscale for exemplary purposes only, but may alternatively be in color or in black-and-white. The digital visible light representation 110 of the image 104 includes a large visible scratch, extending over the two scoops of vanilla ice cream in the lower quarter of the image 104.
- The digital infrared representation 112 of the image 104 in FIG. 2B substantially includes just the distortions found on the image 104. For instance, the large visible scratch apparent within the digital visible light representation 110 of the image 104 in FIG. 2A is much more discernable within the digital infrared representation 112 of the image 104 in FIG. 2B. Other distortions found on the image 104, which are less apparent within the digital visible light representation 110, may in some embodiments also be much more discernable within the digital infrared representation 112.
- However, some crosstalk of the visible light aspects of the image 104 is also apparent within the digital infrared representation 112 of the image 104 in FIG. 2B. That is, a faint "ghosting" of the digital visible light representation 110 of the image 104 in FIG. 2A is apparent within the digital infrared representation 112 of the image 104 in FIG. 2B. Such visible light aspects are desirably not present within the digital infrared representation 112. However, as a result of the scanning process, such crosstalk between the digital visible light representation 110 and the digital infrared representation 112 commonly occurs to at least some extent, as is evident in FIG. 2B.
- Referring back to FIG. 1, as indicated by arrow 114, the visible light aspects are decorrelated from the infrared representation 112 of the image 104, to result in what is referred to in FIG. 1 as the digital decorrelated infrared representation 116 of the image 104. The decorrelation process is described in more detail in a later section of the detailed description. The result of the decorrelation process, which employs the digital visible light representation 110 of the image 104, as indicated by dotted arrow 118, is that the visible light aspects, or crosstalk, are at least substantially removed from the infrared representation 112 within the decorrelated infrared representation 116 of the image 104.
- FIG. 2C shows a representative example of the digital decorrelated infrared representation 116 of the image 104, according to an embodiment of the invention. As compared to the digital infrared representation 112 of the image 104 in FIG. 2B, the decorrelated infrared representation 116 of the image 104 has the visible light aspects, or crosstalk from the digital visible light representation 110 of the image 104 in FIG. 2A, removed. That is, it can be said that the infrared representation 112 in FIG. 2B has been decorrelated from the visible light representation 110 in FIG. 2A to result in the decorrelated infrared representation 116 of the image 104 in FIG. 2C.
- Referring back to FIG. 1, as indicated by arrow 120, the digital visible light representation 110 of the image 104 is subjected to a credibility-weighted bilateral filter to result in what is referred to in FIG. 1 as the digital filtered visible light representation 122 of the image 104. The credibility-weighted bilateral filtering process is described in more detail in a later section of the detailed description. The filtering represented by the arrow 120 at least substantially reduces or removes the distortions within the visible light representation 110 that are identified within the decorrelated infrared representation 116, and thus employs the decorrelated infrared representation 116, as indicated by dotted arrow 124, in generating the filtered visible light representation 122.
- The filtering represented by the arrow 120 further at least substantially reduces or removes random noise from the digital visible light representation 110 of the image 104 in generating the digital filtered visible light representation 122 of the image 104. Such noise is commonly inherent in the scanning process represented by the arrow 106 to capture the digital visible light representation 110 of the image 104 fixed on the medium 102. Additionally, noise may further be present in the image 104 itself as fixed on the medium 102, originating, for instance, from halftoning or the texture of the media.
- FIG. 2D shows a representative example of the digital filtered visible light representation 122 of the image 104, according to an embodiment of the invention. As compared to the digital visible light representation 110 of the image 104 in FIG. 2A, the filtered visible light representation 122 has substantially reduced or removed distortions therewithin. For example, the scratch across the two scoops of vanilla ice cream in the lower quarter of the image 104 within the visible light representation 110 in FIG. 2A has been removed within the filtered visible light representation 122 of the image 104 in FIG. 2D, without affecting the image 104 itself.
- FIG. 3 shows a method 300 for decorrelating the digital visible light representation 110 of the image 104 from the digital infrared representation 112 of the image 104 to result in the digital decorrelated infrared representation 116 of the image 104, according to an embodiment of the invention. The method 300 may be implemented as a computer program stored on a computer-readable medium. For example, the computer-readable medium may be part of a scanning device, such that the computer program stored thereon is executed within the scanning device.
- It is noted that all of the representations 110, 112, 116, and 122 of the image 104 may be considered as having a number of pixels, where for each pixel a given representation has one or more values. The pixels may be organized in a rectangular grid, as can be appreciated by those of ordinary skill within the art, or in another manner. The infrared representations 112 and 116 are such that each pixel has a single infrared value, corresponding to the infrared light detected for the image 104 as fixed on the medium 102 during the scanning process indicated by the arrow 108.
- By comparison, the visible light representations 110 and 122 may be such that each pixel has a number of color channel values, corresponding to the visible light detected for the image 104 as fixed on the medium 102 during the scanning process indicated by the arrow 106. That is, each such pixel has a red value, a green value, and a blue value, which when combined result in the pixel's actual color, as can be appreciated by those of ordinary skill within the art. Alternatively, the visible light representations 110 and 122 may be grayscale or black-and-white representations, in which each pixel has a single value corresponding to the visible light detected for the image 104 as fixed on the medium 102 during the scanning process indicated by the arrow 106.
- The digital infrared representation 112 of the image 104 is converted to a logarithmic domain (302), where the infrared representation 112 is not already in the logarithmic domain. That is, the infrared representation 112 may initially already be in a logarithmic domain, such as a log10 domain. Where it is not, the infrared representation 112 is converted to a logarithmic domain in part 302, by employing an inverse gamma-correction function, or another type of appropriate transformation, as can be appreciated by those of ordinary skill within the art. The logarithmic domain is also referred to as a density domain. Where the infrared representation 112 is not already in a logarithmic domain, it is converted to the logarithmic domain on a per-pixel basis.
- Similarly, the digital visible light representation 110 of the image 104 is converted to a logarithmic domain, as to a particular color channel thereof (304). The visible light representation 110 may initially be in the linear domain, and may be converted to the logarithmic domain, such as a log10 domain, by employing an inverse gamma-correction function, or another type of appropriate transformation. The visible light representation is converted to the logarithmic domain on a per-pixel basis.
- In one embodiment, just one color channel of the digital visible light representation 110 of the image 104 is converted to the logarithmic domain. For instance, just the red values, the green values, or the blue values of the pixels of the visible light representation 110 are converted to the logarithmic domain. In one embodiment, the red color channel of the digital visible light representation 110 is in particular converted.
- One or more parameters of a model mapping the color channel of the digital visible light representation 110 of the image 104, as converted to the logarithmic domain, to the digital infrared representation 112 of the image 104, as also converted to the logarithmic domain, are then determined (306), or estimated. For example, in one embodiment, a linear model of the form y=a*x+b can be employed, although more sophisticated non-linear models may also be employed. In this linear model, y corresponds to infrared values of the pixels of the infrared representation 112, and x corresponds to red, green, or blue color channel values of the pixels of the visible light representation 110. The parameter a corresponds to the slope of the line of the model, and the parameter b corresponds to the y-intercept of the line of the model.
- FIG. 4 shows a graph 400 illustratively depicting a representative linear model for which the parameters thereof can be determined in part 306 of the method 300 of FIG. 3, according to an embodiment of the invention. The x-axis 402 denotes red, green, or blue color channel values of the pixels of the digital visible light representation 110, and the y-axis 404 denotes infrared values of the pixels of the digital infrared representation 112. The parameter a corresponds to the slope of the line 406 of the model, and the parameter b corresponds to the value of the line 406 of the model at which the line 406 crosses the y-axis 404.
- The parameters of a linear model in particular that maps a color channel of the digital visible light representation 110 to the digital infrared representation 112 can be determined by employing any type of line-fitting methodology. For example, for each pixel of the image 104, the pair of values (x, y) is mapped onto the graph 400. The value x for a pixel of the image 104 is one of the color channel values of the pixel within the visible light representation 110 of the image 104, such as the pixel's red color channel value. The value y for a pixel of the image 104 is the pixel's infrared value within the infrared representation 112 of the image 104. Once the pairs of values have been mapped onto the graph 400 for all the pixels of the image 104, a line that best describes the relationship between the values (x, y) for all the pixels is determined. Determining this line in turn determines the slope and y-intercept parameters of the linear model.
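- As a concrete illustration of parts 302 through 306, the following Python sketch fits the linear crosstalk model with an ordinary least-squares line fit, one instance of the "any type of line-fitting methodology" mentioned above. The function name, the inverse gamma value, and the clipping epsilon are illustrative assumptions, not details from the patent.

```python
import numpy as np

def fit_crosstalk_model(red_channel, infrared, gamma=2.2):
    """Estimate parameters (a, b) of the linear model y = a*x + b, where
    x is the log-domain red channel and y is the log-domain infrared
    channel (parts 302-306)."""
    # Undo gamma encoding, then convert to a log10 (density) domain,
    # per pixel; the epsilon avoids log10(0).
    x = np.log10(np.clip(red_channel, 1e-6, None) ** gamma)
    y = np.log10(np.clip(infrared, 1e-6, None) ** gamma)
    # Least-squares line fit over all (x, y) pixel pairs.
    a, b = np.polyfit(x.ravel(), y.ravel(), deg=1)
    return a, b
```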
- Referring back to FIG. 3, once the model parameters have been determined, the digital visible light representation 110 of the image 104 is decorrelated from the digital infrared representation 112 of the image 104 based on the model (308). In particular, the following is performed for each pixel of the infrared representation 112 of the image 104 (310). First, the value corresponding to the crosstalk between this pixel and the corresponding pixel of the visible light representation of the image 104 is determined (312). That is, the red color channel value of the pixel, for instance, is evaluated as the x value within the model y=a*x+b, to generate the crosstalk value of the pixel, as the y value. This crosstalk value can particularly correspond to the absorption of infrared light by the cyan colorant within the image 104 as fixed on the medium 102, for instance.
- Thereafter, this crosstalk value of the pixel in question is subtracted from the original value of the pixel within the digital infrared representation 112 of the image 104 (314). That is, the infrared value of the pixel of the image 104 within the digital infrared representation 112 has subtracted therefrom the crosstalk value of the pixel determined in part 312 using the model. Therefore, the contribution to the infrared value by absorption of infrared light by the cyan colorant within the image 104 as fixed on the medium 102 is effectively removed from the digital infrared representation 112 as to this pixel.
- In one embodiment, in part 314 of the method 300 of FIG. 3, a constant may further be added to the resulting infrared value of each pixel, so that the resulting infrared values of all the pixels of the image 104 are non-negative. For example, once parts 312 and 314 have been performed for each pixel of the image 104, the lowest resulting infrared value −y for any pixel may be located. As such, a constant B equal to y or (y+1) may be added to the resulting infrared value of each pixel of the image 104 so that none of the infrared values is negative. Adding the constant B to the resulting infrared value of each pixel can be referred to as adding a constant background level to the digital infrared representation 112.
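- A minimal sketch of parts 308 through 314, assuming both channels are already in the log domain and the model parameters have been estimated as above; the particular choice of background constant is one of the options the text allows.

```python
import numpy as np

def decorrelate_infrared(ir_log, red_log, a, b):
    """Subtract the predicted crosstalk a*x + b from each infrared pixel
    (parts 312-314), then add a constant background level B so that no
    resulting infrared value is negative."""
    residual = ir_log - (a * red_log + b)        # crosstalk removed per pixel
    B = max(0.0, -float(residual.min())) + 1.0   # constant background level
    return residual + B, B
```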
- Thereafter, the digital infrared representation 112 of the image 104, from which the digital visible light representation 110 of the image 104 has been decorrelated to result in the digital decorrelated infrared representation 116 of the image 104, is converted back from the logarithmic domain (316), where the infrared representation 112 was not initially in the logarithmic domain. For instance, where the infrared representation 112 was not initially in the logarithmic domain, the decorrelated infrared representation 116 is converted from the logarithmic domain back to its original, linear domain. This conversion may be achieved by performing the inverse of the transform applied to the infrared representation 112 in part 302 of the method 300.
- The digital decorrelated infrared representation 116 of the image 104 is then output (318). Such outputting can encompass any of a number of different acts or steps, and is not limited by embodiments of the invention. For example, the decorrelated infrared representation 116 may be stored on a computer-readable storage medium, such as a volatile or a non-volatile storage of a scanning device or a computing device. As another example, the decorrelated infrared representation 116 may be transmitted or sent from one device to another device. As a third example, the decorrelated infrared representation 116 may even be printed by a printing device on a printing medium, such as paper or another type of printing media.
- FIG. 5 shows a method 500 for at least substantially reducing or removing both noise and distortions from the digital visible light representation 110 of the image 104, according to an embodiment of the invention. The term "noise" here collectively refers to random (un-biased) deviations of the digital visible light representation 110 from the actual image 104, while the term "distortions" refers to various types of biased deviations from the actual image 104. This process may be used to denoise the image like conventional denoising filters and, in addition, remove dust, scratches, defects, artifacts, noise bursts, and/or other distortions that may not be removed by conventional denoising filters. Such distortions may be in the visible light representation 110 due to their presence within the image 104 as fixed on the imaging medium 102, and/or may be generated during the scanning process indicated by the arrow 106. The method 500 may be implemented as a computer program stored on a computer-readable medium. For example, the computer-readable medium may be part of a scanning device, such that the computer program stored thereon is executed within the scanning device. It is noted that prior art approaches for reducing the effects of both noise and distortions within a visible light representation of an image typically have to perform separate passes over the visible light representation, dealing separately with noise and with distortions. At least some embodiments of the invention are unique in that they can perform just a single pass over the visible light representation to reduce the effects of both noise and distortions, thereby reducing the overall computational load.
- The following is performed for each pixel of the digital visible light representation 110 of the image 104 (502). First, a credibility value is determined for the pixel (504). The credibility value can in one embodiment correspond to the likelihood that the pixel's value does not correspond to distortions within the visible light representation 110. A pixel that does not correspond to a distortion within the visible light representation has a credibility value of 1, even if it includes noise. The terminology "a pixel's value" encompasses the pixel having more than one value. For instance, where the visible light representation 110 is in color, a pixel may have a red value, a green value, and a blue value, as has been described in the previous section of the detailed description. Generally, the credibility value for a pixel may be determined based on the value of the pixel within the visible light representation 110, the value of the pixel within the infrared representation 112, or on a combination thereof. The credibility value for a pixel may further be referred to as the confidence value of the pixel.
- A particular manner by which the credibility value for a pixel can be determined, using the decorrelated infrared representation 116 of the image 104, is described in the next section of the detailed description. However, in general, the credibility value for a pixel can be denoted as C, where C has a value between 0 and 1. A credibility value of 0 means that the pixel's value does not contain any credible information. That is, the likelihood of the pixel's value being representative of a part of the true image 104 is identical to a random measurement. Thus, a credibility value of 0 means that the pixel's value is assumed not to be representative at all of a part of the image 104 itself, but rather completely corresponds to a distortion within the visible light representation 110 of the image 104.
- By comparison, a credibility value of 1 means that it is certain that the value of the pixel in question does not correspond to a distortion within the visible light representation 110 of the image 104. However, this value may nevertheless still include the effect of regular random noise, such as sensor noise, and so on. A credibility value between 0 and 1 therefore means that there is a corresponding likelihood that the pixel's value is representative of a part of the image 104 itself, such that there is a likelihood of (1−C) that the pixel's value corresponds to a distortion within the visible light representation 110 of the image 104.
- Next, a new value for the pixel is determined by using a credibility-weighted bilateral filter (506). That is, in lieu of a classical bilateral filter, which is known within the art, a bilateral filter is employed that uses the credibility values of the pixels of the digital visible light representation 110 of the image 104 to generate a new value for the pixel. Such a credibility-weighted bilateral filter may use a zero-order bilateral filter; an n-order bilateral filter, where n is greater than zero; or a combined-order bilateral filter, which combines several n-order bilateral filters of different orders. Examples of such credibility-weighted bilateral filters are described later in the detailed description.
- First, however, operation of a classical zero-order bilateral filter is described. The classical zero-order bilateral filter operates on a local neighborhood of k×k pixels, centered at a central pixel for which a new value is to be determined. As such, k is odd where the neighborhood is a square grid of pixels centered at the central pixel. Each pixel within the neighborhood other than the central pixel is referred to as a neighboring, or neighborhood, pixel to the central pixel. FIG. 6 shows an example three-by-three pixel neighborhood 600, according to an embodiment of the invention. A central pixel 602 is located at the center of the neighborhood 600. The other pixels of the neighborhood 600 are the neighboring pixels to the central pixel 602.
- The classical bilateral filter can be mathematically described as:

$$u_0 = \frac{\sum_n L_n \, g\!\left(\frac{f_n - f_0}{\beta}\right) f_n}{\sum_n L_n \, g\!\left(\frac{f_n - f_0}{\beta}\right)}. \tag{1}$$

- In equation (1), the index 0 refers to the central pixel that is to be filtered to have a new value thereof determined, and the index n refers to the n-th neighbor of this pixel. Furthermore, g(·)∈[0,1] is a photometric function, β is a scaling parameter related to edge strength, f is the value of a pixel, and Ln is the spatial weight of the neighboring pixel n relative to the central pixel. The sum over n includes n=0. The value u0 is the new value for the central pixel. The photometric function determines the relative importance of neighboring pixels due to their photometric difference from the central pixel, as known within the art. The value of the photometric function is between 0 and 1: the larger the absolute photometric distance, the smaller the value of the photometric function. The scaling parameter β determines the sensitivity of the function to photometric differences, and it may vary from pixel to pixel.
-
- The bilateral filter may be written equivalently as:
-
- In equation (3), δn=Pn−f0 is the change to the central pixel value f0 proposed by predictor Pn. This value δn is referred to as the prediction difference for predictor Pn. The classical zero-order prediction difference is δn=fn−f0. The credibility-weighted bilateral filter employed by embodiments of the invention, like the classical bilateral filter, determines a weighted average of pixels in the defined neighborhood. In both the credibility-weighted and the classical bilateral filters, the contribution from each pixel of the neighborhood to the new value of the central pixel depends on the spatial distance of the pixel from the central pixel, and the prediction difference relative to the central pixel. With respect to the former, a Gaussian function may be used to reduce the spatial weight of pixels that are farther away from the central pixel. With respect to the latter, the contribution of each prediction difference is weighted by the photometric function, such that if that difference is low, the weight is high, and vice-versa.
- Unlike the classical bilateral filter, however, the credibility-weighted bilateral filter further determines the contribution from each neighborhood predictor to the new value of the central pixel depending on the credibility values of the pixels in the neighborhood. For a predictor, Pn, a credibility value Cn is defined. The contribution from each predictor depends on the credibility value of the predictor, as well as the credibility of the central pixel itself. Two mechanisms use the pixel credibility values. First, the sensitivity of the photometric function, g in equation (3), is made to depend on the credibility of the central pixel. That is, the credibility value of the central pixel influences the sensitivity of the photometric function employed to all prediction differences.
- Second, the weight of each predictor difference is made proportional to the credibility value corresponding to the predictor.
- The primary effect of the first mechanism is that if the credibility of the central pixel is low, the weight of each of its neighboring pixels increases. For example, in the classical bilateral filter a neighboring pixel with a large photometric difference from the central pixel will receive a much lower weight than the central pixel. However, by comparison in an embodiment of the invention, if the central pixel has low credibility the same neighboring pixel may receive a significantly higher weight both absolutely, and relative to the central pixel.
- The credibility value of a predictor is a function of the credibility values of its constituent neighboring pixels. A zero-order predictor, for example, is made up of a single neighborhood pixel and its credibility is equal to the credibility value of that neighborhood pixel. A first-order predictor, Pn, is a linear combination of two neighborhood pixels, n1 and n2. For instance, the pixels might comprise opposing pairs relative to the central pixel, and the predictor function might be the average of the two pixel values. Its credibility value is a function of the two pixel credibility values. A function for combining two pixel-credibility values desirably has a range of [0, 1] and desirably is not larger than any of the individual credibility values. Suitable functions include, for instance, so-called “fuzzy AND” functions known within the art. More specifically, for example, such functions can include:
-
C n1,n2=min(C n1 ,C n2); (4) -
Cn1,n2=Cn1gCn2; and, (5) -
C n1,n2 =└C n1 +C n2−1┘0. (6) - In equation (6), the notation └g┘0 indicates clipping of negative values to zero. The functions of equations (4)-(6) are ordered by how severely they penalize the pair-credibility based on individual pixel credibility values.
- A credibility-weighted bilateral filter can be mathematically expressed as follows:
-
- The difference between the bilateral filter of equation (1) and the credibility-weighted bilateral filter of equation (7) is the presence of the credibility values C0, which is the credibility value of the central pixel, and Cn, which is the credibility value of the predictor Pn. The credibility-weighted zero-order bilateral filter can be equivalently written as:
-
- For instance, the behavior of the zero-order credibility-weighted bilateral filter ranges from two extreme cases. First, if the central pixel has zero credibility, such that C0=0, then its value is replaced by a weighted average of the values of its neighboring pixels, regardless of their contrast similarity to the central pixel, but whether their relative weights are equal to their credibility values. As such, the central pixel's value is replaced by an average of its neighboring pixels to the extent that the neighboring pixels are valid. In this case, the described process performs the functionality of in-filling (i.e., filling in missing data), rather than denoising.
- Second, if the central pixel has full credibility, such that C0=1, then its value is replaced by a modified bilateral weighted average of its value and the values of its neighboring pixels. The bilateral weight of each neighboring pixel depends both on its contrast similarity to the central pixel, and on its credibility. As such, the central pixel's value is adjusted according to a bilateral weighted average of it and its neighboring pixels, where the neighboring pixels contribute to this average to the extent that they are valid. Unlike the classical bilateral filter, low-credibility pixels are at least substantially excluded from consideration and hence do not distort the result.
- It has been observed that a zero-order bilateral filter can be less than ideal for higher-resolution images, which are images having a large number of pixels. In such images, edges between objects within the images can span a number of pixels, and appear as gradual changes. A first order filter may be used that employs pairs of pixels for prediction. A predictor difference may be, for instance,
-
- The first-order bilateral filter, as has been described, is better suited to higher-resolution images than the zero-order bilateral filter. However, it has been found that the first-order bilateral filter does not remove some types of distortions from images as well as the zero-order bilateral filter does. This is because often there is no pair of opposing pixels with high credibility in the neighborhood of the defect. Therefore, a desirable trade-off for at least substantially removing or reducing both distortions and noise can be obtained by weighting the response of both filters. When both pixels in a pair have high credibility, it is desirable to employ the first-order filter for better signal preservation. If, however, just one of the pixels in the pair has high credibility, it is desirable to employ the zero-order term so as not to lose the content of that pixel. The weighting function itself, therefore, is based on the similarity between the credibility values of the pair of pixels in question.
- A credibility-weighted combined zero-order and first-order bilateral filter can be expressed mathematically as:
-
u 0 =f 0 +αg[Credibility weighted first order bilateral filter]+(1−α)g[Credibility weighted zero order bilateral filter]. - In equation (9), the bracketed credibility-weighted first-order bilateral filter is the second term to the right of the equals sign in equation (9) (i.e., the right-hand-most term). The bracketed credibility-weighted zero-order bilateral filter is the third term to the right of the equals sign in equation (9) (i.e., the right-hand-most term). The weight given to the credibility weighted first-order bilateral filter may be in one embodiment
-
$$\alpha = 1 - \lvert C_{n_1} - C_{n_2} \rvert.$$

- This weighting factor α ensures that the first-order filter is used when the pair of pixels in question have similar credibility, but that the zero-order filter is used when the difference between the pixels' credibility values is significant. The combined-filters approach described above may be employed in general with credibility-weighted bilateral filters of even higher order.
- Referring back to FIG. 5, the performance of parts 502, 504, and 506 with respect to each pixel of the digital visible light representation 110 of the image 104 results in the digital filtered visible light representation 122 of the image 104, as indicated by the arrow 120. The filtered visible light representation 122 differs from the digital visible light representation 110 in that the former results upon application of a credibility-weighted bilateral filter to the latter, as has been described in detail. The filtered visible light representation 122 of the image 104 is thus that from which the effects of dust, scratches, noise, artifacts, defects, and/or other distortions have been substantially reduced or removed via the application of a credibility-weighted bilateral filter.
- The digital filtered visible light representation 122 of the image 104 is finally output (508). Such outputting can encompass any of a number of different acts or steps, and is not limited by embodiments of the invention. For example, the filtered visible light representation 122 may be stored on a computer-readable storage medium, such as a volatile or a non-volatile storage of a scanning device or a computing device. As another example, the filtered visible light representation 122 may be transmitted or sent from one device to another device. As a third example, the filtered visible light representation 122 may be printed by a printing device on a printing medium, such as paper or another type of printing media.
- FIG. 7 shows a method 700 for determining a credibility value of a pixel of the image 104, according to an embodiment of the invention. The method 700 may be employed in part 504 of the method 500 of FIG. 5 that has been described. The method 700 may be implemented as a computer program stored on a computer-readable medium. For example, the computer-readable medium may be part of a scanning device, such that the computer program stored thereon is executed within the scanning device. A credibility value, as has been described, can in one embodiment correspond to the likelihood that the pixel's value is valid in relation to the image 104 itself, or that the pixel's value does not correspond to a distortion within the digital visible light representation 110 of the image 104.
- To determine the credibility values of the pixels within the visible light representation 110 of the image 104, a distortion model is employed. In one embodiment, the distortion model used to determine the credibility values from the decorrelated infrared representation 116 of the image 104 assumes that a given pixel is a distortion if it is dark and has a low probability of originating from the noise model of the decorrelated infrared representation 116. This noise model can be a random Gaussian noise model, for instance. The following description describes how the parameters of this noise model are determined from statistics of the decorrelated infrared representation 116, as well as a method for using these parameters to determine pixel credibility values.
- First, a noise variance within the digital decorrelated infrared representation 116 of the image 104 is determined (702). The decorrelated infrared representation 116 can be assumed to follow a standard Gaussian distribution in one embodiment. The noise variance may be mathematically expressed as σ², and can be determined from the decorrelated infrared representation 116 using any type of technique or algorithm used for determining noise variance within a data set or, more generally, a signal. In one embodiment, the noise variance may be determined as simply the sample variance within the decorrelated infrared representation 116 itself, or:

$$\sigma^2 = \frac{1}{N} \sum_n (i_n - \mu)^2. \tag{10}$$

- In equation (10), i_n is the value of pixel n within the decorrelated infrared representation 116 of the image 104, and μ is the mean over all N pixels of the decorrelated infrared representation 116. In this situation, after decorrelation, μ is set equal to B, the infrared background, as has been described.
- Next, for the pixel of the image 104 in relation to which the method 700 is being performed, the distance of the value of the pixel within the decorrelated infrared representation 116 of the image 104 from a baseline value is determined (704). The baseline value can in one embodiment be the background value B that has been described in relation to part 314 of the method 300 of FIG. 3. The distance of this infrared value of the pixel in question can thus be expressed as:

$$d = i_n - B. \tag{11}$$

- In equation (11), d is the distance of the pixel in question, i_n is its infrared value within the decorrelated infrared representation 116, and B is the baseline value of the infrared background.
- The distance is further normalized as a multiple of the standard deviation within the noise variance (706). That is,

$$d' = \frac{d}{s}, \tag{12}$$

- where d′ is the normalized distance, and s is the standard deviation within the noise variance. The standard deviation within the noise variance can be determined conventionally, via:

$$s = \sqrt{\frac{1}{N} \sum_{k=1}^{N} (i_k - \mu)^2}. \tag{13}$$

- In equation (13), N is the total number of pixels within the image 104, i_k is the infrared value of pixel k within the decorrelated infrared representation 116, and μ is the infrared background, B.
- The credibility value for the pixel in relation to which the method 700 of FIG. 7 is being performed is determined based on this determined distance as has been normalized (708). In one particular embodiment, the credibility value is determined as the probability of this normalized distance within a standard normal variable, as can be appreciated by those of ordinary skill within the art. That is, the credibility value is determined from the probability of the normalized distance in relation to a standard normal distribution, which is the Gaussian distribution.
- FIG. 8 shows a graphic 800 illustratively depicting a standard Gaussian distribution in relation to which the credibility value for a pixel can be determined in part 708 of the method 700 of FIG. 7, according to an embodiment of the invention. The x-axis 802 denotes the normalized distance of the infrared value of a pixel from the baseline value, whereas the y-axis 804 denotes probability, which in this instance is the credibility value of the pixel. The curve 806 is the standard Gaussian distribution curve. The baseline value on the x-axis 802 corresponds to a credibility value of 1 within the curve 806. As the normalized distance decreases from the baseline value on the x-axis 802, the credibility value ultimately decreases to zero along the curve 806. Therefore, the credibility value for a pixel is determined using the curve 806 by locating the probability, or credibility value, on the curve 806 that corresponds to the normalized distance for the pixel in question.
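- One plausible reading of FIG. 8 is a standard Gaussian curve rescaled so that the baseline maps to a credibility of exactly 1; the functional form below is an assumption inferred from the figure's description, not an equation given in the text.

```python
import numpy as np

def gaussian_credibility(d_norm):
    """Credibility per FIG. 8: 1 at the baseline, decreasing toward 0 as
    the normalized distance falls below it. Pixels at or above the
    baseline keep full credibility, per the dark-pixel distortion model."""
    d = np.minimum(d_norm, 0.0)
    return np.exp(-0.5 * d ** 2)
```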
- Referring back to FIG. 7, in another embodiment, in lieu of using a standard normal distribution to determine the credibility value for the pixel, parts 710, 712, and 714 may be performed. If the normalized distance determined in part 706 is less than a first threshold multiple of the standard deviation within the noise variance, then the credibility value is set to zero (710). By comparison, if the normalized distance is greater than a second threshold multiple of the standard deviation, then the credibility value is set to one (712). Otherwise, if the normalized distance is between the first and the second threshold multiples of the standard deviation, then the credibility value is set as the normalized value of the normalized distance between these first and second threshold multiples (714).
- For example, the first threshold multiple may be negative three, and the second threshold multiple may be negative one. In this instance, a pixel having a normalized distance that is at least three standard deviations below the baseline value is automatically considered as being defective, such that its credibility value is set to zero in part 710. By comparison, a pixel having a normalized distance that is greater than one standard deviation below the baseline value is automatically considered as being valid, such that its credibility value is set to one in part 712. In the third case, if a pixel has a normalized distance between three standard deviations below the baseline value and one standard deviation below the baseline value, then the normalized distance is normalized again between these two extremes. As such, the resulting credibility value is within the range [0, 1].
infrared representation 112 of theimage 104 is misregistered, or unaligned, in relation to the digital visiblelight representation 110 of theimage 104. - For example, the scanning processes of the
image 104 fixed on theimaging medium 102 indicated by thearrows imaging medium 102 to capture the visiblelight representation 110 of theimage 104. The scanning mechanism may then return to its initial position, and move across theimaging medium 102 again to capture theinfrared representation 112 of theimage 104. The scanning mechanism and/or theimaging medium 102 itself may become misaligned between these two scanning operations. As such, the pixel (x, y) of the visiblelight representation 110 may not corresponding to the pixel (x, y) of theinfrared representation 112, but rather correspond to the pixel (x+a, y+b) of theinfrared representation 112. Such misalignment is desirably corrected a priori using any suitable image registration technique, as can be appreciated by those of ordinary skill within the art. However, these registration techniques can often result in images that are not perfectly aligned, but where the alignment error is within a single pixel. Thus, applying a morphological erosion filter is a technique that can be used to correct defects despite such residual misregistration of pixels between therepresentations - As has been noted, the credibility value determination of the
method 700 ofFIG. 7 can be employed in relation to the credibility-weighted bilateral filtering of themethod 500 ofFIG. 5 . Both themethods image 104 that is fixed on aimaging medium 102 and that is then scanned, as indicated by thearrows light representation 110 thereof and a digitalinfrared representation 112 thereof. However, themethods - For example, in another embodiment, the
methods light representation 110 and a digitalinfrared representation 112 of an image that are acquired or generated in a manner other than scanning. Even more generally, themethods -
- FIG. 9 shows a rudimentary and representative scanning device 900, according to an embodiment of the invention. The scanning device 900 may be a standalone scanner, or an "all-in-one" (AIO) device that includes printing and/or other functionality in addition to scanning functionality. The scanning device 900 is depicted in FIG. 9 as including a scanning mechanism 902 and a processing mechanism 904. As can be appreciated by those of ordinary skill within the art, the scanning device 900 can and typically will include additional components or mechanisms, besides those depicted in FIG. 9.
- The scanning mechanism 902 includes the hardware by which the scanning device 900 is able to capture the image 104 fixed on the imaging medium 102 as both the digital visible light representation 110 and the digital infrared representation 112, as indicated by arrows 106 and 108. For example, the scanning mechanism 902 may include two sets of sensors, one for generating the visible light representation 110, and another for generating the infrared representation 112. Alternatively, the scanning mechanism 902 may generate both the representations 110 and 112 using the same set of sensors.
- The scanning mechanism 902 may be movable or stationary. Where the scanning mechanism 902 is movable, typically the imaging medium 102 remains stationary, and the scanning mechanism 902 is moved over the medium 102 to capture the image 104 as the digital visible light representation 110 and the digital infrared representation 112. Where the scanning mechanism 902 is stationary, typically the imaging medium 102 is moved by the scanning device 900 over the scanning mechanism 902 so that the mechanism 902 captures the image 104 as the representations 110 and 112. Alternatively, a user may move the scanning device 900 over the imaging medium 102 for the scanning mechanism 902 to capture the representations 110 and 112 of the image 104 fixed on the medium 102.
- The processing mechanism 904 may be implemented in software, hardware, or a combination of software and hardware. The processing mechanism 904 is to substantially reduce or remove the effects of dust, scratches, noise, artifacts, defects, and/or other distortions within the digital visible light representation 110 of the image 104. The processing mechanism 904 employs a credibility-weighted bilateral filter, as has been described. Furthermore, the processing mechanism 904 decorrelates the digital visible light representation 110 from the digital infrared representation 112 of the image 104 to generate the credibility values used within the credibility-weighted bilateral filter. Therefore, it is said that the credibility-weighted bilateral filter uses both the representations 110 and 112 of the image 104, insofar as the infrared representation 112 is used to generate the credibility values employed during credibility-weighted bilateral filtering. The processing mechanism 904 can thus be the component within the scanning device 900 that performs the methods that have been described.
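For concreteness, a minimal, unoptimized sketch of such a credibility-weighted bilateral filter for a single-channel floating-point image follows; the function name, parameter defaults, and Gaussian weight forms are illustrative assumptions rather than the exact formulation of the specification.

```python
import numpy as np

def credibility_weighted_bilateral(img, cred, radius=5,
                                   sigma_s=3.0, sigma_r=0.1):
    """Bilateral filter in which each neighbour's usual weight
    (spatial Gaussian times range Gaussian) is further multiplied by
    that neighbour's credibility in [0, 1], so pixels judged to be
    dust, scratches, or other distortions contribute little to the
    filtered output."""
    h, w = img.shape
    img_p = np.pad(img, radius, mode='reflect')
    cred_p = np.pad(cred, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = img_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            cpatch = cred_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight relative to the centre pixel's value.
            rng = np.exp(-((patch - img[y, x])**2) / (2.0 * sigma_r**2))
            wgt = spatial * rng * cpatch
            out[y, x] = (wgt * patch).sum() / (wgt.sum() + 1e-12)
    return out
```

A practical refinement, omitted here for brevity, is to also relax the range term when the center pixel itself has low credibility, since a distorted center value should not anchor the range comparison.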
Claims (8)
1. A scanning device comprising:
a scanning mechanism to scan an image fixed on a medium to generate a digital infrared representation of the image and a digital visible light representation of the image;
a processing mechanism to substantially reduce effects of noise and distortions within the digital visible light representation of the image in one pass,
the processing mechanism at least decorrelating visible light aspects from the infrared representation of the image and employing a one-pass filter that uses both the infrared and the visible light representations of the image.
2. The scanning device of claim 1, wherein the one-pass filter is a credibility-weighted bilateral filter that uses both the infrared and the visible light representations of the image.
3. The scanning device of claim 1, wherein the processing mechanism is to decorrelate the visible light aspects from the infrared representation of the image by:
determining one or more parameters of a model mapping a color channel of the visible light representation to the infrared representation;
for each of a plurality of pixels of the infrared representation of the scanned image,
determining a value corresponding to crosstalk between the pixel within the infrared representation and a corresponding pixel within the visible light representation, based on the model; and,
subtracting the value determined from an original value of the pixel to decorrelate the visible light aspects from the infrared representation as to the pixel.
4. The scanning device of claim 1, wherein the processing mechanism is to employ the credibility-weighted bilateral filter that uses both the infrared and the visible light representations of the image by:
for each of a plurality of pixels of the visible light representation,
determining a credibility value corresponding to a likelihood that a value of the pixel is valid in relation to the image and that the value does not correspond to distortions within the visible light representation of the image;
determining a new value for the pixel by using the bilateral filter, taking into account the credibility values determined for the pixels within the visible light representation.
5. A method for decorrelating a digital visible light representation of an image from a digital infrared representation of the image, comprising:
determining one or more parameters of a model mapping a color channel of the visible light representation to the infrared representation;
decorrelating the digital visible light representation from the digital infrared representation based on the model; and,
outputting the infrared representation of the image from which the visible light representation has been decorrelated.
6. The method of claim 5, wherein decorrelating the digital visible light representation from the digital infrared representation based on the model comprises, for each of a plurality of pixels of the infrared representation of the image,
determining a value corresponding to crosstalk between the pixel within the infrared representation and a corresponding pixel within the visible light representation, based on the model; and,
subtracting the value determined from an original value of the pixel to decorrelate the visible light representation from the infrared representation as to the pixel.
7. The method of claim 5, wherein the method is for decorrelating infrared light absorbed by cyan colorant within the image fixed on the medium from the digital infrared representation of the image, and
wherein determining the parameters of the model mapping the color channel of the visible light representation to the infrared representation comprises determining the parameters of the model mapping a red color channel of the visible light representation to the infrared representation.
8. The method of claim 5, further comprising:
prior to determining the parameters of the model,
converting the infrared representation to a logarithmic domain;
converting the visible light representation as to the color channel to the logarithmic domain; and,
subsequent to decorrelating the digital visible light representation from the digital infrared representation based on the model,
converting the infrared representation from which the visible light representation has been decorrelated back from the logarithmic domain.
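As a rough, non-authoritative illustration of the structure of claims 5 through 8, the following sketch fits a linear model from the logarithmic red channel to the logarithmic infrared channel and subtracts the model's per-pixel prediction; the least-squares fit via np.polyfit and the mean re-centering step are assumptions made for this example, not details recited in the claims.

```python
import numpy as np

def decorrelate_infrared(ir, red, eps=1e-6):
    """Remove visible-light crosstalk (e.g. infrared absorption by
    cyan colorant, which tracks the red channel) from an infrared scan.

    Both channels are converted to the logarithmic domain, a linear
    model mapping log(red) to log(ir) is fitted, the model's per-pixel
    prediction is subtracted, and the result is converted back from
    the logarithmic domain."""
    log_ir = np.log(ir + eps)
    log_red = np.log(red + eps)
    # Least-squares fit of log_ir ~ a * log_red + b over all pixels.
    a, b = np.polyfit(log_red.ravel(), log_ir.ravel(), deg=1)
    crosstalk = a * log_red + b
    # Subtract the predicted visible-light contribution; re-center so
    # the decorrelated channel keeps the original mean level.
    cleaned = log_ir - crosstalk + log_ir.mean()
    return np.exp(cleaned) - eps
```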
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/447,244 US20120257265A1 (en) | 2007-01-16 | 2012-04-15 | One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/623,697 US8180168B2 (en) | 2007-01-16 | 2007-01-16 | One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions |
US13/447,244 US20120257265A1 (en) | 2007-01-16 | 2012-04-15 | One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/623,697 Division US8180168B2 (en) | 2007-01-16 | 2007-01-16 | One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120257265A1 true US20120257265A1 (en) | 2012-10-11 |
Family
ID=39617847
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/623,697 Expired - Fee Related US8180168B2 (en) | 2007-01-16 | 2007-01-16 | One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions |
US13/447,244 Abandoned US20120257265A1 (en) | 2007-01-16 | 2012-04-15 | One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/623,697 Expired - Fee Related US8180168B2 (en) | 2007-01-16 | 2007-01-16 | One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions |
Country Status (1)
Country | Link |
---|---|
US (2) | US8180168B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10003758B2 (en) | 2016-05-02 | 2018-06-19 | Microsoft Technology Licensing, Llc | Defective pixel value correction for digital raw image frames |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW200828983A (en) * | 2006-12-27 | 2008-07-01 | Altek Corp | Method of eliminating image noise |
US8189944B1 (en) * | 2008-03-21 | 2012-05-29 | Hewlett-Packard Development Company, L.P. | Fast edge-preserving smoothing of images |
US8428385B2 (en) * | 2009-06-24 | 2013-04-23 | Flir Systems, Inc. | Non-uniformity error correction with a bilateral filter |
US8854471B2 (en) * | 2009-11-13 | 2014-10-07 | Korea Institute Of Science And Technology | Infrared sensor and sensing method using the same |
US8885890B2 (en) * | 2010-05-07 | 2014-11-11 | Microsoft Corporation | Depth map confidence filtering |
US8532425B2 (en) * | 2011-01-28 | 2013-09-10 | Sony Corporation | Method and apparatus for generating a dense depth map using an adaptive joint bilateral filter |
WO2013133627A1 (en) * | 2012-03-06 | 2013-09-12 | LG Electronics Inc. | Method of processing video signals |
US20160098820A1 (en) * | 2014-10-03 | 2016-04-07 | Raghu Kopalle | System for robust denoising of images |
CN110830779B * | 2015-08-28 | 2022-06-03 | Hangzhou Hikvision Digital Technology Co., Ltd. | An image signal processing method and system |
TWI608447B * | 2015-09-25 | 2017-12-11 | Delta Electronics, Inc. | Stereo image depth map generation device and method |
CN106462957B * | 2016-05-19 | 2019-03-08 | Shenzhen University | Method and system for removing stripe noise from an infrared image |
TWI818063B | 2018-08-21 | 2023-10-11 | Beijing ByteDance Network Technology Co., Ltd. | Mapping improvement for bilateral filter |
CN110570374B * | 2019-09-05 | 2022-04-22 | Hubei Nanbang Chuangdian Technology Co., Ltd. | Processing method for image obtained by infrared sensor |
KR102279867B1 * | 2020-06-02 | 2021-07-21 | Suprema ID Inc. | Method for generating an image from which light noise is removed, and an image generating device using the method |
CN113674319B * | 2021-08-23 | 2024-06-21 | Zhejiang Dahua Technology Co., Ltd. | Target tracking method, system, device and computer storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5266805A (en) * | 1992-05-05 | 1993-11-30 | International Business Machines Corporation | System and method for image recovery |
US5673336A (en) * | 1993-12-23 | 1997-09-30 | International Business Machines Corporation | Automatic cross color elimination |
US6393160B1 (en) * | 1998-03-13 | 2002-05-21 | Applied Science Fiction | Image defect correction in transform space |
US6442301B1 (en) * | 1997-01-06 | 2002-08-27 | Applied Science Fiction, Inc. | Apparatus and method for defect channel nulling |
US6711302B1 (en) * | 1999-10-20 | 2004-03-23 | Eastman Kodak Company | Method and system for altering defects in digital image |
JP2006180268A (en) * | 2004-12-22 | 2006-07-06 | Sony Corp | Image processing apparatus, image processing method, program, and recording medium |
US7200280B2 (en) * | 2001-09-27 | 2007-04-03 | Fuji Photo Film Co., Ltd. | Image processing apparatus |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7352911B2 (en) * | 2003-07-31 | 2008-04-01 | Hewlett-Packard Development Company, L.P. | Method for bilateral filtering of digital images |
US7720303B2 (en) | 2004-04-28 | 2010-05-18 | Hewlett-Packard Development Company, L.P. | Polynomial approximation based image filter methods, systems, and machine-readable media |
US7457477B2 (en) * | 2004-07-06 | 2008-11-25 | Microsoft Corporation | Digital photography with flash/no flash extension |
US7522782B2 (en) | 2005-04-06 | 2009-04-21 | Hewlett-Packard Development Company, L.P. | Digital image denoising |
US7599569B2 (en) * | 2006-01-13 | 2009-10-06 | Ati Technologies, Ulc | Method and apparatus for bilateral high pass filter |
- 2007-01-16 US US11/623,697 patent/US8180168B2/en not_active Expired - Fee Related
- 2012-04-15 US US13/447,244 patent/US20120257265A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US8180168B2 (en) | 2012-05-15 |
US20080170800A1 (en) | 2008-07-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8180168B2 (en) | One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions | |
US6987892B2 (en) | Method, system and software for correcting image defects | |
Young et al. | Fundamentals of image processing | |
US8427559B2 (en) | Image data processing method by reducing image noise, and camera integrating means for implementing said method | |
US6934056B2 (en) | Noise cleaning and interpolating sparsely populated color digital image using a variable noise cleaning kernel | |
DE69424920T2 (en) | Image-dependent sharpness improvement | |
US6137904A (en) | Method and apparatus for assessing the visibility of differences between two signal sequences | |
US7856150B2 (en) | Denoise method on image pyramid | |
US7672528B2 (en) | Method of processing an image to form an image pyramid | |
US20030206231A1 (en) | Method and apparatus for enhancing digital images utilizing non-image data | |
Li et al. | Color filter array demosaicking using high-order interpolation techniques with a weighted median filter for sharp color edge preservation | |
US20040120598A1 (en) | Blur detection system | |
JPH10187994A (en) | Method for evaluating and controlling digital image contrast | |
US10237519B2 (en) | Imaging apparatus, imaging system, image generation apparatus, and color filter | |
Al-Hatmi et al. | A review of Image Enhancement Systems and a case study of Salt & pepper noise removing | |
US7430334B2 (en) | Digital imaging systems, articles of manufacture, and digital image processing methods | |
Tanbakuchi et al. | Adaptive pixel defect correction | |
Koren | The Imatest program: comparing cameras with different amounts of sharpening | |
Bonanomi et al. | I3D: a new dataset for testing denoising and demosaicing algorithms | |
Yu | Colour demosaicking method using adaptive cubic convolution interpolation with sequential averaging | |
US20040169872A1 (en) | Blind inverse halftoning | |
JPH11261740A (en) | Picture evaluating method, its device and recording medium | |
Bonnier et al. | Measurement and compensation of printer modulation transfer function | |
Triantaphillidou | Introduction to image quality and system performance | |
CN118485580B (en) | Image enhancement method and device based on bilateral filtering, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |