WO2004019608A1 - Imaging System and Image Processing Program - Google Patents
- Publication number
- WO2004019608A1 (PCT/JP2003/010614)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- noise
- estimating
- parameter
- imaging system
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/618—Noise processing, e.g. detecting, correcting, reducing or removing noise for random or high-frequency noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/67—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
- H04N25/671—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
Definitions
- the present invention relates to an imaging system and an image processing program for reducing random noise caused by an imaging device system.
- a digitized signal obtained from an image sensor and its associated analog circuit and A/D converter generally contains noise components, which can be broadly divided into fixed pattern noise and random noise.
- the fixed pattern noise is noise mainly caused by an image sensor, such as a defective pixel.
- random noise is generated in an image sensor and an analog circuit, and has characteristics close to white noise characteristics.
- Japanese Patent Application Laid-Open No. 2000-57090 discloses a technique in which a difference value Δ between a pixel of interest and a neighboring pixel is obtained and the obtained difference value is processed using static constant terms, so that noise reduction processing is performed without degrading the original signal, such as edges.
- the noise amount dynamically changes due to factors such as the temperature of the image sensor at the time of photographing, the exposure time, and the gain.
- such a technique using static constant terms cannot adapt to the noise amount at the time of photographing, and its estimation accuracy of the noise amount is therefore poor.
- in such filtering, the frequency characteristics are controlled based on the noise amount, but the filtering is performed based on the signal level so that flat portions and edge portions are processed equally without discrimination; edge portions in regions where the noise amount is estimated to be large are therefore degraded. That is, there is the problem that the processing cannot distinguish the original signal from the noise, and the preservation of the original signal is poor.
- further, since the selection of whether or not to use the moving average method is made by comparison with a given threshold value, it cannot cope with the fact that the noise amount varies with the signal level, and the number of averaged pixels and the selection of the moving average method cannot be optimally controlled. For this reason, accurate noise amount estimation and noise reduction processing cannot be achieved, so noise components remain and the original signal deteriorates.
- to address this, an imaging system according to the present invention has a noise estimating unit for estimating, for each pixel or for each predetermined unit area composed of a plurality of pixels, the amount of noise contained in the digitized signal from an image sensor having a plurality of arranged pixels, and a noise reducing unit for reducing the noise in the signal based on the noise amount estimated by the noise estimating unit.
- FIG. 1 is a block diagram showing the configuration of the imaging system according to the first embodiment of the present invention.
- FIG. 2 is a block diagram illustrating a configuration of a noise estimating unit according to the first embodiment.
- FIG. 3 is a diagram showing an example of the arrangement of the OB region in the first embodiment.
- FIG. 4 is a diagram showing the relationship between the variance of the OB region and the temperature of the image sensor in the first embodiment.
- FIG. 5 is a diagram for explaining the formulation of the noise amount in the first embodiment.
- FIG. 6 is a diagram for explaining parameters used for formulation of a noise amount in the first embodiment.
- FIG. 7 is a block diagram illustrating a configuration of a noise reduction unit according to the first embodiment.
- FIG. 8 is a block diagram showing a configuration of an imaging system according to the second embodiment of the present invention.
- FIG. 9 is a diagram showing a primary color Bayer type filter configuration in the color filter of the second embodiment.
- FIG. 10 is a block diagram showing the configuration of the noise estimating unit in the second embodiment.
- FIG. 11 is a block diagram showing the configuration of the noise reduction unit in the second embodiment.
- FIG. 12 is a flowchart showing noise reduction processing performed by an image processing program in the computer of the second embodiment.
- FIG. 13 is a block diagram illustrating a configuration of an imaging system according to a third embodiment of the present invention.
- FIG. 14 is a diagram showing a primary color Bayer type filter configuration in the color filter of the third embodiment.
- FIG. 15 is a block diagram showing a configuration of a shooting situation estimation unit in the third embodiment.
- FIG. 16 is a diagram for explaining the divided pattern for evaluation photometry in the third embodiment.
- FIG. 17 is a block diagram showing the configuration of the noise estimating unit in the third embodiment.
- FIG. 18 is a diagram for explaining the formulation of the noise amount in the third embodiment.
- FIG. 19 is a diagram for explaining the formulation of the noise amount in the third embodiment.
- FIG. 20 is a diagram for explaining parameters used for the formulation of the noise amount in the third embodiment.
- FIG. 21 is a block diagram showing a configuration of a noise reduction unit in the third embodiment.
- FIG. 22 is a block diagram showing the configuration of the imaging system according to the fourth embodiment of the present invention.
- FIG. 23 is a block diagram showing an example of a configuration of a photographing situation estimating unit according to the fourth embodiment.
- FIG. 24 is a block diagram showing the configuration of the noise estimating unit in the fourth embodiment.
- FIG. 25 is a flowchart showing a part of the noise reduction processing performed by the image processing program in the computer of the fourth embodiment.
- FIG. 26 is a flowchart showing another part of the noise reduction processing performed by the image processing program in the computer of the fourth embodiment.
- FIGS. 1 to 7 show a first embodiment of the present invention.
- FIG. 1 is a block diagram showing a configuration of an imaging system
- FIG. 2 is a block diagram showing a configuration of a noise estimating unit.
- FIG. 3 is a diagram showing an example of the arrangement of the OB region
- FIG. 4 is a diagram showing the relationship between the variance of the OB region and the temperature of the image sensor
- FIG. 5 is a diagram for explaining the formulation of the noise amount.
- FIG. 6 is a diagram for explaining parameters used for formulation of a noise amount
- FIG. 7 is a block diagram showing a configuration of a noise reduction unit.
- this imaging system includes a lens system 1 for forming an image of a subject, and an aperture 2 arranged in the lens system 1 that defines the light beam passing range in the lens system 1;
- a low-pass filter 3 for removing unnecessary high-frequency components from the light flux imaged by the lens system 1, and a CCD 4, which is a black-and-white image sensor that photoelectrically converts the optical subject image formed through the low-pass filter 3 and outputs an electrical image signal;
- a CDS (Correlated Double Sampler) 5 that performs known correlated double sampling on the image signal output from the CCD 4;
- an amplifier 6 for amplifying the signal output from the CDS 5
- an A/D converter 7 for converting the analog image signal amplified by the amplifier 6 into a digital signal;
- an image buffer 8 for temporarily storing the digital image data output from the A/D converter 7;
- a photometric evaluation unit 9 that performs photometric evaluation of the subject based on the image data stored in the image buffer 8 and controls the aperture 2, the CCD 4, and the amplifier 6 based on the result;
- a focus detection unit 10 that performs focus detection based on the image data stored in the image buffer 8 and drives an AF motor 11, described later, based on the detection result;
- an AF motor 11 that, under the control of the focus detection unit 10, drives the focus lens and the like included in the lens system 1;
- a noise estimating unit 13 that, as described later in detail, performs noise estimation based on the image data stored in the image buffer 8;
- a noise reduction unit 12 serving as noise reducing means for reducing noise in the image data read from the image buffer 8 using the estimation result of the noise estimating unit 13;
- and a control unit 16, comprising a microcomputer or the like, that is bidirectionally connected to the noise reduction unit 12, the noise estimating unit 13, the signal processing unit 14, the output unit 15, and the external I/F unit 17, integrally controls the imaging system, and also serves as parameter calculating means and shutter speed calculating means.
- this imaging system is configured so that shooting conditions such as the ISO sensitivity can be set via the external I/F unit 17; after these settings are made, pressing the two-stage shutter button halfway enters the pre-imaging mode.
- the video signal photographed and output by the CCD 4 through the lens system 1, the aperture 2, and the low-pass filter 3 is read out as an analog signal by the known correlated double sampling in the CDS 5.
- the analog signal is amplified by a predetermined amount by the amplifier 6, converted into a digital signal by the A/D converter 7, and transferred to the image buffer 8.
- the video signal in the image buffer 8 is then transferred to the photometry evaluation unit 9 and the focus detection unit 10.
- the photometric evaluation unit 9 obtains the brightness level in the image, considers the set ISO sensitivity and the shutter speed at the limit of camera shake, etc., and sets the aperture value by the aperture 2 and the electronic shutter speed of the CCD 4 and the The amplification factor of the amplifier 6 is controlled.
- the focus detection section 10 detects the edge strength in the image, and controls the AF motor 11 so that the edge strength becomes maximum, thereby obtaining a focused image.
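For illustration, the focus search just described (driving the AF motor so that the in-image edge strength becomes maximum) can be sketched as a hill climb. The edge-strength metric (sum of absolute second differences) and the `capture`/`move` hooks are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def edge_strength(img):
    # Sum of absolute horizontal and vertical second differences, a simple
    # contrast measure; the patent does not specify the exact metric.
    gx = np.abs(np.diff(img.astype(float), n=2, axis=1)).sum()
    gy = np.abs(np.diff(img.astype(float), n=2, axis=0)).sum()
    return gx + gy

def focus_by_hill_climb(capture, move, steps=50):
    # capture() returns an image at the current lens position; move(delta)
    # nudges the focus lens. Both are hypothetical hooks for the AF motor.
    best = edge_strength(capture())
    direction = 1
    for _ in range(steps):
        move(direction)
        s = edge_strength(capture())
        if s < best:
            move(-direction)          # passed the peak: step back
            direction = -direction    # and reverse the search direction
            if direction == 1:        # reversed twice, so we sit at the peak
                break
        else:
            best = s
    return best
```

A real implementation would also filter the metric against noise; this sketch only shows the maximize-edge-strength control loop.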
- next, the main shooting is performed.
- this actual photographing is performed based on the exposure conditions obtained by the photometric evaluation unit 9 and the focusing conditions obtained by the focus detection unit 10, and these shooting conditions are transferred to the control unit 16.
- the video signal is transferred to and stored in the image buffer 8 in the same manner as in the pre-imaging.
- the video signal in the image buffer 8 is transferred to the noise estimating unit 13; the exposure conditions determined at the time of shooting and the shooting conditions such as the ISO sensitivity set via the external I/F unit 17 are also transferred to the noise estimating unit 13 via the control unit 16.
- the noise estimating unit 13 calculates the noise amount for each predetermined unit size, in the present embodiment for each pixel (in pixel units), based on this information and the video signal, and transfers the calculated noise amount to the noise reduction unit 12.
- the calculation of the noise amount in the noise estimating unit 13 is performed in synchronization with the processing of the noise reducing unit 12 based on the control of the control unit 16.
- the noise reduction unit 12 performs noise reduction processing on the video signal in the image buffer 8 based on the noise amount calculated by the noise estimating unit 13, and transfers the processed video signal to the signal processing unit 14.
- the signal processing unit 14 performs known enhancement processing and compression processing on the video signal after noise reduction, based on the control of the control unit 16, and transfers the processed video signal to the output unit 15.
- the output unit 15 records and stores the received video signal on a memory card or the like.
- the noise estimating unit 13 includes an OB region extracting unit 21 that, under the control of the control unit 16, extracts from the video signal stored in the image buffer 8 the signal of the OB (optical black) region provided, for example as shown in FIG. 3, on the right side of the image area of the CCD 4;
- a first buffer 22 for storing the signal of the OB region extracted by the OB region extracting unit 21;
- a variance calculating unit 23 serving as variance calculating means, which reads out the signal of the OB region stored in the first buffer 22, calculates its variance value, and corrects the variance value for the amplification amount of the amplifier 6 using the information on the exposure conditions transferred from the control unit 16;
- a temperature estimating ROM 25, which is part of the estimating means;
- a temperature estimating unit 24 serving as parameter estimating means and temperature estimating means, which obtains the temperature of the image sensor based on the variance value output from the variance calculating unit 23 and the information from the temperature estimating ROM 25;
- a local region extracting unit 26 serving as signal value calculating means, which extracts a local region of a predetermined size at a predetermined position from the stored video signal;
- a second buffer 27 for storing the signal of the local region extracted by the local region extracting unit 26;
- an average calculating unit 28 serving as parameter calculating means and signal value calculating means, which reads out the signal of the local region stored in the second buffer 27, calculates its average value, and outputs it as the signal value level of the pixel of interest;
- a gain calculating unit 29 serving as parameter calculating means and gain calculating means, which calculates the amplification amount of the amplifier 6 based on the information on the exposure conditions (ISO sensitivity, exposure information, white balance information, etc.) transferred from the control unit 16;
- a standard value providing unit 30 serving as providing means for providing a standard value when any parameter is omitted;
- a coefficient calculating unit 31 serving as noise amount calculating means and coefficient calculating means, which calculates the coefficients of a predetermined equation for estimating the noise amount of the pixel of interest, based on the temperature and amplification amount output from the temperature estimating unit 24, the gain calculating unit 29, or the standard value providing unit 30, and on the shutter speed output from the control unit 16 or the standard value providing unit 30;
- a function calculating unit 33 serving as noise amount calculating means and function calculating means, which calculates the noise amount using the coefficients output from the coefficient calculating unit 31 and a function formulated as described later;
- and an upper limit setting unit 34 serving as upper limit value setting means, which limits the noise amount output from the function calculating unit 33 so that it does not exceed a predetermined threshold value, and outputs the result to the noise reduction unit 12.
- since the processing of the noise reduction unit 12 described later is separated into the horizontal direction and the vertical direction, the local region extraction unit 26 performs extraction while sequentially scanning the entire image, for example in 4 × 1 size units for horizontal processing and in 1 × 4 size units for vertical processing.
- the processing by the local area extraction unit 26 is performed in synchronization with the processing by the noise reduction unit 12.
- the upper limit setting unit 34 is provided in consideration of a case where the reduction processing for the theoretical noise amount becomes subjectively excessive. In other words, if the noise level is large, trying to remove it completely may damage the original signal and subjectively feel that the image quality has deteriorated. Therefore, even if the noise component remains, the preservation of the original signal is prioritized, and the total image quality is improved.
- the function of the upper limit setting unit 34 can be stopped by the control unit 16 through an operation from the external I/F unit 17.
- the control unit 16 is bidirectionally connected to the OB region extracting unit 21, the variance calculating unit 23, the temperature estimating unit 24, the local region extracting unit 26, the average calculating unit 28, the gain calculating unit 29, the standard value providing unit 30, the coefficient calculating unit 31, the function calculating unit 33, and the upper limit setting unit 34, and controls them.
- the temperature of the image sensor increases monotonically, along a curve, as the variance in the OB region increases.
- the random noise in the OB region is calculated as a variance value, and the relationship between this variance value and the temperature of the image sensor is measured in advance and stored in the temperature estimating ROM 25.
- the temperature estimating unit 24 can thus estimate the temperature of the image sensor, that is, the CCD 4, from the variance value calculated by the variance calculating unit 23, using the correspondence stored in the temperature estimating ROM 25.
- in the above, the temperature of the image sensor is assumed to be the same at any position on the element, and only a single temperature is obtained.
- however, the present invention is not limited to this, and the local temperature at each point on the element may also be obtained.
- for example, OB regions may be placed on all four sides of the image area, and the variance value for a specific block in the image may be obtained by calculating the variance values of the OB regions located at the corresponding upper, lower, left, and right edges and linearly interpolating them.
- this makes it possible to perform highly accurate temperature estimation even when the temperature of the image sensor becomes uneven.
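Concretely, the variance-based temperature lookup and the four-edge interpolation can be sketched as follows. The calibration numbers standing in for the temperature estimating ROM 25, the gain-squared correction, and the blending weights are all illustrative assumptions:

```python
import numpy as np

# Hypothetical calibration: variance of the OB (optical black) signal vs.
# sensor temperature, measured in advance and monotone increasing as in
# FIG. 4. These numbers stand in for the temperature estimating ROM 25.
OB_VARIANCE = np.array([2.0, 3.5, 6.0, 10.0, 17.0])
TEMPERATURE = np.array([0.0, 10.0, 20.0, 30.0, 40.0])

def estimate_temperature(ob_pixels, gain):
    # Variance of the OB signal; dividing by gain**2 removes the
    # amplifier's contribution to the variance (an assumed correction form).
    var = np.var(np.asarray(ob_pixels, dtype=float)) / gain ** 2
    # Linear interpolation in the calibration table (the ROM lookup).
    return float(np.interp(var, OB_VARIANCE, TEMPERATURE))

def block_temperature(t_top, t_bottom, t_left, t_right, fy, fx):
    # When OB regions sit on all four sides, the temperature of a block at
    # fractional image position (fy, fx) can be linearly interpolated from
    # the four edge estimates (an assumed blending scheme).
    return 0.5 * ((1 - fy) * t_top + fy * t_bottom) + \
           0.5 * ((1 - fx) * t_left + fx * t_right)
```

With a real sensor, `OB_VARIANCE`/`TEMPERATURE` would come from bench measurements of the specific CCD and amplifier chain.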
- N = A·L^B + C ... (1)
- here, A, B, and C are constant terms: a constant term is added to a power function of the signal level L.
- formulating the constant terms A, B, and C as three functions of the temperature T and the gain G gives a(T, G), b(T, G), and c(T, G), yielding Equation (2).
- N = a(T, G)·L^b(T, G) + c(T, G) ... (2)
- FIG. 5B plots the curves represented by Equation (2) for a plurality of temperatures T (temperatures T1 to T3 in the illustrated example) and a plurality of gains G (1×, 2×, and 4× in the illustrated example).
- in each plot, the independent variable is the signal value level L and the dependent variable is the noise amount N.
- the temperature T, which is a parameter, is plotted on a coordinate axis orthogonal to these two variables.
- the change in the curve shape due to the gain parameter G is represented by drawing a plurality of curves in each plane.
- FIG. 6A shows the outline of the characteristics of the function a(T, G), FIG. 6B shows that of the function b(T, G), and FIG. 6C shows that of the function c(T, G).
- since each of these is a two-variable function with the temperature T and the gain G as independent variables, FIGS. 6A to 6C would each plot as a curved surface in three-dimensional coordinate space; here, however, instead of showing a specific surface shape, a rough change in the characteristics is shown using curves.
- by giving the temperature T and the gain G to these three functions, the constant terms A, B, and C are output.
- the specific shapes of these functions can be easily obtained by measuring the characteristics of the imaging device system including the CCD 4 and the amplifier 6 in advance.
- therefore, a correction coefficient d(S) using the shutter speed S as a parameter is introduced, and the correction is formulated as in Equation (3) by multiplying Equation (2) by this coefficient.
- N = {a(T, G)·L^b(T, G) + c(T, G)}·d(S) ... (3)
- the function shape of this correction coefficient d(S) can be obtained by measuring the characteristics of the imaging device system in advance, and is, for example, as shown in FIG. 6D.
- FIG. 6D shows the increment D of the noise amount with respect to the shutter speed S.
- the noise amount increment D has the property of increasing rapidly when the shutter speed S becomes smaller than a certain threshold S_TH (long exposure). Therefore, d(S) can be switched depending on whether the shutter speed S is above or below this threshold S_TH; in particular, the processing can be simplified by using a fixed coefficient for short exposures.
- the four functions a (T, G), b (T, G), c (T, G) and d (S) as described above are recorded in the parameter ROM 32.
- the correction for the shutter speed need not always be provided as a function, but may be provided as other means, for example, as a table.
- the coefficient calculating unit 31 receives the dynamically obtained (or obtained from the standard value providing unit 30) temperature T, gain G, and shutter speed S as input parameters, and calculates the coefficients A, B, C, and D using the four functions recorded in the parameter ROM 32.
- the function calculating unit 33 applies the coefficients A, B, C, and D calculated by the coefficient calculating unit 31 to Equation (3) above to determine the shape of the function for calculating the noise amount N, and calculates the noise amount N based on the signal value level L output from the average calculating unit 28 via the coefficient calculating unit 31.
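The formulated model of Equations (1) to (3), the long-exposure correction, and the role of the upper limit setting unit 34 can be sketched together. The coefficient shapes a, b, c and the form of d(S) below are illustrative stand-ins for characteristics that would be measured from the real CCD/amplifier system and stored in the parameter ROM 32:

```python
import numpy as np

# Illustrative coefficient functions of temperature T and gain G. In the
# patent these shapes are measured from the actual imaging system in
# advance; the forms below are assumptions chosen only to increase with
# T and G.
def a(T, G): return 0.02 * G * (1.0 + 0.01 * T)
def b(T, G): return 0.55 + 0.001 * T
def c(T, G): return 0.5 * G * (1.0 + 0.02 * T)

def d(S, S_th=30.0, k=0.05):
    # FIG. 6D: the noise increment rises sharply once the shutter speed S
    # falls below a threshold S_th (long exposure); for faster shutter
    # speeds a fixed coefficient of 1 suffices. S is treated here as the
    # reciprocal of the exposure time (e.g. 250 for 1/250 s), an assumption.
    return 1.0 if S >= S_th else 1.0 + k * (S_th / S - 1.0)

def noise_amount(L, T, G, S, upper_limit=None):
    # Equation (3): N = {a(T,G) * L^b(T,G) + c(T,G)} * d(S)
    N = (a(T, G) * np.power(L, b(T, G)) + c(T, G)) * d(S)
    if upper_limit is not None:
        # Role of the upper limit setting unit 34: cap the estimate so
        # that the subsequent noise reduction never becomes excessive.
        N = np.minimum(N, upper_limit)
    return float(N)
```

The estimate grows with signal level, temperature, gain, and exposure time, matching the qualitative behavior of FIGS. 5 and 6.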
- the parameters such as the temperature T, the gain G, and the shutter speed S do not always need to be obtained for each photographing.
- for example, the temperature T stabilizes after a certain period of time has elapsed since the power was turned on; in that case, the control unit 16 stores the temperature information calculated by the temperature estimating unit 24 in the standard value providing unit 30, and the standard value providing unit 30 sets and outputs the stored value as a standard parameter.
- this makes it possible to speed up processing and save power. The standard value providing unit 30 can likewise output standard values for any other required parameters.
- the noise reduction unit 12 includes a horizontal line extracting unit 41 that sequentially extracts the video signal from the image buffer 8 in units of horizontal lines;
- a first smoothing unit 42 serving as smoothing means, which scans the horizontal-line video signal in pixel units and performs known hysteresis smoothing using the threshold value from a threshold setting unit 46, described later, as the noise amount;
- a buffer 43 that stores a video signal for one screen by successively accumulating the horizontal lines smoothed by the first smoothing unit 42;
- a vertical line extracting unit 44 that, after the video signal for one screen has accumulated in the buffer 43, sequentially extracts the video signal from the buffer 43 in units of vertical lines;
- a second smoothing unit 45 serving as smoothing means, which performs known hysteresis smoothing on the vertical-line video signal using the threshold value from the threshold setting unit 46 as the noise amount, and sequentially outputs the result to the signal processing unit 14;
- and a threshold setting unit 46 serving as threshold setting means, which obtains the amplitude value of the noise in pixel units as a threshold (small-amplitude value) from the noise amount estimated by the noise estimating unit 13, in accordance with the horizontal line extracted by the horizontal line extracting unit 41 or the vertical line extracted by the vertical line extracting unit 44, and outputs it to the first smoothing unit 42 and the second smoothing unit 45.
- the hysteresis smoothing in the first and second smoothing units 42 and 45 is performed, under the control of the control unit 16, in synchronization with the operation of the noise estimating unit 13 and the threshold setting unit 46.
- the control unit 16 is also bidirectionally connected to the horizontal line extracting unit 41, the vertical line extracting unit 44, and the threshold setting unit 46, and controls them.
- the noise amount is estimated in pixel units.
- the present invention is not limited to this.
- for example, the noise amount may be estimated for each predetermined unit area such as 2 × 2 pixels or 4 × 4 pixels. In this case, the noise estimation accuracy is reduced, but there is the advantage that faster processing is possible.
- as described above, since the noise amount is estimated for each pixel or for each unit area and the noise reduction processing is performed in accordance with the local noise amount, optimal noise reduction is possible from bright areas to dark areas, and a high-quality image can be obtained.
- since various parameters related to the noise amount are dynamically determined for each photographing and the noise amount is calculated from these parameters, highly accurate estimation of the noise is possible.
- since the noise amount is set as a threshold value, signals below this threshold are removed as noise and signals above it are preserved as the original signal, so a high-quality image in which only the noise is reduced, without degrading edge portions, can be obtained.
- since the calculated noise amount is limited so as not to exceed a predetermined upper limit, excessive noise reduction processing is prevented, and the preservation of the original signal, such as edge components, can be secured. Whether or not to apply the upper limit can be set by operation, so the user can select whichever setting gives subjectively better image quality.
- since the signal level of the pixel of interest is obtained by averaging over the area near the pixel of interest, the effect of noise components can be reduced and the noise amount can be estimated with high accuracy.
- since the temperature of the image sensor is estimated from the variance value in the OB (optical black) region of the image sensor and used as a parameter for estimating the noise amount, the noise amount can be estimated with high accuracy by dynamically adapting to temperature changes at the time of shooting. Moreover, since the OB region is used, a low-cost imaging system can be realized.
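One way to realize this temperature estimation, as a hedged sketch: dark (thermal) noise variance in the optical black region grows with sensor temperature, so a pre-measured variance-to-temperature calibration table can be inverted by interpolation. The calibration values below are purely illustrative:

```python
def estimate_sensor_temperature(ob_pixels, calibration):
    """Estimate sensor temperature from the variance of optical-black
    (OB) pixels. `calibration` maps measured dark-noise variance to
    temperature in degrees C (values are assumptions, to be measured
    per sensor); intermediate variances are linearly interpolated."""
    n = len(ob_pixels)
    mean = sum(ob_pixels) / n
    var = sum((p - mean) ** 2 for p in ob_pixels) / n
    points = sorted(calibration.items())  # [(variance, temp_C), ...]
    if var <= points[0][0]:
        return points[0][1]
    for (v0, t0), (v1, t1) in zip(points, points[1:]):
        if var <= v1:
            return t0 + (t1 - t0) * (var - v0) / (v1 - v0)
    return points[-1][1]
```

In practice the calibration table would be built by measuring OB variance at known temperatures for the specific image sensor.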
- since the gain at the time of shooting is determined based on the ISO sensitivity, exposure information, and white balance information and used as a parameter for estimating the noise amount, the system dynamically adapts to changes in gain during shooting, making it possible to estimate the noise amount with high accuracy.
- since the correction amount for the noise is determined according to the shutter speed used, the system dynamically adapts to the shutter speed at the time of shooting, making it possible to estimate the noise amount with high accuracy even for the noise that increases during long exposures.
- since a standard value is assigned to any parameter that could not be obtained at the time of shooting, the noise amount calculation coefficients are obtained together with the parameters that were obtained, and the noise amount is calculated from these coefficients, the noise amount can be estimated even when a certain parameter is unavailable, and a stable noise reduction effect can be obtained.
- the required memory amount is small and the cost can be reduced.
- the noise amount can be adaptively reduced in response to dynamic changes, and a high-quality image can be obtained.
- FIGS. 8 to 12 show a second embodiment of the present invention.
- FIG. 8 is a block diagram showing the configuration of an image pickup system.
- FIG. 9 is a diagram showing a primary color Bayer type filter in the color filter.
- Fig. 10 is a block diagram showing the configuration of the noise estimation unit,
- Fig. 11 is a block diagram showing the configuration of the noise reduction unit, and
- Fig. 12 is a flowchart showing the noise reduction processing performed by an image processing program on a computer.
- the imaging system according to the second embodiment has, in addition to the configuration of the first embodiment described above, for example, a primary color Bayer type color filter 51 disposed in front of the CCD 4, and a temperature sensor 52 arranged near the CCD 4 and constituting parameter calculating means for measuring the temperature of the CCD 4 in real time and outputting the measurement result to the control unit 16,
- the pre-WB unit 53 that performs simple white balance detection based on the video signal stored in the image buffer 8 and controls the amplifier 6 based on the detection result, and the image buffer 8
- a color signal separation unit 54 serving as separation means for reading out the stored video signal, separating the color signal, and outputting the separated signal to the noise reduction unit 12 and the noise estimation unit 13.
- the pre-WB section 53 and the color signal separation section 54 are bidirectionally connected to the control unit 16, which controls them.
- the signal flow in the imaging system as shown in FIG. 8 is basically the same as in the first embodiment described above, and only different parts will be described.
- the subject image via the color filter 51 is captured by the CCD 4 and output as a video signal.
- This video signal is processed as described in the first embodiment, and is stored in the image buffer 8 as a digital video signal.
- the video signal stored in the image buffer 8 is transferred to the photometric evaluation unit 9 and the in-focus point detection unit 10 and also to the pre-WB unit 53.
- the pre-WB unit 53 calculates a simple white balance coefficient by integrating signals of a predetermined luminance level in the video signal for each color signal, and transfers it to the amplifier 6.
- the amplifier 6 performs white balance adjustment by multiplying each color signal by a different gain using the simple white balance coefficient received from the pre-WB section 53.
- the main photographing is performed based on the exposure condition obtained by the photometric evaluation unit 9, the focusing condition obtained by the focus detection unit 10, and the white balance coefficient obtained by the pre-WB unit 53, and these photographing conditions are transferred to the control unit 16.
- the video signal obtained by the main shooting is stored in the image buffer 8, and then transferred to the color signal separation unit 54, where the video signal is separated for each color of the color filter.
- the filter configuration of the color filter 51 arranged in front of the CCD 4 is, for example, a primary color Bayer type as shown in FIG. 9; that is, in each 2×2 pixel block, green (G1, G2) filters are placed at one pair of diagonal positions, and red (R) and blue (B) filters at the remaining diagonal positions. Note that the green filters G1 and G2 have the same optical characteristics, but are distinguished here as G1 and G2 for convenience of processing.
- the color signal separation section 54 separates the video signal in the image buffer 8 according to the four types of color filters R, G1, G2, and B. Based on the control of the control section 16, this processing is performed in synchronization with the processing of the noise reduction section 12 and the processing of the noise estimation section 13.
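The separation into four color planes can be sketched as follows (illustrative only; the phase of the Bayer pattern, i.e. which corner holds G1, is an assumption, and real implementations would keep the spatial coordinates of each sample):

```python
def separate_bayer(raw, width, height):
    """Separate a primary-color Bayer mosaic into four per-color lists.
    Assumed layout per 2x2 block (one possible phase):
        G1 R
        B  G2
    `raw` is a flat, row-major list of pixel values."""
    planes = {"R": [], "G1": [], "G2": [], "B": []}
    for y in range(height):
        for x in range(width):
            v = raw[y * width + x]
            if y % 2 == 0:
                planes["G1" if x % 2 == 0 else "R"].append(v)
            else:
                planes["B" if x % 2 == 0 else "G2"].append(v)
    return planes
```

Each plane can then be fed independently to the noise estimation and noise reduction stages, as the text describes.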
- each color signal separated by the color signal separation unit 54 is transferred to the noise estimation unit 13, where the noise amount is estimated as described above; using this estimation result, noise reduction processing is performed in the noise reduction unit 12, and the processed color signals are integrated and transferred to the signal processing unit 14.
- the subsequent operation is the same as in the first embodiment described above.
- the basic configuration of the noise estimating unit 13 is the same as that shown in FIG. 2 for the first embodiment described above, and components having the same functions are denoted by the same names and reference numerals.
- the noise estimation unit 13 includes a local region extraction unit 26 that extracts a local region of a predetermined size at a predetermined position from each color signal output from the color signal separation unit 54, a buffer 61 that stores the color signal of the local region extracted by the local region extraction unit 26, a gain calculation unit 29 serving as parameter calculating means that calculates the amplification amount of the amplifier 6 based on the information on the exposure conditions and the white balance coefficient transferred from the control unit 16, a standard value assigning unit 30 that assigns a standard value when any parameter is omitted, and an average variance calculation unit 62 that reads the signal from the buffer 61 and calculates its average value and variance value.
- it further includes a lookup table unit 63, serving as noise amount calculating means and lookup table means, which records as a lookup table, constructed by the same means as in the first embodiment described above, the relationship between the noise amount and the shutter speed output from the control unit 16 or the standard value assigning unit 30, the information on the temperature of the image sensor output from the temperature sensor 52 or the standard value assigning unit 30, the amplification amount output from the gain calculation unit 29 or the standard value assigning unit 30, and the signal value level output from the average variance calculation unit 62 or the standard value assigning unit 30.
- the noise amount obtained by the lookup table unit 63 is transferred to the noise reduction unit 12.
- the processing of the local region extraction unit 26 is performed in synchronization with the processing of the noise reduction unit 12, and the processing of the noise reduction unit 12 described below is performed in block units.
- the extraction is performed while sequentially scanning the entire image in units of, for example, 4×4 pixels.
- the control unit 16 is bidirectionally connected to the local region extraction unit 26, the average variance calculation unit 62, the gain calculation unit 29, the standard value assigning unit 30, and the lookup table unit 63, and controls them.
- the noise reduction unit 12 includes a size setting unit 74 serving as control value setting means that sets a filter size based on the noise amount estimated by the noise estimation unit 13,
- a local region extraction unit 71 that extracts, from each color signal output from the color signal separation unit 54, a pixel block of the filter size set by the size setting unit 74 containing the target pixel (for example, with the target pixel at its center),
- a filtering unit 72 serving as smoothing means that reads the coefficients for the corresponding filter size from the coefficient ROM 75 based on the filter size set by the size setting unit 74 and performs known filter processing for smoothing on the pixel block extracted by the local region extraction unit 71, and
- a buffer 73 in which the filtered color signals output from the filtering unit 72 are stored for all colors in correspondence with the signal output positions of the CCD 4.
- depending on the noise amount estimated by the noise estimation unit 13, the size setting unit 74 selects a filter size from, for example, 1×1 pixel to 9×9 pixels: a small size when the noise amount is small, and a large size when it is large.
- this filter size is a control value for controlling the frequency characteristic of the smoothing processing; filtering processing (smoothing processing) that reduces a specific frequency band in the video signal is thereby performed according to the frequency characteristic of the noise.
- the size setting unit 74 also receives variance value information on the signal value level near the pixel of interest from the control unit 16; when the variance value is small, the neighborhood of the pixel of interest is identified as a flat region, and when it is large, as an edge region. Based on this identification result, the filter size is left uncorrected for a flat region and is corrected for an edge region.
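The size selection with flat/edge correction can be sketched as below. The step scaling (`noise_per_step`) and the choice to shrink the filter by one step in edge regions are illustrative assumptions; the text only states that the size is left uncorrected in flat regions and corrected in edge regions:

```python
def select_filter_size(noise_amount, variance, edge_threshold,
                       min_size=1, max_size=9, noise_per_step=10.0):
    """Choose an odd smoothing-filter size between min_size x min_size
    and max_size x max_size: larger estimated noise -> larger filter.
    If the local variance marks an edge region, the size is reduced
    (an assumed correction) so edges are not smoothed away; in flat
    regions the size is left as-is."""
    steps = int(noise_amount / noise_per_step)
    size = min(min_size + 2 * steps, max_size)   # odd sizes: 1,3,5,7,9
    if variance > edge_threshold and size > min_size:
        size -= 2                                # edge region: smaller filter
    return size
```

For example, a moderate noise amount in a flat area keeps a 5x5 filter, while the same noise on an edge drops to 3x3.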
- the control unit 16 is bidirectionally connected to the local area extraction unit 71, the filtering unit 72, and the size setting unit 74, and controls them.
- alternatively, the video signal output from the CCD 4 may be output as unprocessed Raw data, with information such as the temperature, gain, and shutter speed at the time of shooting added from the control unit 16 as header information.
- the Raw data with this header information added may then be output to a processing device such as a computer and processed there by software.
- step S1 all color signals composed of Raw data and header information including information such as temperature, gain, and shutter speed are read.
- the Raw data is separated into color signals (step S2), and scanning is performed individually for each color signal (step S3).
- a local area of a predetermined size, for example 4×4 pixels, is extracted centering on the target pixel (step S4).
- an average value, which is the signal value level of the pixel of interest, and a variance value, which is used to distinguish between flat regions and edge regions, are calculated (step S5).
- in step S6, parameters such as temperature, gain, and shutter speed are obtained from the read header information; for any parameter that is missing, a predetermined standard value is assigned (step S6).
- the noise amount is calculated using the lookup table, based on the signal value level calculated in step S5 and the parameters such as temperature, gain, and shutter speed set in step S6 (step S7).
- a filtering size is determined based on the variance value calculated in step S5 and the noise amount calculated in step S7 (step S8).
- step S9 an area corresponding to the filtering size determined in step S8 is extracted (step S9).
- in step S10, coefficients corresponding to the filter size determined in step S8 are read (step S10).
- smoothing filtering is performed using the filter size obtained in step S8 and the coefficients obtained in step S10 (step S11).
- in step S12, the smoothed signals are sequentially output (step S12), and it is determined whether scanning of the entire screen has been completed for one color (step S13); if not, the process returns to step S3 and the above processing is repeated until it is completed.
- it is then determined whether the processing has been completed for all the color signals; if not, the process returns to step S2 to perform the above processing, and if completed, this processing ends.
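The overall flow of steps S2 through S13 can be sketched as nested loops over colors and pixels. This is a structural sketch only; `estimate_noise` and `smooth_pixel` stand in for the lookup-table steps (S5 to S7) and the filtering steps (S8 to S11), and are assumptions rather than the patent's concrete procedures:

```python
def noise_reduce_raw(color_planes, header, estimate_noise, smooth_pixel):
    """Sketch of the flowchart: each separated color signal is scanned
    pixel by pixel; for each pixel a noise amount is estimated from the
    local signal level and the header parameters (temperature, gain,
    shutter speed), and smoothing is applied accordingly."""
    result = {}
    for color, plane in color_planes.items():          # steps S2/S3
        out = []
        for i, level in enumerate(plane):              # scan one color
            noise = estimate_noise(level, header)      # steps S5-S7
            out.append(smooth_pixel(plane, i, noise))  # steps S8-S11
        result[color] = out                            # step S12
    return result                                      # loop checks: S13 onward
```

With a pass-through smoother the input is returned unchanged, which makes the control flow easy to verify in isolation.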
- the present invention is not limited to this.
- the present invention can be similarly applied to a case where the color filter 51 is a complementary color filter.
- the present invention can be similarly applied to the case of a double CCD or a triple CCD.
- substantially the same effects as those of the first embodiment described above can be obtained, and a signal from an imaging device having a color filter is separated into color signals for each color filter.
- the noise amount is estimated for each pixel or unit area, and noise reduction processing is performed in accordance with the local noise amount, so that optimal noise reduction is performed from bright to dark areas. And high-quality images can be obtained.
- it can be applied to various imaging systems, such as primary color or complementary color filters and single-, two-, or three-sensor configurations.
- the noise amount can be adaptively reduced in response to dynamic changes, and high-quality images can be obtained.
- a high-quality image can be obtained by appropriately reducing the amount of noise in the image.
- FIGS. 13 to 21 show a third embodiment of the present invention.
- FIG. 13 is a block diagram showing the configuration of an imaging system.
- FIG. 14 is a diagram showing a primary color Bayer type filter in the color filter,
- Fig. 15 is a block diagram showing the configuration of the imaging situation estimating unit,
- Fig. 16 is a diagram for explaining the evaluative photometry division pattern and evaluation parameters,
- Fig. 17 is a block diagram showing the configuration of the noise estimating unit,
- Figs. 18 and 19 are diagrams for explaining the formulation of the noise amount,
- Fig. 20 is a diagram for explaining functions used for the formulation of the noise amount, and
- FIG. 21 is a block diagram illustrating the configuration of the noise reduction unit.
- this imaging system includes a lens system 101 for forming a subject image, an aperture 102 disposed in the lens system 101 for limiting the light beam passing through it, a low-pass filter 103, a color filter 104, and a CCD 105 that photoelectrically converts the formed optical image,
- a CDS (Correlated Double Sampling) unit 107 that performs correlated double sampling on the video signal output from the CCD 105, and an amplifier 108 that amplifies the signal output from the CDS 107,
- an A/D converter 109 that converts the analog video signal amplified by the amplifier 108 into a digital signal,
- an image buffer 110 for temporarily storing the digital image data output from the A/D converter 109,
- a photometric evaluation unit 111 that performs photometric evaluation of the subject based on the image data stored in the image buffer 110,
- an in-focus point detection unit 113 that detects the in-focus point based on the image data stored in the image buffer 110 and drives the AF motor 114, described later, based on the detection result,
- the AF motor 114, which drives the focus lens and the like included in the lens system 101 under the control of the in-focus point detection unit 113,
- a color signal separation unit 115 that reads out the video signal stored in the image buffer 110 and separates it into color signals, and a noise estimation unit 117, described in detail later, that estimates the noise amount from the image data output from the color signal separation unit 115,
- a correction unit 118 serving as correction means for correcting the estimation result obtained by the noise estimation unit 117 using the estimation result obtained by the photographing situation estimating unit 116,
- a noise reduction unit 119 serving as noise reducing means that reduces the noise of the image data output from the color signal separation unit 115 using the estimated noise corrected by the correction unit 118,
- a signal processing unit 120 that processes the image data output from the noise reduction unit 119,
- an output unit 121 for outputting the image data from the signal processing unit 120 to be recorded on a memory card or the like, and an external I/F unit 123 providing an interface to a power switch, a shutter button, and a mode switch for switching between various shooting modes.
- this imaging system is configured so that shooting conditions such as the ISO sensitivity can be set via the external I/F unit 123; after these settings are made, the pre-imaging mode is entered by half-pressing the shutter button, which is a two-stage push button switch.
- the video signal photographed by the CCD 105 via the lens system 101, the aperture 102, the low-pass filter 103, and the color filter 104 is subjected to known correlated double sampling in the CDS 107 and read out as an analog signal.
- the analog signal is amplified by a predetermined amount by the amplifier 108, converted into a digital signal by the A/D converter 109, and transferred to the image buffer 110.
- the video signal in the image buffer 110 is then transferred to the photometry evaluation unit 111, pre-WB unit 112, and focus detection unit 113.
- the photometric evaluation unit 111 calculates the luminance level in the image, divides the image into multiple areas, and, taking the set ISO sensitivity and the camera-shake limit into consideration, combines the luminance levels of the areas to calculate an appropriate exposure value; the aperture value of the aperture 102, the electronic shutter speed of the CCD 105, the amplification factor of the amplifier 108, and the like are then controlled so as to obtain this appropriate exposure value.
- the pre-WB unit 112 calculates a simple white balance coefficient by integrating signals of a predetermined luminance level in the video signal for each color signal, and transfers it to the amplifier 108, which multiplies each color signal by a different gain to perform white balance adjustment.
- the focus detection unit 113 detects the edge intensity in the image, and controls the AF motor 114 so that the edge intensity is maximized to obtain a focused image.
- the main shooting is performed based on the exposure conditions obtained by the photometric evaluation unit 111, the white balance coefficient obtained by the pre-WB unit 112, and the focusing condition obtained by the in-focus point detection unit 113, and these photographing conditions are transferred to the control unit 122.
- the video signal is transferred to the image buffer 110 and stored in the same manner as in the pre-imaging.
- the video signal in the image buffer 110 is transferred to the color signal separation unit 115 and separated for each color of the color filter.
- the filter configuration of the color filter 104 arranged in front of the CCD 105 is, for example, a primary color Bayer type as shown in FIG. 14;
- in each 2×2 pixel block, green (G1, G2) filters are arranged at one pair of diagonal positions, and
- red (R) and blue (B) filters are arranged at the remaining diagonal positions.
- the green filters G1 and G2 have the same optical characteristics, but are distinguished here as G1 and G2 for convenience of processing.
- the color signal separation section 115 separates the video signal in the image buffer 110 according to these four types of color filters R, G1, G2, and B.
- the separation operation is performed in synchronization with the processing of the noise estimating unit 117 and the processing of the noise reducing unit 119 based on the control of the control unit 122.
- control unit 122 sends the photometry information and focusing information at the time of imaging from the photometry evaluation unit 111, pre-WB unit 112, and focus detection unit 113 to the shooting condition estimation unit 116. Forward.
- the photographing state estimating unit 116 estimates the photographing state of the entire screen, for example as landscape, portrait, close-up, or night scene photographing, based on the transferred information, and transfers the result to the correction unit 118. This estimation of the photographing state by the photographing state estimating unit 116 is performed once for each shot.
- the noise estimation unit 117 reads each color signal from the color signal separation unit 115 based on the control of the control unit 122.
- shooting conditions such as the exposure condition obtained by the photometric evaluation unit 111 and the ISO sensitivity set via the external I/F unit 123 are also transferred to the noise estimation unit 117 via the control unit 122.
- the noise estimating unit 117 calculates a noise amount for each unit of a predetermined size, in the present embodiment for each pixel (in pixel units), based on the above information and each color signal, and transfers the calculated noise amount to the correction unit 118.
- the correction unit 118 corrects the noise amount output from the noise estimation unit 117 based on the shooting condition output from the shooting condition estimation unit 116, and transfers the corrected noise amount to the noise reduction unit 119.
- the processing in the noise estimation unit 117 and the processing in the correction unit 118 are performed in synchronization with the processing of the noise reduction unit 119 based on the control of the control unit 122.
- the noise reduction unit 119 performs noise reduction processing on each color signal from the color signal separation unit 115 based on the noise amount corrected by the correction unit 118.
- the processed video signal is transferred to the signal processing unit 120.
- based on the control of the control unit 122, the signal processing unit 120 performs known enhancement processing, compression processing, and the like on the video signal after noise reduction, and transfers the processed video signal to the output unit 121.
- the output unit 121 records and stores the received video signal in a memory card or the like.
- the photographing state estimating unit 116 includes an in-focus position estimating unit 131 serving as in-focus position estimating means that acquires the AF information set by the in-focus point detection unit 113 from the control unit 122 and classifies it, for example, into 5 m to infinity (landscape shooting), 1 m to 5 m (portrait shooting), 1 m or less (close-up shooting), and so on,
- a subject distribution estimating unit 132 serving as subject distribution estimating means that acquires the AE information from the photometric evaluation unit 111 via the control unit 122 and calculates the evaluation parameters S1 to S3 described later,
- a night scene estimating unit 133 serving as night scene estimating means that estimates from the AE information acquired from the control unit 122 whether night scene shooting is being performed, and
- an overall estimating unit 134 serving as overall estimating means that determines the gain for correcting the noise amount based on the outputs of these estimating units and transfers it to the correction unit 118.
- the in-focus position estimating unit 131, the subject distribution estimating unit 132, and the night scene estimating unit 133 are bidirectionally connected to the control unit 122.
- the photometric evaluation unit 111 divides the signal from the CCD 105 into 13 regions and performs known divided photometry.
- the imaging region of the CCD 105 is classified into a central portion, an inner peripheral portion surrounding the central portion, and an outer peripheral portion surrounding the inner peripheral portion, each of which is further divided into the following areas.
- the central portion consists of the middle area (average luminance level a1), the area to its left (average luminance level a2), and the area to its right (average luminance level a3).
- the inner peripheral portion consists of the areas above and below the a1 area (average luminance levels a4 and a5, respectively), the areas to the left and right of the a4 area (average luminance levels a6 and a7), and the areas to the left and right of the a5 area (average luminance levels a8 and a9).
- the outer peripheral portion consists of the upper left area (average luminance level a10), the upper right area (a11), the lower left area (a12), and the lower right area (a13).
- the average luminance level of each of the divided regions is transferred from the photometry evaluation unit 111 to the subject distribution estimation unit 132.
- the subject distribution estimation unit 132 calculates the evaluation parameters shown in the following Expressions 11, 12, and 13, and transfers the calculation results to the overall estimation unit 134.
- Av = (Σ ai) / 13 (13)
- max () is a function that returns the maximum value of the numbers in parentheses
- AV represents the average luminance level (average luminance level of the entire screen) for all photometry areas.
- the evaluation parameter S1 indicates the luminance difference between the left and right of the central portion, and the evaluation parameter S2 indicates the larger of the luminance differences between the upper center of the inner peripheral portion and its upper left and upper right areas.
- the evaluation parameter S3 indicates the difference between the larger of the upper left and upper right areas of the outer peripheral portion and the average luminance of the entire screen.
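Reading Expressions 11 to 13 from these verbal definitions, the evaluation parameters can be sketched as follows (the exact expressions for S1 and S2 are inferred from the descriptions above and should be treated as an interpretation, not the patent's literal formulas):

```python
def evaluation_parameters(a):
    """Compute the evaluation parameters from the 13 average luminance
    levels a1..a13, passed as a list of 13 values with index 0 = a1.
    S1: center left/right difference; S2: inner upper-center vs its
    upper left/right; S3: brighter of outer upper corners vs the
    whole-screen average Av."""
    Av = sum(a) / 13                              # Expression 13
    S1 = abs(a[1] - a[2])                         # |a2 - a3|
    S2 = max(abs(a[3] - a[5]), abs(a[3] - a[6]))  # a4 vs a6, a7
    S3 = max(a[9], a[10]) - Av                    # max(a10, a11) - Av
    return S1, S2, S3, Av
```

A bright band across the top of the frame (large a10/a11) drives S3 up, which is exactly the "sky at the top" cue used below.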
- the overall estimating unit 134 obtains the gain for correcting the noise amount according to the outputs of the in-focus position estimating unit 131, the subject distribution estimating unit 132, and the night scene estimating unit 133. If the result from the night scene estimating unit 133 indicates night scene shooting, the gain is set to "strong", for example 1.5 to 2.0, and this gain is immediately transferred to the correction unit 118, completing the process.
- if it is not night scene shooting and the AF information indicates landscape shooting, the evaluation parameter S3 is further compared with a predetermined value Th1.
- if the evaluation parameter S3 is larger than the predetermined value Th1, at least one of a10 and a11 is brighter than the average luminance of the entire screen, so the scene is presumed to be a landscape with sky at the top. Since the sky is flat and is a region where noise is subjectively conspicuous, "strong" (for example, 1.5 to 2.0) is designated as the gain for correction.
- if the evaluation parameter S3 is smaller than the predetermined value Th1, it is conversely estimated that there is no sky at the top, or that the landscape contains little sky. In this case, a subject with texture structure, such as plants or buildings, is considered to be the main subject, so "medium" (for example, 1.0 to 1.5) is designated as the gain for correction.
- if the AF information indicates 1 m to 5 m (portrait shooting), the evaluation parameter S2 is compared with a predetermined value Th2.
- if the evaluation parameter S2 is larger than the predetermined value Th2, there is a luminance difference between the upper center a4 of the inner peripheral portion and one of the upper left and right areas a6, a7, and the scene is presumed to be a portrait of a person.
- in this case, the area of the face, that is, of skin that is flat and where noise is conspicuous, is relatively large, but hair is also present, and crushing its fine structure would be perceived as degraded image quality. For this reason, "medium" is designated as the gain for correction.
- if the AF information is 1 m or less, it is determined that close-up photography is being performed, and the evaluation parameter S1 is compared with a predetermined value Th3. If the evaluation parameter S1 is larger than the predetermined value Th3, there is a luminance difference between the left and right of the central portion, and it is presumed that multiple objects are being closed up. In this case, the main subject is considered to have a fine structure, so "weak" (for example, 0.5 to 1.0) is designated as the gain for correction. On the other hand, if the evaluation parameter S1 is smaller than the predetermined value Th3,
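The decision tree described above can be sketched as follows. The concrete gain values are representative picks from the ranges quoted in the text, and branches the source leaves unspecified (such as S2 below Th2 in portrait shooting) default to "medium" here as an assumption:

```python
STRONG, MEDIUM, WEAK = 1.75, 1.25, 0.75   # representative mid-range gains

def correction_gain(is_night, af_distance_m, S1, S2, S3, Th1, Th2, Th3):
    """Noise-correction gain per the estimation logic in the text."""
    if is_night:
        return STRONG                     # night scene: strong gain
    if af_distance_m >= 5:                # landscape shooting
        # sky at the top (flat, noise-prone) -> strong; else medium
        return STRONG if S3 > Th1 else MEDIUM
    if af_distance_m >= 1:                # portrait shooting
        # the text designates "medium" when S2 > Th2; the opposite
        # branch is unspecified in the source, so MEDIUM is assumed
        return MEDIUM
    # close-up shooting (1 m or less)
    return WEAK if S1 > Th3 else MEDIUM   # else-branch is an assumption
```

The gain then scales the estimated noise amount before it reaches the noise reduction stage.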
- the gain for correction set by overall estimation section 134 in this way is transferred to correction section 118 as described above.
- the noise estimating unit 117 includes a local region extraction unit 141 that extracts a local region of a predetermined size at a predetermined position from each color signal output from the color signal separation unit 115,
- a buffer 142 that stores the color signal of the local region extracted by the local region extraction unit 141, and an average calculation unit 143 that reads the signal from the buffer 142 and calculates its average value as the signal value level,
- a gain calculation unit 144 serving as parameter calculating means that calculates the amplification amount of the amplifier 108 from the information on the exposure conditions and the white balance coefficient transferred from the control unit 122, and a standard value assigning unit 145 serving as assigning means that assigns a standard value when any parameter is omitted,
- a coefficient calculation unit 146 serving as noise amount calculating means that calculates coefficients according to a predetermined formula for estimating the noise amount, using functions recorded in a parameter ROM 147, and a function operation unit 148 that calculates the noise amount from these coefficients.
- since the processing of the noise reduction unit 119 described later is separated into the horizontal direction and the vertical direction, the local region extraction unit 141 performs extraction while sequentially scanning the entire image in units of, for example, 4×1 size for horizontal processing and 1×4 size for vertical processing. The processing by the local region extraction unit 141 is performed in synchronization with the processing of the noise reduction unit 119.
- the control unit 122 is bidirectionally connected to the local region extraction unit 141, the average calculation unit 143, the gain calculation unit 144, the standard value assigning unit 145, the coefficient calculation unit 146, and the function operation unit 148, and controls them.
- N = A·L^B + C (14)
- here, A, B, and C are constant terms; the noise amount N is modeled as a power function of the signal value level L plus a constant.
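Expression 14 translates directly into code. The coefficient values used in the test are illustrative only; in the system they are derived from measured sensor characteristics:

```python
def noise_amount(L, A, B, C):
    """Expression 14: noise amount N modeled as a power function of
    the signal value level L plus a constant, N = A * L**B + C.
    A, B, C are calibration constants for a given sensor state."""
    return A * (L ** B) + C
```

With B between 0 and 1 this captures the typical shot-noise-like behavior where noise grows sub-linearly with signal level.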
- the noise amount N does not depend only on the signal value level L; it also changes with the temperature of the CCD 105 serving as the image sensor and the gain of the amplifier 108.
- the formulation of Expression 15 therefore takes these factors into account by introducing a(T, G), b(T, G), and c(T, G), which are functions of the temperature T and the gain G.
- N = a(T, G)·L^b(T, G) + c(T, G) (15)
- Fig. 19 plots the curves represented by Expression 15 for a plurality of temperatures T (temperatures T1 to T3 in the illustrated example) and a plurality of gains G (1×, 2×, and 4× in the illustrated example).
- the individual curves for each parameter set have a form almost similar to the curve of Expression 14 shown in FIG. 18, but the coefficients a, b, and c naturally differ according to the values of the temperature T and the gain G.
- Fig. 20A outlines the characteristics of the function a(T, G), Fig. 20B the characteristics of the function b(T, G), and Fig. 20C the characteristics of the function c(T, G).
- Since each function takes the two independent variables T and G, FIGS. 20A to 20C are plotted as three-dimensional coordinates, and each function forms a curved surface in that space. Rather than showing the specific surface shapes, however, the figures roughly indicate the characteristic changes using curves.
- By substituting the temperature T and the gain G into these functions, the constant terms A, B, and C are output.
- the specific shapes of these functions can be easily obtained by measuring the characteristics of the imaging device system including the CCD 105 and the amplifier 108 in advance.
- The noise amount also increases with long exposures; Equation 16 accounts for this by multiplying in a shutter-speed correction d(S):
- N = { a(T, G) · L^(b(T, G)) + c(T, G) } · d(S)  …(16)
- FIG. 20D shows the increase d(S) in the noise amount with respect to the shutter speed S.
- The noise amount increment d(S) has the property of rapidly increasing when the shutter speed S falls below a certain threshold S_TH (that is, for long exposures). The four functions a(T, G), b(T, G), c(T, G), and d(S) are recorded in the parameter ROM 147 described above.
- The correction for the shutter speed does not necessarily need to be prepared as a function; it may be prepared by other means, for example, as a table.
- The coefficient calculation unit 146 uses the temperature T, the gain G, and the shutter speed S that are dynamically acquired (or acquired from the standard value assignment unit 144) as input parameters, calculates the constant terms A, B, C, and D using the four functions recorded in the parameter ROM 147, and transfers them to the function operation unit 148.
- The function operation unit 148 calculates the noise amount N by applying the constant terms A, B, C, and D calculated by the coefficient calculation unit 146 to Equation 16 above, together with the signal value level L output from the average calculation unit 142 via the coefficient calculation unit 146.
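A sketch of Equation 16, under the assumption that S is expressed as exposure time in seconds (so that long exposures correspond to large S) and with illustrative stand-ins for the measured device functions a, b, c, and d. None of the numeric forms below come from the patent; they only reproduce the qualitative behavior described in the text.

```python
def d_of_s(shutter_s, s_th=0.5):
    """Shutter-speed correction d(S): roughly 1 for short exposures and
    increasing beyond the threshold S_TH (long exposure). The linear
    growth used here is an illustrative assumption."""
    return 1.0 if shutter_s < s_th else 1.0 + 2.0 * (shutter_s - s_th)

def noise_amount_eq16(L, T, G, shutter_s):
    """Equation 16: N = {a(T,G) * L**b(T,G) + c(T,G)} * d(S).
    a, b, c below are placeholder fits, not the measured functions."""
    a = 0.01 * G * (1.0 + 0.02 * T)   # gain- and temperature-dependent scale
    b = 0.5                           # exponent, held fixed for illustration
    c = 0.001 * G                     # additive floor, scaled by gain
    return (a * L ** b + c) * d_of_s(shutter_s)
```

Raising the gain or lengthening the exposure both increase the estimated noise amount, matching the behavior the curves in FIGS. 19 and 20D describe.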
- Parameters such as the temperature T, the gain G, and the shutter speed S do not necessarily need to be obtained for each photographing operation.
- Arbitrary parameters may instead be stored in the standard value assignment unit 144 and the corresponding calculation processing omitted. This makes it possible to speed up processing and save power.
- When the correction unit 118 receives the noise amount calculated by the noise estimation unit 117 in this way, it multiplies, based on the control of the control unit 122, the noise amount by the correction gain transferred from the shooting situation estimation unit 116, and transfers the result to the noise reduction unit 119.
- The noise reduction unit 119 includes: a horizontal line extraction unit 151 that sequentially extracts the video signal in units of horizontal lines for each color signal output from the color signal separation unit 115; a first smoothing unit 152, which is smoothing means for scanning the horizontal-line video signal extracted by the horizontal line extraction unit 151 in pixel units and performing known hysteresis smoothing using the threshold from a threshold setting unit 156 described later as the noise amount; a buffer 153 that stores one screen's worth of video signals for all colors by sequentially accumulating the horizontal lines smoothed by the first smoothing unit 152; a vertical line extraction unit 154 that, once a full screen of video signals for all colors has been stored in the buffer 153, sequentially reads out the video signal from the buffer 153 in units of vertical lines for each color signal; a second smoothing unit 155, which is smoothing means for scanning the vertical-line video signal extracted by the vertical line extraction unit 154 in pixel units, performing known hysteresis smoothing using the threshold from the threshold setting unit 156 as the noise amount, and sequentially outputting the result to the signal processing unit 120; and a threshold setting unit 156, which is threshold setting means for acquiring the noise amount corrected by the correction unit 118 pixel by pixel in correspondence with the horizontal line extracted by the horizontal line extraction unit 151 or the vertical line extracted by the vertical line extraction unit 154, setting the amplitude value of the noise as a threshold (small-amplitude value), and outputting it to the first smoothing unit 152 or the second smoothing unit 155.
- Under the control of the control unit 122, the hysteresis smoothing in the first and second smoothing units 152 and 155 is performed in synchronization with the operation of the correction unit 118 and the threshold setting unit 156.
- The control unit 122 is also bidirectionally connected to the horizontal line extraction unit 151, the vertical line extraction unit 154, and the threshold setting unit 156, and controls them.
- In the above description, the noise amount is estimated in pixel units, but the present invention is not limited to this; the noise amount may instead be estimated for each predetermined unit area of arbitrary size, such as 2 × 2 pixels or 4 × 4 pixels. In that case, the noise estimation accuracy is reduced, but there is the advantage that faster processing becomes possible.
- Further, a single-chip CCD in which the color filter 104 is a primary-color Bayer type has been described as an example, but the present invention is not limited to this; it can be applied in the same way to a single-chip CCD in which the color filter 104 is a complementary-color filter, and also to two-chip and three-chip CCDs.
- In addition, although focusing information and photometric information are used for estimating the shooting situation, the present invention is not limited thereto; at least one of zoom position information, line-of-sight input information, and flash emission information may be used to estimate the shooting situation, or these may be combined as appropriate for more accurate estimation.
- Since the noise amount is estimated for each area of the image and for each color signal, optimal noise reduction can be performed from bright portions to dark portions, making it possible to obtain high-quality images.
- Since various parameters related to the noise amount, such as the signal value level, the temperature of the image sensor at the time of shooting, the shutter speed, and the gain, are dynamically obtained for each shooting operation and the noise amount is calculated based on them, the noise amount can be estimated with high accuracy. The accuracy can be further improved by estimating the noise amount in pixel units.
- Since the estimated noise amount is corrected in accordance with the shooting situation, a subjectively favorable high-quality image can be obtained.
- Since various information at the time of shooting is integrated to obtain the shooting situation of the entire screen, low-cost and high-speed processing can be realized.
- Since the signal from the image sensor with color filters is separated into color signals for each filter, noise can be reduced for various types of imaging systems, such as primary-color or complementary-color and single-, two-, or three-chip systems.
- Since the noise amount is set as a threshold, signals below this threshold are removed as noise while signals above it are preserved as the original signal, so a high-quality image in which only the noise components are reduced can be obtained.
- a standard value is set for a parameter not obtained at the time of photographing, a coefficient for calculating a noise amount is calculated using the standard value together with the obtained parameter, and a noise amount is calculated from the coefficient. Therefore, even when necessary parameters cannot be obtained at the time of shooting, it is possible to estimate a noise amount and obtain a stable noise reduction effect. At this time, since the function is used when calculating the noise amount, the required memory amount can be reduced, and the cost can be reduced. In addition, by intentionally omitting some parameter calculations, it is possible to construct an imaging system that achieves low cost and low power consumption.
- FIGS. 22 to 26 show a fourth embodiment of the present invention.
- FIG. 22 is a block diagram showing a configuration of an imaging system
- FIG. 23 is a block diagram showing configuration examples of the shooting situation estimation unit.
- FIG. 24 is a block diagram showing a configuration of the noise estimating unit.
- FIG. 25 is a flowchart showing part of the noise reduction processing performed by the image processing program on a computer, and FIG. 26 is a flowchart showing another part of that processing.
- The fourth embodiment is largely the same as the third embodiment described above; the same parts are denoted by the same reference numerals and their description is omitted, so only the differences will be described.
- In addition to the configuration of the third embodiment described above, the imaging system according to the fourth embodiment includes: a known thinning unit 161 configured to thin out the video signal read from the image buffer 110 at predetermined intervals; an interpolation unit 162 that applies known linear interpolation processing to the video signal thinned by the thinning unit 161 to generate a three-plate RGB image and outputs the result to the above-mentioned shooting situation estimation unit 116; and a buffer 163 that temporarily stores the information of the labeled image areas estimated by the shooting situation estimation unit 116 and outputs it to the correction unit 118.
- the interpolator 162 is bidirectionally connected to and controlled by the controller 122.
- the flow of signals in the imaging system as shown in FIG. 22 is basically the same as in the above-described third embodiment, and only different parts will be described.
- The main shooting is performed as described above, and the video signal is transferred to the image buffer 110.
- the thinning section 161 reads out the video signal in the image buffer 110, thins out the video signal at a predetermined interval, and transfers it to the interpolating section 162.
- Since the present embodiment assumes a Bayer-type color filter as the color filter 104, the thinning processing by the thinning unit 161 uses 2 × 2 pixels as the basic unit. Specifically, for example, thinning is performed by reading only the upper-left 2 × 2 pixels of each 16 × 16 pixel unit. This thinning reduces the video signal to (1/8) × (1/8) size, that is, to 1/64 of the original data size.
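The thinning step can be sketched as follows for a single-channel Bayer mosaic stored as a NumPy array (an assumption made for the illustration): keeping only the top-left 2 × 2 pixels of each 16 × 16 block preserves the Bayer phase while shrinking each dimension to 1/8.

```python
import numpy as np

def thin_bayer(raw, block=16, unit=2):
    """Keep only the top-left `unit` x `unit` pixels of every
    `block` x `block` region. With unit=2 the 2x2 Bayer phase is
    preserved while the image shrinks to (unit/block) per dimension."""
    h, w = raw.shape
    rows = [r for base in range(0, h, block) for r in range(base, base + unit)]
    cols = [c for base in range(0, w, block) for c in range(base, base + unit)]
    return raw[np.ix_(rows, cols)]

raw = np.arange(64 * 64).reshape(64, 64)
small = thin_bayer(raw)
# 64x64 -> 8x8: 1/8 of each side, i.e. 1/64 of the data.
```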
- The interpolation unit 162 performs known linear interpolation processing on the video signal thinned out by the thinning unit 161, thereby generating an RGB three-plate image, and transfers the generated video signal of the three-plate image to the shooting situation estimation unit 116.
- The shooting situation estimation unit 116 calculates information such as skin color, dark areas, and high-frequency regions from the three-plate video signal transferred from the interpolation unit 162, divides the video signal of one image into multiple areas based on the calculated information, and labels each area.
- the buffer 163 stores the information transferred from the photographing situation estimation unit 116.
- Meanwhile, the noise estimating unit 117, based on the control of the control unit 122, calculates the noise amount for each color signal received from the color signal separation unit 115 in predetermined size units, for example pixel units in the present embodiment, and transfers the calculated noise amount to the correction unit 118.
- The correction unit 118 corrects the noise amount output from the noise estimating unit 117 based on the label information read from the buffer 163, and transfers the corrected noise amount to the noise reduction unit 119. At this time, the correction unit 118 enlarges the label information in the buffer 163 according to the ratio by which the thinning unit 161 thinned the signal, so that it corresponds to the pixel-unit noise amounts from the noise estimation unit 117.
- The processing in the noise estimating unit 117 and the processing in the correction unit 118 are performed under the control of the control unit 122 in synchronization with the processing of the noise reduction unit 119.
- The shooting situation estimation unit 116 includes: a skin color detection unit 171, which is image characteristic detecting means and specific color detecting means that reads out the RGB three-plate image from the interpolation unit 162, calculates the color difference signals Cb and Cr, extracts skin color regions by predetermined threshold processing, and labels the image; a dark area detection unit 172, which is image characteristic detecting means and specific luminance detecting means that reads out the RGB three-plate image from the interpolation unit 162, calculates the luminance signal Y, extracts dark areas smaller than a predetermined threshold, and labels them; and an area estimation unit 173, serving as area estimating means, that transfers to the buffer 163 the area estimation result indicating whether each area is a skin color area and whether it is a dark area.
- control unit 122 is bidirectionally connected to the skin color detection unit 171 and the dark part detection unit 172, and controls these.
- Upon reading the RGB three-plate image, the skin color detection unit 171 converts it into signals of a predetermined color space, for example the color difference signals Cb and Cr of the Y, Cb, Cr space, as shown in Expressions 17 and 18.
- The skin color detection unit 171 then extracts only the skin color regions by comparing these two color difference signals Cb and Cr with predetermined thresholds.
- Using the result, the skin color detection unit 171 labels the thinned three-plate image on a pixel-by-pixel basis and transfers the labels to the area estimation unit 173; for example, 1 is assigned to skin color areas and 0 to other areas.
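A per-pixel sketch of this detection. Expressions 17 and 18 are not reproduced in this excerpt, so the standard ITU-R BT.601 color-difference definitions are assumed here, and the skin-color box bounds are illustrative placeholders rather than the patent's thresholds.

```python
def skin_label(r, g, b):
    """Label a pixel 1 if its color-difference signals fall inside an
    assumed skin-color box, 0 otherwise (the labeling convention of the
    skin color detection unit). Cb/Cr follow the standard BT.601
    definitions; the box bounds are illustrative placeholders."""
    cb = -0.16874 * r - 0.33126 * g + 0.50000 * b
    cr = 0.50000 * r - 0.41869 * g - 0.08131 * b
    # Placeholder skin-color region in the (Cb, Cr) plane.
    return 1 if (-0.15 <= cb <= 0.05) and (0.05 <= cr <= 0.25) else 0
```

Applied to every pixel of the thinned three-plate image, this yields the 0/1 label map handed to the area estimation unit.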
- the dark part detecting unit 172 converts it into a luminance signal Y as shown in the following Expression 19.
- Y = 0.29900 · R + 0.58700 · G + 0.11400 · B  …(19)
- The dark area detection unit 172 compares the luminance signal Y with a predetermined threshold and extracts regions smaller than the threshold as dark areas.
- the dark part detecting unit 172 labels the thinned-out three-plate image on a pixel-by-pixel basis, and transfers the label to the region estimating unit 173.
- As a specific label attached at this time, for example, 2 is assigned to dark areas and 0 to other areas.
- The area estimation unit 173 sets the label to 1 for areas that are skin color areas, 2 for areas that are dark areas, 3 for areas that are both skin color and dark areas, and 0 for other areas, and outputs the result to the buffer 163.
- The correction unit 118 reads the label information from the buffer 163 and adjusts the correction gain according to the label value: for label 3 (skin color and dark area) the gain is strong (for example, 1.5 to 2.0); for label 1 (skin color area) or label 2 (dark area) the gain is medium (for example, 1.0 to 1.5); and for label 0 (others) the gain is weak (for example, 0.5 to 1.0).
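The label merging and the subsequent gain correction can be sketched as follows; the individual gain values are picked from within the ranges stated above and are otherwise arbitrary.

```python
def region_label(is_skin, is_dark):
    """Combine the detectors' labels into the coding used by the area
    estimation unit: 1 = skin color, 2 = dark, 3 = both, 0 = neither."""
    return (1 if is_skin else 0) + (2 if is_dark else 0)

def correction_gain(label):
    """Correction gain per label, chosen from within the stated ranges
    (strong 1.5-2.0, medium 1.0-1.5, weak 0.5-1.0); the specific values
    here are illustrative picks, not values fixed by the patent."""
    return {3: 1.8, 1: 1.2, 2: 1.2, 0: 0.8}[label]

def corrected_noise(noise, label):
    # The correction unit multiplies the estimated noise amount
    # by the gain for the pixel's region label.
    return noise * correction_gain(label)
```

Skin-colored dark regions (label 3) thus receive the strongest noise reduction, while unremarkable regions (label 0) are smoothed least, preserving detail there.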
- In the above description, color information and luminance information are used for estimating the shooting situation of each area in the screen, but the present invention is not limited to such information; for example, frequency information may be used.
- Referring to FIG. 23B, another example of the configuration of the shooting situation estimation unit 116 will be described.
- In this example, the shooting situation estimation unit 116 includes a high-frequency detection unit 175, which is image characteristic detecting means and frequency detecting means that reads out the RGB three-plate image from the interpolation unit 162 in block units and detects high-frequency components, and an area estimation unit 173 that applies labels proportional to the high-frequency components detected by the high-frequency detection unit 175 and transfers them to the buffer 163. The high-frequency detection unit 175 is bidirectionally connected to and controlled by the control unit 122.
- The high-frequency detection unit 175 reads out the RGB three-plate image from the interpolation unit 162 in a predetermined block size, for example in units of 8 × 8 pixels, and detects the high-frequency components by a known DCT (Discrete Cosine Transform).
- The area estimation unit 173 assigns to each block a label proportional to the amount of its high-frequency components, and transfers the labels to the buffer 163.
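A minimal sketch of block-wise high-frequency detection with an 8 × 8 DCT. The naive DCT below favors clarity over speed, and the cutoff separating "low" from "high" frequencies is an illustrative choice not specified in this excerpt.

```python
import numpy as np

def dct2(block):
    """Naive (unnormalized) 2-D DCT-II of a square block."""
    n = block.shape[0]
    k = np.arange(n)
    # basis[u, x] = cos(pi * (2x + 1) * u / (2n))
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return basis @ block @ basis.T

def high_freq_energy(block, cutoff=2):
    """Sum of squared DCT coefficients outside the low-frequency corner;
    the cutoff of 2 is an illustrative choice."""
    coef = dct2(block.astype(float))
    energy = coef ** 2
    energy[:cutoff, :cutoff] = 0.0   # discard DC and the lowest frequencies
    return energy.sum()

flat = np.full((8, 8), 100.0)                 # uniform block: no detail
stripes = np.tile([0.0, 200.0], (8, 4))       # textured block: strong detail
# A textured block has far more high-frequency energy than a flat one,
# so a label proportional to this energy marks detailed regions.
```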
- The correction unit 118 reads out the label information stored in the buffer 163, enlarges it in accordance with the thinning ratio used by the thinning unit 161 and the block size used by the high-frequency detection unit 175, and corrects the pixel-unit noise amounts transferred from the noise estimating unit 117.
- Although the conversion into frequency components is performed by the DCT here, the conversion is not limited to this; a Fourier transform or the like may be used instead.
- The basic configuration of the noise estimating unit 117 is the same as that of the noise estimating unit 117 shown in FIG. 17 in the third embodiment described above. The difference is that a lookup table unit 181, which is noise amount calculating means and lookup table means, is provided in place of the parameter ROM 147 and the function operation unit 148.
- The lookup table unit 181 is bidirectionally connected to and controlled by the control unit 122; information is input to it from the average calculation unit 142, the gain calculation unit 143, and the standard value assignment unit 144, and it outputs the processing result to the correction unit 118.
- The lookup table unit 181 records, as a lookup table constructed by the same means as in the third embodiment described above, the relationships among the signal value level, the gain, the shutter speed, the temperature of the image sensor, and the noise amount.
- Based on the signal value level of the pixel of interest calculated by the average calculation unit 142, the gain calculated by the gain calculation unit 143, the information on the shutter speed and the temperature of the image sensor transferred from the control unit 122, and the standard values given as needed by the standard value assignment unit 144, the lookup table unit 181 refers to this lookup table, estimates the noise amount, and transfers it to the correction unit 118.
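A sketch of the lookup-table approach: the noise characteristic is evaluated once over quantized parameter axes, after which estimation is a nearest-bin array lookup instead of a per-pixel function evaluation. The shutter-speed axis is omitted for brevity, and `model` is a placeholder for the measured characteristic.

```python
import numpy as np

def build_noise_lut(levels, gains, temps, model):
    """Precompute noise amounts over quantized (level, gain, temperature)
    axes. `model` stands in for the measured noise characteristic."""
    lut = np.empty((len(levels), len(gains), len(temps)))
    for i, L in enumerate(levels):
        for j, g in enumerate(gains):
            for k, t in enumerate(temps):
                lut[i, j, k] = model(L, g, t)
    return lut

def lookup_noise(lut, axes, L, g, t):
    """Estimate the noise amount by nearest-bin lookup on each axis."""
    idx = tuple(int(np.abs(np.asarray(ax) - v).argmin())
                for ax, v in zip(axes, (L, g, t)))
    return lut[idx]

levels = [0.0, 0.25, 0.5, 0.75, 1.0]
gains = [1, 2, 4]
temps = [20, 40]
# Placeholder characteristic, not the measured device functions.
model = lambda L, g, t: 0.01 * g * (1.0 + 0.02 * t) * L ** 0.5 + 0.001
lut = build_noise_lut(levels, gains, temps, model)
```

The trade-off matches the text: the table costs memory up front but makes each per-pixel estimate a fast array access.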
- the noise reduction processing is performed at the time of shooting, but the present invention is not limited to this.
- For example, the video signal output from the CCD 105 may be kept as unprocessed Raw data, with information such as the temperature of the image sensor at the time of shooting, the gain, and the shutter speed from the control unit 122 added to it as header information.
- The Raw data with the added header information may then be output to a processing device such as a computer, and the processing device may perform the processing in software.
- FIGS. 25 and 26 show this noise reduction processing, performed by the image processing program, as a series of steps.
- First, all color signals composed of Raw data, together with header information including information such as the temperature, the gain, and the shutter speed, are read (step S101).
- Next, the video signal is thinned out to a predetermined size (step S102), and an RGB three-plate image is generated by known linear interpolation (step S103).
- The color difference signals Cb and Cr are obtained from the generated RGB, and regions in which they fall within a predetermined range are extracted as skin color regions (step S104). In addition, the luminance signal Y is obtained from the RGB generated in step S103, and regions where Y is equal to or less than a predetermined threshold are extracted as dark regions (step S105).
- Labeling is performed on the skin color regions extracted in step S104 and the dark regions extracted in step S105 (step S106), and the attached label information is output (step S107).
- On the other hand, the video signal read in step S101 is separated into its color signals (step S108), a local region of a predetermined size centered on the pixel of interest, for example a 4 × 1 pixel local region, is extracted (step S109), and the signal value level of the pixel of interest is calculated as the average value of this local region (step S110).
- Parameters such as the temperature, the gain, and the shutter speed are obtained from the header information read in step S101; if a required parameter does not exist in the header information, a predetermined standard value is assigned in its place (step S111).
- Based on the signal value level determined in step S110 and the parameters such as temperature, gain, and shutter speed determined in step S111, the noise amount is calculated by referring to the lookup table (step S112).
- Meanwhile, the label information output in step S107 is read, and the label corresponding to the current pixel of interest is transferred to step S114 (step S113).
- The noise amount obtained in step S112 is corrected based on the label information read in step S113 (step S114).
- The color signals separated in step S108 are extracted in units of horizontal lines (step S115), and known hysteresis smoothing is performed based on the noise amount corrected in step S114 (step S116).
- It is then determined whether the processing has been completed for all horizontal lines (step S117); if not, the processing of steps S109 to S116 is repeated, and if completed, the horizontally smoothed signal is output (step S118).
- Next, for the vertical-direction processing, a local region of a predetermined size centered on the pixel of interest, for example a 1 × 4 pixel local region, is extracted (step S119), and the signal value level of the pixel of interest is calculated as the average value of this local region (step S120).
- Parameters such as the temperature, the gain, and the shutter speed are obtained from the header information read in step S101; if a required parameter does not exist in the header information, a predetermined standard value is assigned (step S121).
- Based on the signal value level determined in step S120 and the parameters such as temperature, gain, and shutter speed determined in step S121, the noise amount is calculated by referring to the lookup table (step S122). Meanwhile, the label information output in step S107 is read, and the label corresponding to the current pixel of interest is transferred to step S124 (step S123).
- The noise amount obtained in step S122 is corrected based on the label information read in step S123 (step S124).
- The color signals smoothed in the horizontal direction in step S118 are extracted in units of vertical lines (step S125), and known hysteresis smoothing is performed based on the noise amount corrected in step S124 (step S126).
- It is determined whether the processing for all vertical lines has been completed (step S127); if not, the processing of steps S119 to S126 is repeated, and if completed, the noise-reduced color signal is output (step S128).
- Finally, it is determined whether the processing for all color signals has been completed (step S129); if not, the processing of steps S108 to S128 is repeated, and if completed, the processing ends.
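The separable structure of steps S115 to S126 (smooth every horizontal line, then every vertical line of the result) can be sketched as follows; `smooth_line` stands for any one-dimensional smoother, such as hysteresis smoothing with per-pixel thresholds.

```python
def reduce_noise(image, smooth_line):
    """Separable two-pass noise reduction over a 2-D list of pixel
    values: smooth every horizontal line, then every vertical line of
    the intermediate result, mirroring steps S115-S126.
    `smooth_line` is any 1-D smoother (list in, list out)."""
    rows = [smooth_line(row) for row in image]            # horizontal pass
    cols = [smooth_line(list(col)) for col in zip(*rows)] # vertical pass
    return [list(r) for r in zip(*cols)]                  # back to row order
```

With an identity smoother the image passes through unchanged, which makes the pass structure easy to verify before plugging in a real smoother.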
- According to the fourth embodiment as described above, the same effects as those of the third embodiment can be obtained. In addition, since skin color detection, dark area detection, and high-frequency detection are performed to obtain the shooting situation of each region in one screen, and the noise amount is corrected for each region, high-precision noise reduction processing suitable for each region can be performed, and a subjectively favorable high-quality image can be obtained.
- Furthermore, the case where the shooting situation of the entire screen is obtained, as in the third embodiment, and the case where the shooting situation of each area is obtained, as in the fourth embodiment, can be selectively used as necessary, which makes it possible to construct a variety of imaging systems depending on the application.
- the shooting condition of each area is estimated from the signal whose size has been reduced by thinning out the video signal at predetermined intervals, high-speed processing is possible and the working memory size can be reduced. This makes it possible to construct an imaging system at low cost.
- the lookup table is used when calculating the noise amount, high-speed processing can be performed.
- a high-quality image can be obtained by appropriately reducing the amount of noise in an image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03792792A EP1551173A4 (en) | 2002-08-22 | 2003-08-22 | PICTURE SYSTEM AND PICTURE PROCESSING PROGRAM |
CN038186179A CN1675919B (zh) | 2002-08-22 | 2003-08-22 | 摄像系统及图像处理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002-242400 | 2002-08-22 | ||
JP2002242400A JP3762725B2 (ja) | 2002-08-22 | 2002-08-22 | 撮像システムおよび画像処理プログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004019608A1 true WO2004019608A1 (ja) | 2004-03-04 |
Family
ID=31944016
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/010614 WO2004019608A1 (ja) | 2002-08-22 | 2003-08-22 | 撮像システムおよび画像処理プログラム |
Country Status (5)
Country | Link |
---|---|
US (2) | US7812865B2 (ja) |
EP (1) | EP1551173A4 (ja) |
JP (1) | JP3762725B2 (ja) |
CN (2) | CN1675919B (ja) |
WO (1) | WO2004019608A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100377686C (zh) * | 2004-07-07 | 2008-04-02 | 奥林巴斯株式会社 | 被检体体内装置及相关医疗装置 |
US7554577B2 (en) | 2004-09-29 | 2009-06-30 | Sanyo Electric Co., Ltd. | Signal processing device |
US7916187B2 (en) | 2004-04-27 | 2011-03-29 | Olympus Corporation | Image processing apparatus, image processing method, and program |
US8035705B2 (en) | 2005-10-26 | 2011-10-11 | Olympus Corporation | Image processing system, image processing method, and image processing program product |
Families Citing this family (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3934597B2 (ja) | 2003-12-09 | 2007-06-20 | オリンパス株式会社 | 撮像システムおよび画像処理プログラム |
JP2005303803A (ja) * | 2004-04-14 | 2005-10-27 | Olympus Corp | 撮像装置と画像記録媒体、および画像処理装置ならびに画像処理プログラムとその記録媒体 |
JP2005303802A (ja) * | 2004-04-14 | 2005-10-27 | Olympus Corp | 撮像装置および画像処理プログラム |
JP4605582B2 (ja) * | 2004-06-02 | 2011-01-05 | 富士重工業株式会社 | ステレオ画像認識装置及びその方法 |
JP4831941B2 (ja) * | 2004-06-08 | 2011-12-07 | オリンパス株式会社 | 撮像処理システム、プログラム及び記憶媒体 |
JP3926813B2 (ja) | 2004-08-23 | 2007-06-06 | 富士フイルム株式会社 | ノイズ低減装置および方法ならびにノイズ低減プログラム |
JP2006098223A (ja) * | 2004-09-29 | 2006-04-13 | Sanyo Electric Co Ltd | ノイズ除去回路及びそれを備えた温度測定処理装置 |
DE102006007092A1 (de) * | 2005-03-01 | 2006-09-07 | Denso Corp., Kariya | Bildgebungsvorrichtung |
JP4634842B2 (ja) * | 2005-03-31 | 2011-02-16 | 株式会社デンソーアイティーラボラトリ | 風景推定装置 |
JP4914026B2 (ja) * | 2005-05-17 | 2012-04-11 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
WO2007037325A1 (ja) | 2005-09-28 | 2007-04-05 | Olympus Corporation | 撮像装置 |
US7711200B2 (en) * | 2005-09-29 | 2010-05-04 | Apple Inc. | Video acquisition with integrated GPU processing |
JP4660342B2 (ja) * | 2005-10-12 | 2011-03-30 | オリンパス株式会社 | 画像処理システム、画像処理プログラム |
JP4628937B2 (ja) * | 2005-12-01 | 2011-02-09 | オリンパス株式会社 | カメラシステム |
KR101225060B1 (ko) | 2006-03-08 | 2013-01-23 | 삼성전자주식회사 | 이미지 센서에서 노이즈 판단 기준을 추정하는 방법 및장치 |
JP5009004B2 (ja) * | 2006-07-31 | 2012-08-22 | 株式会社リコー | 画像処理装置、撮像装置、画像処理方法、および画像処理プログラム |
JP2008107742A (ja) * | 2006-10-27 | 2008-05-08 | Pentax Corp | 焦点検出方法および焦点検出装置 |
JP4728265B2 (ja) * | 2007-02-20 | 2011-07-20 | 富士通セミコンダクター株式会社 | ノイズ特性測定装置及びノイズ特性測定方法 |
JP2008211367A (ja) * | 2007-02-23 | 2008-09-11 | Olympus Imaging Corp | 撮像装置 |
US8711249B2 (en) * | 2007-03-29 | 2014-04-29 | Sony Corporation | Method of and apparatus for image denoising |
US8108211B2 (en) * | 2007-03-29 | 2012-01-31 | Sony Corporation | Method of and apparatus for analyzing noise in a signal processing system |
JP5052189B2 (ja) * | 2007-04-13 | 2012-10-17 | オリンパス株式会社 | 映像処理装置及び映像処理プログラム |
JP4925198B2 (ja) * | 2007-05-01 | 2012-04-25 | 富士フイルム株式会社 | 信号処理装置および方法、ノイズ低減装置および方法並びにプログラム |
JP4980131B2 (ja) | 2007-05-01 | 2012-07-18 | 富士フイルム株式会社 | ノイズ低減装置および方法並びにプログラム |
JP5165300B2 (ja) * | 2007-07-23 | 2013-03-21 | オリンパス株式会社 | 映像処理装置および映像処理プログラム |
KR101341095B1 (ko) | 2007-08-23 | 2013-12-13 | 삼성전기주식회사 | 야경 환경에서 최적의 화질을 갖는 영상 획득 장치 및 방법 |
JP2009109782A (ja) * | 2007-10-31 | 2009-05-21 | Hitachi Ltd | ズームカメラ |
KR20090062049A (ko) * | 2007-12-12 | 2009-06-17 | 삼성전자주식회사 | 영상 데이터 압축 전처리 방법 및 이를 이용한 영상 데이터압축 방법과, 영상 데이터 압축 시스템 |
JP5529385B2 (ja) | 2008-02-12 | 2014-06-25 | キヤノン株式会社 | X線画像処理装置、x線画像処理方法、プログラム及び記憶媒体 |
JP2009260871A (ja) | 2008-04-21 | 2009-11-05 | Nikon Corp | 撮像装置 |
JP5274101B2 (ja) | 2008-05-19 | 2013-08-28 | キヤノン株式会社 | 放射線画像処理装置、放射線画像処理方法及びプログラム |
CN101594457B (zh) * | 2008-05-30 | 2012-11-21 | 深圳艾科创新微电子有限公司 | 一种分级降噪系统及方法 |
JP5515295B2 (ja) * | 2009-01-16 | 2014-06-11 | 株式会社ニコン | 測光装置および撮像装置 |
JP5197423B2 (ja) * | 2009-02-18 | 2013-05-15 | オリンパス株式会社 | 画像処理装置 |
JP5227906B2 (ja) * | 2009-06-30 | 2013-07-03 | 株式会社日立製作所 | 映像記録システム |
JP5523065B2 (ja) * | 2009-11-13 | 2014-06-18 | キヤノン株式会社 | 撮像装置及びその制御方法 |
JP5182312B2 (ja) * | 2010-03-23 | 2013-04-17 | 株式会社ニコン | 画像処理装置、および画像処理プログラム |
IT1404810B1 (it) * | 2011-01-26 | 2013-11-29 | St Microelectronics Srl | Riconoscimento di texture nell'elaborazione di immagini |
WO2013060373A1 (en) * | 2011-10-27 | 2013-05-02 | Robert Bosch Gmbh | Method of controlling a cooling arrangement |
EP2772046B1 (en) * | 2011-10-27 | 2020-01-08 | Robert Bosch GmbH | Method of controlling a cooling arrangement |
US20130159230A1 (en) * | 2011-12-15 | 2013-06-20 | Toyota Infotechnology Center Co., Ltd. | Data Forgetting System |
US8892350B2 (en) | 2011-12-16 | 2014-11-18 | Toyoda Jidosha Kabushiki Kaisha | Journey learning system |
JP5713932B2 (ja) * | 2012-02-24 | 2015-05-07 | 京セラドキュメントソリューションズ株式会社 | 画像形成装置 |
JP2013176468A (ja) * | 2012-02-28 | 2013-09-09 | Canon Inc | 情報処理装置、情報処理方法 |
JP6041502B2 (ja) * | 2012-03-12 | 2016-12-07 | キヤノン株式会社 | 画像処理装置および制御方法 |
US9230340B2 (en) * | 2012-05-04 | 2016-01-05 | Semiconductor Components Industries, Llc | Imaging systems with programmable fixed rate codecs |
US9077943B2 (en) * | 2012-05-31 | 2015-07-07 | Apple Inc. | Local image statistics collection |
US9101273B2 (en) * | 2012-06-22 | 2015-08-11 | Kabushiki Kaisha Toshiba | Apparatus, detector, and method for applying a pixel by pixel bias on demand in energy discriminating computed tomography (CT) imaging |
US9030571B2 (en) * | 2012-07-11 | 2015-05-12 | Google Inc. | Abstract camera pipeline for uniform cross-device control of image capture and processing |
KR102203232B1 (ko) * | 2014-08-29 | 2021-01-14 | Samsung Electronics Co., Ltd. | Method for removing grid pattern noise and electronic device therefor |
CN104819785B (zh) * | 2015-04-24 | 2017-03-22 | Goertek Inc. | Temperature measurement method based on a camera module |
JP6521776B2 (ja) * | 2015-07-13 | 2019-05-29 | Olympus Corporation | Image processing apparatus and image processing method |
CN105141857B (zh) * | 2015-09-21 | 2018-12-11 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image processing method and device |
JP6793325B2 (ja) * | 2016-05-25 | 2020-12-02 | Panasonic Intellectual Property Management Co., Ltd. | Skin diagnosis apparatus and skin diagnosis method |
WO2019189210A1 (ja) * | 2018-03-30 | 2019-10-03 | Nikon Corporation | Moving-image compression device, decompression device, electronic apparatus, moving-image compression program, and decompression program |
WO2020124314A1 (zh) * | 2018-12-17 | 2020-06-25 | SZ DJI Technology Co., Ltd. | Image processing method, image processing device, image acquisition device, and storage medium |
JP7311994B2 (ja) | 2019-03-27 | 2023-07-20 | Canon Inc. | Image processing apparatus, imaging apparatus, image processing method, and program |
CN113840048B (zh) * | 2021-09-02 | 2024-04-12 | Truly Opto-Electronics Ltd. | Method for intelligently adjusting the brightness of the four corners of a camera image |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000023173A (ja) * | 1998-07-01 | 2000-01-21 | Eastman Kodak Japan Ltd | Noise removal method for a solid-state color imaging device |
JP2003153290A (ja) * | 2001-08-31 | 2003-05-23 | STMicroelectronics Srl | Noise filter for Bayer-pattern image data |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2565260B2 (ja) * | 1987-10-17 | 1996-12-18 | Sony Corporation | Image defect correction device for a solid-state imaging device |
JP3216022B2 (ja) * | 1992-09-14 | 2001-10-09 | Canon Inc. | Imaging apparatus |
US5661521A (en) * | 1995-06-05 | 1997-08-26 | Eastman Kodak Company | Smear correction of CCD imager using active pixels |
US6545775B1 (en) * | 1995-07-21 | 2003-04-08 | Canon Kabushiki Kaisha | Control system and units removably attachable to the same |
JPH10282400A (ja) * | 1997-04-04 | 1998-10-23 | Canon Inc | Photographic lens system |
EP0877524B1 (en) * | 1997-05-09 | 2006-07-19 | STMicroelectronics S.r.l. | Digital photography apparatus with an image-processing unit |
JP4344964B2 (ja) * | 1999-06-01 | 2009-10-14 | Sony Corporation | Image processing apparatus and image processing method |
US7158183B1 (en) * | 1999-09-03 | 2007-01-02 | Nikon Corporation | Digital camera |
JP2001157057A (ja) | 1999-11-30 | 2001-06-08 | Konica Corp | Image reading apparatus |
US6813389B1 (en) * | 1999-12-15 | 2004-11-02 | Eastman Kodak Company | Digital image processing method and system including noise reduction and tone scale adjustments |
JP2001208959A (ja) * | 2000-01-24 | 2001-08-03 | Fuji Photo Film Co Ltd | Imaging apparatus, autofocus method, and recording medium recording a focusing procedure |
JP2001297304A (ja) * | 2000-04-13 | 2001-10-26 | NEC Eng Ltd | Character recognition device and noise removal method used therein |
JP4064038B2 (ja) * | 2000-06-09 | 2008-03-19 | Fujifilm Corporation | Image acquisition apparatus and image acquisition method using a solid-state image sensor, and recording medium recording a program for executing the method |
JP4210021B2 (ja) * | 2000-06-21 | 2009-01-14 | Fujifilm Corporation | Image signal processing apparatus and image signal processing method |
US6633683B1 (en) * | 2000-06-26 | 2003-10-14 | Miranda Technologies Inc. | Apparatus and method for adaptively reducing noise in a noisy input image signal |
JP3664231B2 (ja) | 2000-08-09 | 2005-06-22 | NEC Corporation | Color image processing apparatus |
JP4154847B2 (ja) * | 2000-09-26 | 2008-09-24 | Konica Minolta Business Technologies Inc. | Image processing apparatus, image processing method, and computer-readable recording medium recording an image processing program |
US7054501B1 (en) * | 2000-11-14 | 2006-05-30 | Eastman Kodak Company | Estimating noise for a digital image utilizing updated statistics |
US7064785B2 (en) * | 2002-02-07 | 2006-06-20 | Eastman Kodak Company | Apparatus and method of correcting for dark current in a solid state image sensor |
US6934421B2 (en) * | 2002-03-20 | 2005-08-23 | Eastman Kodak Company | Calculating noise from multiple digital images having a common noise source |
2002
- 2002-08-22 JP JP2002242400A patent/JP3762725B2/ja not_active Expired - Fee Related

2003
- 2003-08-22 EP EP03792792A patent/EP1551173A4/en not_active Withdrawn
- 2003-08-22 CN CN038186179A patent/CN1675919B/zh not_active Expired - Fee Related
- 2003-08-22 US US10/646,637 patent/US7812865B2/en active Active
- 2003-08-22 WO PCT/JP2003/010614 patent/WO2004019608A1/ja active Application Filing
- 2003-08-22 CN CNA2008101663514A patent/CN101374196A/zh active Pending

2008
- 2008-03-06 US US12/074,822 patent/US20080158395A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See also references of EP1551173A4 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7916187B2 (en) | 2004-04-27 | 2011-03-29 | Olympus Corporation | Image processing apparatus, image processing method, and program |
CN100377686C (zh) * | 2004-07-07 | 2008-04-02 | 奥林巴斯株式会社 | 被检体体内装置及相关医疗装置 |
US7630754B2 (en) | 2004-07-07 | 2009-12-08 | Olympus Corporation | Intra-subject device and related medical device |
US7554577B2 (en) | 2004-09-29 | 2009-06-30 | Sanyo Electric Co., Ltd. | Signal processing device |
US8035705B2 (en) | 2005-10-26 | 2011-10-11 | Olympus Corporation | Image processing system, image processing method, and image processing program product |
Also Published As
Publication number | Publication date |
---|---|
US20050099515A1 (en) | 2005-05-12 |
JP2004088149A (ja) | 2004-03-18 |
US7812865B2 (en) | 2010-10-12 |
CN1675919B (zh) | 2010-10-06 |
EP1551173A1 (en) | 2005-07-06 |
US20080158395A1 (en) | 2008-07-03 |
JP3762725B2 (ja) | 2006-04-05 |
EP1551173A4 (en) | 2011-08-03 |
CN1675919A (zh) | 2005-09-28 |
CN101374196A (zh) | 2009-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004019608A1 (ja) | Imaging system and image processing program | |
US7595825B2 (en) | Image pickup system and image processing program | |
JP3934506B2 (ja) | Imaging system and image processing program | |
US8300120B2 (en) | Image processing apparatus and method of processing image for reducing noise of the image | |
US8290263B2 (en) | Image processing apparatus | |
US6825884B1 (en) | Imaging processing apparatus for generating a wide dynamic range image | |
US8035853B2 (en) | Image processing apparatus which calculates a correction coefficient with respect to a pixel of interest and uses the correction coefficient to apply tone correction to the pixel of interest | |
US7916187B2 (en) | Image processing apparatus, image processing method, and program | |
JP4427001B2 (ja) | Image processing apparatus and image processing program | |
WO2015119271A1 (ja) | Image processing device, imaging device, image processing method, and non-transitory computer-processable storage medium | |
JP2006023959A (ja) | Signal processing system and signal processing program | |
JP4637812B2 (ja) | Image signal processing apparatus, image signal processing program, and image signal processing method | |
US8351695B2 (en) | Image processing apparatus, image processing program, and image processing method | |
WO2006109702A1 (ja) | Image processing device, imaging device, and image processing program | |
WO2005099356A2 (ja) | Imaging apparatus | |
US20020163586A1 (en) | Method and apparatus for capturing image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states | Kind code of ref document: A1; Designated state(s): CN |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
WWE | Wipo information: entry into national phase | Ref document number: 20038186179; Country of ref document: CN |
WWE | Wipo information: entry into national phase | Ref document number: 2003792792; Country of ref document: EP |
WWP | Wipo information: published in national office | Ref document number: 2003792792; Country of ref document: EP |