WO2009096232A1 - Image processing device and image processing method - Google Patents
- Publication number
- WO2009096232A1 (PCT/JP2009/050453)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B33/00—Colour photography, other than mere exposure or projection of a colour film
- G03B33/06—Colour photography, other than mere exposure or projection of a colour film by additive-colour projection apparatus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/46—Measurement of colour; Colour measuring devices, e.g. colorimeters
- G01J2003/467—Colour computing
Definitions
- the present invention relates to an image processing apparatus and an image processing method capable of calculating the spectral radiance of illumination light applied to a subject when the subject is imaged.
- a color management technique based on the spectral reflectance (reflection spectrum) of a subject is known. This technique handles the color of the subject in the wavelength domain, and enables accurate color reproduction regardless of the illumination environment of the subject.
- An imaging method based on the spectral reflectance of such a subject is disclosed in Yoichi Miyake, "Introduction to Spectral Image Processing", The University of Tokyo Press, February 24, 2006.
- the spectral radiance from the subject is determined by the spectral radiance of the illumination light and the spectral reflectance of the subject; therefore, if the spectral radiance of the illumination light is not known, the spectral reflectance of the subject cannot be accurately calculated from imaging data obtained by imaging the subject.
- the spectral radiance of the illumination light is also used for white balance adjustment of the imaging device.
- This white balance adjustment is an operation for determining a coefficient for mutually adjusting the levels of luminance values output from a plurality of image sensors constituting the imaging apparatus.
- white balance adjustment is disclosed in Japanese Patent Application Laid-Open Nos. 2001-057680 and 2005-328386.
- Japanese Patent Laid-Open No. 2001-056780; JP 2005-328386 A; Yoichi Miyake, "Introduction to Spectral Image Processing", The University of Tokyo Press, February 24, 2006
- the spectral radiance of illumination light has been measured exclusively using a dedicated measuring device such as a spectroradiometer.
- a spectroradiometer disperses light incident through an optical system with a diffraction grating, and receives the light, wavelength by wavelength, with an image sensor (CCD (Charge Coupled Device), CMOS (Complementary Metal Oxide Semiconductor), etc.) to obtain a luminance value for each wavelength.
- CCD: Charge Coupled Device
- CMOS: Complementary Metal Oxide Semiconductor
- the present invention has been made to solve such a problem, and an object of the present invention is to provide an image processing apparatus and an image processing method that can easily calculate the spectral radiance of the illumination light applied to the subject, using an imaging device for imaging the subject.
- an image processing device capable of performing image processing on imaging data imaged by an imaging device.
- the image processing apparatus includes an input unit that receives first imaging data obtained by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment, and a first calculator that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the autocorrelation matrix of the spectral radiances of light source candidates that can be used to provide the illumination environment, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
- the spectral radiance of the light source candidate is a characteristic value acquired in advance for each type of light source.
- the diffusing member is disposed on the optical axis of the imaging apparatus, and the intensity of light transmitted through the diffusing member follows a predetermined function of the angle from the optical axis.
- the function is a cosine of the angle from the optical axis.
- the imaging device is configured to output coordinate values defined in the RGB color system as imaging data.
- the image processing apparatus further includes a second calculator that calculates, using the spectral radiance of the illumination light and color matching functions, the coordinate values in the RGB color system corresponding to the spectral radiance of the illumination light, and a third calculator that calculates the white balance of the imaging apparatus based on the ratio of the calculated coordinate values.
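The white-balance step above can be sketched as follows. This is a hypothetical illustration only: the patent does not specify the normalization, so normalizing the gains to the green channel is an assumption made here.

```python
def white_balance_gains(illum_rgb):
    """Given the (R, G, B) coordinate values computed for the
    estimated illumination light, return per-channel white-balance
    gains. Normalizing so that green is unchanged is an assumed
    convention, not taken from the patent."""
    r, g, b = illum_rgb
    return (g / r, 1.0, g / b)

# Example: a slightly reddish illuminant
gains = white_balance_gains((1.2, 1.0, 0.8))  # ≈ (0.833, 1.0, 1.25)
```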
- the image processing apparatus further includes a fourth calculator that, using a second estimation matrix calculated based on the spectral radiance of the illumination light, the spectral sensitivity of the imaging apparatus, and the autocorrelation matrix of the spectral reflectances of colors that can be included in the subject, calculates the spectral reflectance of the subject from second imaging data obtained by imaging the subject with the imaging device under the illumination environment.
- the image processing device further includes a generation unit that generates, based on the spectral reflectance of the subject calculated by the fourth calculator, the image data that would be acquired if the subject were imaged under a predetermined illumination environment.
- an image processing device capable of image processing with respect to imaging data imaged by an imaging device.
- the image processing apparatus includes an input unit that receives first imaging data obtained by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; a selection unit that selects, according to an external command, one of calculation matrices predetermined for each type of the plurality of light source candidates that can be used to provide the illumination environment; and a first calculator that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the calculation matrix selected by the selection unit, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
- Each of the calculation matrices is an autocorrelation matrix of a matrix indicating the spectral radiance of the light source candidate.
- an image processing apparatus capable of performing image processing on imaging data captured by an imaging apparatus.
- the image processing apparatus includes an input unit that receives first imaging data obtained by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; a selection unit that selects, according to an external command, one of a plurality of first estimation matrices calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and a first calculator that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using the first estimation matrix selected by the selection unit.
- the first estimation matrix is calculated based on the autocorrelation matrix of the matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
- an image processing apparatus capable of performing image processing on imaging data captured by an imaging apparatus.
- the image processing apparatus includes an input unit that receives first imaging data obtained by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; a first calculator that calculates, from the first imaging data, candidates for the spectral radiance of the illumination light incident on the subject, using a plurality of first estimation matrices respectively calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and an evaluation unit that evaluates each calculated spectral radiance candidate by comparison with a predetermined reference pattern and outputs one of them as the spectral radiance of the illumination light in the illumination environment.
- the first estimation matrix is calculated based on the autocorrelation matrix of the matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
- the image processing method includes a step of acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject under an illumination environment through a diffusing member, and a step of calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the autocorrelation matrix of the spectral radiances of light source candidates that can be used to provide the illumination environment, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
- the image processing method includes a step of acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject under an illumination environment through a diffusing member; a step of selecting one of a plurality of calculation matrices predetermined for each type of light source candidate that can be used to provide the illumination environment; and a step of calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the selected calculation matrix, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
- Each of the calculation matrices is an autocorrelation matrix of a matrix indicating the spectral radiance of the light source candidate.
- the image processing method includes a step of acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject under an illumination environment through a diffusing member; a step of selecting one of a plurality of first estimation matrices calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and a step of calculating the spectral radiance of the illumination light incident on the subject from the first imaging data using the selected first estimation matrix.
- the first estimation matrix is calculated based on the autocorrelation matrix of the matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
- the image processing method includes a step of acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject under an illumination environment through a diffusing member; a step of calculating, from the first imaging data, candidates for the spectral radiance of the illumination light incident on the subject, using a plurality of first estimation matrices calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and a step of evaluating each calculated spectral radiance candidate by comparison with a predetermined reference pattern and outputting one of them as the spectral radiance of the illumination light in the illumination environment.
- the first estimation matrix is calculated based on the autocorrelation matrix of the matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
- according to the present invention, it is possible to easily calculate the spectral radiance of the illumination light applied to the subject, using the imaging device for imaging the subject.
- FIG. 1 is a functional configuration diagram of an image processing device according to a first embodiment of the present invention. It is a diagram for explaining the method of acquiring the imaging data to be processed by the image processing apparatus according to Embodiment 1 of this invention. It is a diagram for explaining the generation
- Ax1, Ax2 optical axis, OBJ subject, 1, 1A, 1B image processing device, 10, 20 input unit, 11, 11A, 11B, 11C, 11D spectral radiance calculation unit, 12 estimation matrix calculation unit, 13, 13A light source data storage unit, 14 tristimulus value conversion unit, 15 coordinate conversion unit, 16 white balance calculation unit, 17 estimation matrix storage unit, 18 selection unit, 19 evaluation unit, 21 spectral reflectance calculation unit, 22 estimation matrix calculation unit, 23 spectral reflectance data storage unit, 24 image data generation unit, 25 coordinate conversion unit, 100, 100A, 100B, 100C illumination spectrum estimation unit, 150 computer body, 152 monitor, 154 keyboard, 156 mouse, 162 memory, 164 fixed disk, 166 FD drive, 168 CD-ROM drive, 170 communication interface, 200 color reproduction unit, 300 light source, 400 imaging device, 402 diffusing member.
- FIG. 1 is a functional configuration diagram of an image processing apparatus 1 according to the first embodiment of the present invention.
- the image processing apparatus 1 receives first imaging data g (1) RGB (m, n) and second imaging data g (2) RGB (m, n) captured by an imaging apparatus described later, and executes the image processing method according to the present embodiment on them.
- the image processing apparatus 1 includes an illumination spectrum estimation unit 100 and a color reproduction unit 200.
- the illumination spectrum estimation unit 100 calculates the spectral radiance E (1) (illumination spectrum) of the illumination light incident on the subject using the first imaging data g (1) RGB (m, n).
- the color reproduction unit 200 calculates the spectral reflectance of the subject from the second imaging data g (2) RGB (m, n) using the calculated spectral radiance E (1) .
- the color reproduction unit 200 outputs image data g (OUT) RGB (m, n) obtained by performing color reproduction of the subject based on the calculated spectral reflectance of the subject.
- the image data g (OUT) RGB (m, n) output from the color reproduction unit 200 is typically output to an output device (not shown) such as a display device (display) or a printing device (printer), or may be stored in a storage device (not shown).
- the image processing apparatus 1 is typically realized by hardware, but a part or all of the image processing apparatus 1 may be realized by software as will be described later.
- FIG. 2 is a diagram for describing a method for acquiring imaging data to be processed in image processing apparatus 1 according to the first embodiment of the present invention.
- FIG. 2 shows a case where the subject OBJ is imaged under a predetermined illumination environment.
- FIG. 2A shows the procedure for acquiring the first imaging data g (1) RGB (m, n), and FIG. 2B shows the procedure for acquiring the second imaging data g (2) RGB (m, n).
- the imaging device 400 is used for acquisition (imaging) of imaging data.
- the imaging apparatus 400 is a digital still camera or a digital video camera, and includes an imaging element (typically, CCD, CMOS, etc.) having spectral sensitivity characteristics in a specific wavelength band.
- the imaging element includes a plurality of pixels arranged in a matrix, and outputs luminance corresponding to the intensity of light incident on each pixel as imaging data.
- the luminance output from each image sensor has a value corresponding to the spectral sensitivity.
- a specific wavelength band that can be imaged by the imaging apparatus is referred to as a band.
- a case of using the imaging device 400 will be described.
- as the device structure, either a structure in which a plurality of types of image sensors are formed on the same substrate, or a structure in which each type of image sensor is formed on a separate substrate, can be adopted.
- the spectral sensitivity of the elements themselves may be made different, or elements having the same spectral sensitivity may be used with R, G, and B filters provided on the input light side of each element.
- the imaging data output by the imaging apparatus 400 is three-dimensional color information of R, G, and B luminance values (typically, each of 12 bits: 0 to 4095 gradations).
- the imaging data output by the imaging apparatus 400 is defined in the RGB color system.
- (m, n) of the imaging data g (1) RGB (m, n), g (2) RGB (m, n) represents the coordinates of the corresponding pixel in the imaging device of the imaging device 400.
- the imaging data g (1) RGB (m, n) and g (2) RGB (m, n) are each expressed as [(luminance value detected by the R image sensor at coordinates (m, n)), (luminance value detected by the G image sensor at coordinates (m, n)), (luminance value detected by the B image sensor at coordinates (m, n))].
- illumination light emitted from some light source 300 is irradiated on the subject OBJ.
- the imaging device 400 is used to image at least part of the light that is incident on the subject OBJ from the light source 300, after that light has passed through the diffusing member 402. The imaging device 400 is arranged so that its optical axis Ax1 lies on one of the paths along which the illumination light enters the subject OBJ.
- a diffusing member 402 is disposed between the imaging device 400 and the light source 300 on this optical axis Ax1 (preferably in the immediate vicinity of the imaging device 400).
- the paths of illumination light incident on the subject OBJ include a path along which light from the light source 300 is directly incident on the subject OBJ, and a path along which light from the light source 300 is incident on the subject OBJ after being reflected by a wall or the like.
- the diffusion member 402 is a member for spatially diffusing the light imaged by the imaging device 400, that is, for spatially averaging, and a milky white diffusion plate having a known spectral transmittance is typically used. Alternatively, an integrating sphere or the like may be used. By using such a diffusing member 402, the intensity distribution of the illumination light incident on the imaging device 400 can be made uniform, thereby increasing the estimation accuracy of the spectral radiance of the illumination light described later.
- a diffusion plate having a predetermined incident angle characteristic (generally referred to as a cosine collector, a cosine diffuser, a cosine receptor, or the like).
- the incident intensity of light after passing through the diffusing member 402 is indicated by a cosine function (cosine) with respect to an angle (solid angle) with respect to the optical axis Ax1 of the imaging device 400.
- the first imaging data g (1) RGB (m, n) acquired according to the above procedure includes color information reflecting illumination light incident on the subject OBJ under the illumination environment.
- the second imaging data g (2) RGB (m, n) is acquired by imaging the subject OBJ using the same imaging device 400 as in FIG. 2A.
- the diffusing member 402 is not disposed on the optical axis Ax2 of the imaging device 400.
- the imaging device 400 used for acquiring the first imaging data g (1) RGB (m, n) and the imaging device 400 used for acquiring the second imaging data g (2) RGB (m, n). are not necessarily the same, and different imaging devices 400 may be used as long as at least the spectral sensitivity of the imaging device is substantially known.
- it is preferable to match the optical axis Ax1 of the imaging device 400 when capturing the first imaging data g (1) RGB (m, n) with the optical axis Ax2 of the imaging device 400 when capturing the second imaging data g (2) RGB (m, n) in FIG. 2B.
- the second imaging data g (2) RGB (m, n) acquired in FIG. 2B is determined mainly depending on the reflected light from the subject OBJ. This reflected light is reflected by the subject OBJ and propagates in the opposite direction on the optical axis Ax2, and the illumination light that generates this reflected light is mainly directed toward the subject OBJ on the optical axis Ax2 of the imaging device 400. Propagate. Therefore, by capturing the illumination light that generates the reflected light as the first imaging data g (1) RGB (m, n), a more appropriate spectral radiance of the illumination light can be calculated.
- the spectral radiance E (1) (illumination spectrum) of the illumination light incident on the subject executed by the illumination spectrum estimation unit 100 will be described.
- the spectral radiance E (1) of the illumination light is originally a continuous function of the wavelength λ, but in the present embodiment, discrete values sampled at a predetermined wavelength width (1 nanometer) over the visible region (380 to 780 nanometers) are used.
- in the matrix indicating the spectral radiance E (1) , the luminance value at each wavelength is set in the diagonal elements, and zero is set in the elements other than the diagonal.
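The sampling and diagonal-matrix representation described above can be sketched with numpy (the flat spectrum here is a placeholder, not data from the patent):

```python
import numpy as np

# 401 sample wavelengths: 380, 381, ..., 780 nm at 1 nm width
wavelengths = np.arange(380, 781)  # shape (401,)

# A hypothetical sampled spectral radiance (placeholder values)
e = np.ones(401)

# Diagonal-matrix form: the luminance value at each wavelength is
# placed on the diagonal, and all other elements are zero (401 x 401).
E = np.diag(e)
```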
- the illumination spectrum estimation unit 100 includes an input unit 10, a spectral radiance calculation unit 11, an estimation matrix calculation unit 12, and a light source data storage unit 13.
- under an illumination environment, the input unit 10 receives first imaging data g (1) RGB (m, n) obtained by imaging, with the imaging device 400, at least part of the light incident on the subject OBJ through the diffusing member 402. The input unit 10 then outputs, based on the first imaging data g (1) RGB (m, n), imaging data g (1) RGB representative of the first imaging data g (1) RGB (m, n).
- the imaging data g (1) RGB is linearized color data composed of three luminance values (representative values) of R, G, and B.
- the input unit 10 includes logic for averaging the luminance values included in the first imaging data g (1) RGB (m, n); the luminance values are averaged separately for each of R, G, and B, and the averaged values (R, G, B) are output as the imaging data g (1) RGB .
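A minimal sketch of this per-channel averaging, assuming the imaging data is held as an H×W×3 numpy array of linear R, G, B luminance values:

```python
import numpy as np

def representative_rgb(img):
    """Average the luminance values of the first imaging data
    separately for each of R, G, and B, returning one representative
    (R, G, B) triple for the whole frame."""
    return img.reshape(-1, 3).mean(axis=0)

# A tiny 1 x 2 frame of (R, G, B) luminance values
frame = np.array([[[100, 200, 300], [120, 180, 340]]], dtype=float)
r, g, b = representative_rgb(frame)  # (110.0, 190.0, 320.0)
```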
- the input unit 10 may perform processing for canceling the inverse gamma characteristic so as to linearize the first imaging data g (1) RGB (m, n).
- a display device has a non-linear relationship (gamma characteristic) between an input signal level and an actually displayed luminance level.
- so that an image adapted to human vision is displayed when this non-linearity of the display device is canceled, the imaging device 400 often outputs imaging data with a non-linearity (inverse gamma characteristic) opposite to the gamma characteristic of the display device; the first imaging data g (1) RGB (m, n) is generated in this way.
- the gamma characteristic and the inverse gamma characteristic can be expressed as a power function.
- the first imaging data g (1) RGB (m, n) can be linearized according to the following arithmetic expression.
- this lookup table is a data table in which the results of the above conversion formula are stored in advance for every luminance value the input imaging data can take. Since a converted value can be obtained simply by referring to this input-output correspondence, the amount of calculation can be greatly reduced.
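A sketch of this lookup-table linearization, assuming 12-bit imaging data and a simple power-law inverse gamma; the exponent 2.2 is a hypothetical, device-dependent value, not one given in the patent:

```python
import numpy as np

GAMMA = 2.2      # hypothetical exponent; the actual value is device-dependent
MAX_CODE = 4095  # 12-bit imaging data: 0 to 4095 gradations

# Precompute the conversion result for every luminance value the
# input imaging data can take, as the lookup table described above.
lut = (np.arange(MAX_CODE + 1) / MAX_CODE) ** GAMMA * MAX_CODE

def linearize(raw):
    """Cancel the inverse gamma characteristic by table lookup
    instead of evaluating the power function per pixel."""
    return lut[raw]

linear = linearize(np.array([0, 2048, 4095]))
```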
- the spectral radiance calculation unit 11 uses the first estimation matrix W (1) calculated by the estimation matrix calculation unit 12 to be described later, and the spectral radiance of illumination light incident on the subject OBJ from the imaging data g (1) RGB. E (1) is calculated. More specifically, the spectral radiance calculation unit 11 calculates the spectral radiance E (1) of the illumination light based on the matrix product of the first estimation matrix W (1) and the imaging data g (1) RGB .
- the spectral radiance E (1) sampled at the predetermined wavelength width is a matrix of 401 rows × 401 columns, so the first estimation matrix W (1) is (number of wavelength components) × (number of bands of the imaging device 400), that is, a matrix of 401 rows × 3 columns.
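The calculation performed by the spectral radiance calculation unit 11 thus reduces to a single matrix product. A sketch with numpy, using the shapes stated above (W (1) is 401×3, g (1) RGB is a 3-vector); the placeholder matrix below is illustrative only:

```python
import numpy as np

def estimate_illumination_spectrum(W1, g_rgb):
    """Estimate the 401-sample spectral radiance E(1) as the matrix
    product of the first estimation matrix W(1) (401 x 3) and the
    representative imaging data g(1)RGB (length-3 vector)."""
    return W1 @ g_rgb

# Placeholder estimation matrix: picks out the G channel only
W1 = np.zeros((401, 3))
W1[:, 1] = 1.0
E1 = estimate_illumination_spectrum(W1, np.array([0.2, 0.5, 0.3]))
```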
- the estimation matrix calculation unit 12 includes the autocorrelation matrix B of the spectral radiance of the light source candidates that can be used to provide the illumination environment in the subject OBJ, the spectral transmittance f (1) of the diffusing member 402, and the spectral of the imaging device 400. Based on the sensitivity S, a first estimation matrix W (1) is calculated.
- the spectral sensitivity S is a matrix of 401 rows × 3 columns
- the spectral transmittance f (1) is a matrix of 401 rows × 3 columns.
- the light (spectrum) that has passed through the diffusing member 402 and is incident on the imaging device 400 is determined by the spectral radiance E (1) (λ) of the illumination light applied to the diffusing member 402 (or the subject OBJ) and the spectral transmittance of the diffusing member 402.
- the spectral transmittance f (1) (λ) of the diffusing member 402 is assumed to be constant over the entire diffusing member 402.
- Such a relationship can be expressed as a relational expression shown in Expression (1).
- n i (m, n) is additive noise generated by white noise or the like appearing in each image sensor, and is a value depending on the characteristics of the image sensor and the lens of the imaging apparatus 400, the illumination environment, and the like.
- in practice, a matrix arithmetic expression sampled at a predetermined wavelength width (typically 1 nanometer) is used. That is, the integral of the first term on the right-hand side of equation (1) is expressed using the spectral sensitivity S, a matrix indicating the sensitivity of each image sensor at each wavelength; the spectral radiance E (1) , a matrix indicating the radiance at each wavelength; and the spectral transmittance f (1) , a matrix indicating the transmittance at each wavelength.
- the spectral sensitivity S and the spectral transmittance f (1) are already known.
- the additive noise n_i(m, n) is generally small enough to be ignored; dropping it from equation (1) yields the following matrix operation expression.
- E(1) = W(1)·g(1)RGB (3)
- W (1) is a first estimation matrix.
- the first estimation matrix W(1) is calculated by the Wiener estimation method described below. Specifically, after defining the system matrix I as equation (4) below, the first estimation matrix W(1) is derived as shown in equation (5) by modifying equation (2).
- B is an autocorrelation matrix (hereinafter also referred to as a "calculation matrix") of the spectral radiances of light source candidates that can be used to provide an illumination environment.
- the spectral radiances of a plurality of light source candidates are acquired in advance, and the spectral radiance E(1) of the illumination light is estimated from a statistical viewpoint by utilizing the correlation with the spectral radiance of each light source. That is, statistical data acquired for each type of light source are prepared in advance, and the spectral radiance E(1) of the illumination light is calculated according to the characteristics of those statistical data.
- the autocorrelation matrix B of the spectral radiance serves as a reference for estimating the spectral radiance E (1) of the illumination light
- it is preferable to use statistical data appropriate to the type of light source that is likely to be used to provide the illumination environment, for example, according to the light emission principle of fluorescent lamps, incandescent lamps, xenon lamps, mercury lamps, and the like.
- the spectral radiance of each such light source can be obtained experimentally in advance, or statistical data standardized by the International Commission on Illumination (CIE), the International Organization for Standardization (ISO), or the Japanese Industrial Standards (JIS) may be used.
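The Wiener estimation of equations (3) to (5) can be sketched numerically. The code below is a minimal sketch, not the patented implementation: it uses NumPy with random stand-ins for the spectral sensitivity S, the diffuser transmittance f(1), and the light source statistics, and assumes the standard Wiener form W(1) = B·Iᵗ·(I·B·Iᵗ)⁻¹, which is consistent with the description in the text. All variable names are illustrative; only the dimensions (401 sampling wavelengths, 3 bands) come from the document.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 401                      # wavelengths: 380-780 nm sampled at 1 nm

# Hypothetical stand-ins for quantities the text assumes are known:
S = rng.random((k, 3))       # spectral sensitivity S (401 x 3)
f1 = rng.random(k)           # spectral transmittance f(1) of the diffuser
Est = rng.random((k, 8))     # spectral radiances of 8 light source candidates
B = Est @ Est.T              # autocorrelation matrix B (401 x 401)

# System matrix I (eq. 4): maps a radiance vector E (401) to sensor values g (3)
I = S.T * f1                 # equivalent to S^t @ diag(f1), shape (3, 401)

# First estimation matrix W(1) (eq. 5), assumed Wiener form:
W1 = B @ I.T @ np.linalg.inv(I @ B @ I.T)   # shape (401, 3)

# Estimate the illumination spectrum from a 3-band observation (eq. 3):
E_true = Est[:, 0]           # pretend the actual light is candidate 0
g1 = I @ E_true              # what the camera records through the diffuser
E_hat = W1 @ g1              # estimated spectral radiance E(1)
```

A useful sanity check on this construction is that re-imaging the estimate reproduces the observation exactly: I·E_hat equals g1 by the algebra of equation (5).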
- FIG. 3 is a diagram for describing processing for generating an autocorrelation matrix B of spectral radiance according to the first embodiment of the present invention.
- a light source group matrix Est is generated whose elements are the spectral radiance values of at least one type of light source candidate (light source 1 to light source N). That is, with e_i(λj) denoting the component value (radiance) of light source i (1 ≦ i ≦ N) at each sampling wavelength λj (1 ≦ j ≦ k), the light source group matrix Est is created by arranging the component values e_i(λj) in the row direction.
- an autocorrelation matrix B is calculated based on the group matrix Est according to the following arithmetic expression.
- it is necessary to use a group matrix Est having the same sampling interval (number of elements). Accordingly, a group matrix Est obtained by combining N matrices of 401 rows × 1 column, each indicating the spectral radiance of one light source, is a 401-row × N-column matrix, and the autocorrelation matrix of the group matrix Est is a 401-row × 401-column matrix.
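The generation of B from the group matrix (FIG. 3) can be sketched as follows. The arithmetic expression itself is not reproduced in this excerpt, so the sketch assumes the common form B = Est·Estᵗ (some formulations also normalize by N); the random spectra are placeholders for measured light source data.

```python
import numpy as np

rng = np.random.default_rng(1)
k, N = 401, 5                 # 401 sampling wavelengths, N light source candidates

# Hypothetical radiance samples e_i(lambda_j): column i holds light source i
Est = rng.random((k, N))      # light source group matrix Est (401 x N)

# Autocorrelation matrix of the group matrix (assumed form of the FIG. 3 expression)
B = Est @ Est.T               # (401 x N) @ (N x 401) -> 401 x 401
```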
- as the spectral radiance of a light source, for example, the spectral radiance emitted from a single light source such as a fluorescent lamp or an incandescent lamp may be used, or a spectral radiance generated by combining a plurality of types of light sources may be used. Outdoors, a spectral radiance such as that of sunlight may also be combined. That is, in this embodiment, to estimate the spectral radiance E(1) of the illumination light, it is preferable to use an autocorrelation matrix B obtained from the spectral radiances of light sources that are likely to be used to provide the illumination environment.
- the light source data storage unit 13 stores in advance the autocorrelation matrix B, which is an arithmetic matrix, calculated by the procedure as described above.
- the estimation matrix calculation unit 12 calculates the system matrix I based on the spectral transmittance f (1) of the diffusing member 402 stored in advance and the spectral sensitivity S of the imaging device 400 according to the above-described equation (4).
- the first estimation matrix W (1) is calculated based on the system matrix I and the autocorrelation matrix B read from the light source data storage unit 13 in accordance with the above equation (5).
- the spectral radiance calculation unit 11 calculates the spectral radiance E(1) of the illumination light based on the first estimation matrix W(1) from the estimation matrix calculation unit 12 and the imaging data g(1)RGB from the input unit 10, according to the above equation (3).
- the spectral radiance E(1) of the illumination light calculated by the spectral radiance calculation unit 11 is used for the white balance calculation processing and the color reproduction processing described later.
- the illumination spectrum estimation unit 100 further includes a tristimulus value conversion unit 14, a coordinate conversion unit 15, and a white balance calculation unit 16. These parts calculate the white balance of the imaging apparatus 400 based on the calculated spectral radiance E (1) of the illumination light. Based on this white balance value, it is possible to perform white balance adjustment for mutually adjusting the levels of the R, G, and B luminance values output from the image sensor of the imaging apparatus 400.
- the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z in the XYZ color system from the spectral radiance E (1) defined in the wavelength region.
- the tristimulus values X, Y, and Z indicate characteristic values when a human is assumed to observe the spectral radiance E(1) in the illumination environment where the subject OBJ is imaged. More specifically, the tristimulus values X, Y, and Z of the XYZ color system for the spectral radiance E(1) of the illumination light are expressed by the following equation (7).
- this color matching function h_i(λ) is defined by the International Commission on Illumination (CIE).
- the tristimulus value conversion unit 14 realizes the operation corresponding to equation (7) by the matrix operation shown below.
- Tristimulus values [X(1), Y(1), Z(1)] = h^t·E(1) (8)
- the matrix h is a 401-row × 3-column matrix whose elements are the values of the color matching functions h_i(λ) at the respective sampling wavelengths.
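The discrete form (8) of the integral (7) can be sketched directly. The matrix h below is a random stand-in for the CIE color matching functions, not the real tabulated values; the point is only the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 401
h = rng.random((k, 3))   # stand-in for the color matching functions (401 x 3)
E1 = rng.random(k)       # estimated spectral radiance E(1) (401 values)

XYZ = h.T @ E1           # eq. (8): tristimulus values [X(1), Y(1), Z(1)]
```

Each component of XYZ is the wavelength-by-wavelength sum of one color matching function times the radiance, i.e. the sampled version of the integral in equation (7).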
- the coordinate conversion unit 15 converts the tristimulus values X(1), Y(1), Z(1) into coordinate values R(1), G(1), B(1) defined in the RGB color system. More specifically, the coordinate conversion unit 15 calculates the coordinate values R(1), G(1), B(1) defined in the RGB color system according to the arithmetic expressions shown below.
- R(1) = a11·X(1) + a12·Y(1) + a13·Z(1)
- G(1) = a21·X(1) + a22·Y(1) + a23·Z(1)
- B(1) = a31·X(1) + a32·Y(1) + a33·Z(1)
- a11 to a33 are the elements of a 3-row × 3-column transformation matrix representing the correspondence between the colorimetric values of the subject (XYZ color system) and the signal values actually recorded by the imaging apparatus (RGB color system).
- the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) .
- the white balance is adjusted by independently adjusting the output gains of the image pickup elements of the respective colors constituting the imaging apparatus 400. That is, the adjustment gains to be applied to the R, G, and B image sensors are in the ratio 1/R(1) : 1/G(1) : 1/B(1).
- the white balance calculation unit 16 outputs the ratio of the coordinate values R(1) : G(1) : B(1), or the inverse ratio 1/R(1) : 1/G(1) : 1/B(1), as the white balance.
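The XYZ-to-RGB conversion and the gain ratio can be sketched together. The 3×3 matrix below uses sRGB-like coefficients purely as an illustrative stand-in for the device-dependent a11..a33 of the text, and the tristimulus values are a D65-like white; both are assumptions, not values from the document.

```python
import numpy as np

# Hypothetical 3x3 conversion matrix a11..a33 (device dependent; sRGB-like here)
A = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

XYZ1 = np.array([0.9505, 1.0, 1.089])   # e.g. a D65-like illuminant estimate
R1, G1, B1 = A @ XYZ1                    # coordinate values R(1), G(1), B(1)

# White balance: gains proportional to 1/R(1) : 1/G(1) : 1/B(1), normalized to G
gains = np.array([1 / R1, 1 / G1, 1 / B1])
gains /= gains[1]
```

For a neutral illuminant the three coordinate values come out nearly equal, so the gains are close to 1; a reddish illuminant would depress the R gain accordingly.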
- the white balance output from the white balance calculation unit 16 is used for manual gain adjustment by the user.
- alternatively, the white balance may be supplied to a gain adjustment unit (not shown) of the imaging device 400 so that the gain adjustment unit automatically adjusts the gain of the imaging device 400.
- the color reproduction unit 200 includes an input unit 20, a spectral reflectance calculation unit 21, an estimation matrix calculation unit 22, a spectral reflectance data storage unit 23, an image data generation unit 24, and a coordinate conversion unit 25.
- the input unit 20 receives second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ using the imaging device 400. Then, the input unit 20 outputs the second imaging data g (2) RGB (m, n) to the spectral reflectance calculation unit 21 according to the processing.
- when the second imaging data g(2)RGB(m, n) is provided with an inverse gamma characteristic (nonlinearity), the input unit 20 may also perform processing to cancel the inverse gamma characteristic, as with the input unit 10 described above. That is, when the gamma value in the imaging apparatus 400 is γc, the second imaging data g(2)RGB(m, n) can be linearized according to the following arithmetic expression.
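The arithmetic expression itself is not reproduced in this excerpt; a common form of such linearization, sketched below under that assumption, raises the gamma-encoded values to the power γc so that the 1/γc encoding cancels. The gamma value 2.2 and the sample values are illustrative only.

```python
import numpy as np

gamma_c = 2.2                               # hypothetical camera gamma value
g2_raw = np.array([0.0, 0.25, 0.5, 1.0])    # gamma-encoded sensor values in [0, 1]

# Linearize: raising to the power gamma_c cancels a 1/gamma_c encoding
g2_lin = g2_raw ** gamma_c
```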
- the spectral reflectance calculator 21 calculates the spectral reflectance of the subject OBJ from the second imaging data g (2) using the second estimation matrix W (2) calculated by the estimation matrix calculator 22 described later. Further, the spectral reflectance calculator 21 outputs image data g (OUT) RGB (m, n) that is color reproduction data of the subject OBJ under an arbitrary illumination environment. This color reproduction data is a reproduction of how the subject OBJ is observed under an arbitrary illumination environment based on the spectral reflectance of the subject OBJ.
- the estimation matrix calculation unit 22 calculates a second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400.
- n i (m, n) is additive noise generated by white noise or the like appearing in each image sensor, and is a value depending on the characteristics of the image sensor and the lens of the imaging apparatus 400, the illumination environment, and the like.
- a matrix arithmetic expression sampled with a predetermined wavelength width (typically 1 nanometer) is used. That is, the integral in the first term on the right side of equation (9) is realized by a matrix calculation of the spectral sensitivity S, a matrix indicating the spectral sensitivity of each image sensor at each wavelength, the spectral radiance E(1), a matrix indicating the spectral radiance at each wavelength, and the spectral reflectance f(2)(m, n), a matrix indicating the spectral reflectance of the subject OBJ at each wavelength.
- the spectral reflectance f(2)(m, n) is expressed as a 401-row × 1-column matrix for each pixel.
- the additive noise n_i(m, n) is generally small enough to be ignored; dropping it from equation (9) yields the following matrix operation expression.
- W (2) is the second estimation matrix.
- the second estimation matrix W(2) is calculated by the Wiener estimation method, as in the calculation of the first estimation matrix W(1) described above. Specifically, after defining the system matrix H as equation (12) below, the second estimation matrix W(2) is derived as equation (13) by modifying equation (11).
- A is an autocorrelation matrix calculated from spectral reflectances of colors that can be included in the subject OBJ, and serves as a reference for estimating the spectral reflectance of the subject OBJ.
- the autocorrelation matrix A can be determined by referring to the Standard Object Colour Spectra (SOCS), a database of spectral reflectances standardized by ISO.
- the spectral reflectance of the subject OBJ itself may be measured in advance by another method to determine the autocorrelation matrix A.
- This autocorrelation matrix A is generated by a process similar to the process of generating the autocorrelation matrix B in FIG.
- the group matrix used to generate the autocorrelation matrix A for example, the spectral reflectance of each color of a color chart composed of a plurality of color samples can be used.
- a spectral radiance E(1) of 401 rows × 401 columns, obtained by sampling the visible light region (380 to 780 nanometers) with a width of 1 nanometer, is used.
- the matrix is 401 rows × 401 columns.
- the autocorrelation matrix A is stored in advance in the spectral reflectance data storage unit 23. A principal component analysis technique may be used instead of the Wiener estimation technique described above.
- the estimation matrix calculation unit 22 calculates the second estimation matrix W(2) according to equations (12) and (13), based on the spectral radiance E(1) of the illumination light, the spectral sensitivity S of the imaging device 400, and the autocorrelation matrix A obtained from the spectral reflectances of the colors that can be included in the subject OBJ. Then, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) according to equation (11) to calculate the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2)RGB(m, n).
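The second Wiener estimation has the same structure as the first, with the diffuser transmittance replaced by the estimated illumination spectrum. The sketch below again uses random stand-ins (sensitivity S, illumination E(1), and a hypothetical 24-patch chart for the reflectance statistics) and assumes the standard Wiener form for equation (13).

```python
import numpy as np

rng = np.random.default_rng(3)
k = 401
S = rng.random((k, 3))        # spectral sensitivity S
E1 = rng.random(k)            # estimated illumination spectral radiance E(1)
R = rng.random((k, 24))       # e.g. spectral reflectances of a 24-patch chart
A = R @ R.T                   # autocorrelation matrix A (401 x 401)

H = S.T * E1                  # system matrix H = S^t @ diag(E(1)), eq. (12)
W2 = A @ H.T @ np.linalg.inv(H @ A @ H.T)   # second estimation matrix, eq. (13)

f2_true = R[:, 0]             # pretend one pixel shows chart patch 0
g2 = H @ f2_true              # camera response for that pixel, eq. (10)-style
f2 = W2 @ g2                  # estimated spectral reflectance, eq. (11)
```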
- the spectral reflectance f(2)(m, n) calculated in this way represents the intrinsic color of the subject OBJ.
- by using the spectral reflectance f(2)(m, n), the color of the subject OBJ can be reproduced as it would be observed under whichever illumination environment is selected.
- the tristimulus values X, Y, and Z of the XYZ color system when an object having a spectral reflectance f(m, n; λ) is observed under an arbitrary spectral radiance E(λ) are expressed by equation (14) below.
- the spectral radiance E(λ) used for color reproduction can be determined arbitrarily; this embodiment, however, illustrates the case where color reproduction is performed under the same illumination environment as when the subject OBJ was imaged.
- g(OUT)XYZ(m, n) = h^t·E(1)·W(2)·g(2)RGB(m, n) (15)
- the image data g (OUT) XYZ (m, n) is defined as coordinate values of the XYZ color system.
- the coordinate conversion unit 25 converts the image data g (OUT) XYZ (m, n) into image data g (OUT) RGB (m, n) defined in the RGB color system. Since the coordinate conversion process executed by coordinate conversion unit 25 is the same as the process in coordinate conversion unit 15 described above, detailed description will not be repeated.
- image data g (OUT) RGB (m, n) which is color reproduction data of the subject OBJ, is generated from the second imaging data g (2) RGB (m, n).
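Equation (15) chains the pieces above for one pixel: reflectance estimation, re-illumination with E(1), and projection through the color matching functions. The sketch below uses random stand-ins for h, E(1), W(2), and g(2); it treats E(1) as the diagonal matrix implied by the equation.

```python
import numpy as np

rng = np.random.default_rng(4)
k = 401
h = rng.random((k, 3))    # color matching functions (401 x 3)
E1 = rng.random(k)        # illumination spectral radiance used for reproduction
W2 = rng.random((k, 3))   # second estimation matrix (stand-in values)
g2 = rng.random(3)        # RGB values of one pixel of the second imaging data

f2 = W2 @ g2              # spectral reflectance of the pixel, eq. (11)
# Eq. (15): g_XYZ = h^t . diag(E(1)) . W(2) . g(2); E1 * f2 applies the diagonal
g_out_xyz = h.T @ (E1 * f2)
```

Using elementwise multiplication E1 * f2 instead of materializing the 401×401 diagonal matrix gives the same result with far less memory.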
- the coordinate conversion unit 25 may include a process for giving a gamma characteristic.
- the process for imparting the gamma characteristic is realized by raising the generated image data g(OUT)RGB(m, n) to the power of the gamma value γd.
- the amount of calculation can be significantly reduced by using a lookup table (LUT).
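The lookup-table idea can be sketched for 8-bit data: the power function is evaluated once for each of the 256 possible input levels, and the per-pixel work reduces to an index lookup. The gamma value and image values below are illustrative assumptions.

```python
import numpy as np

gamma_d = 1 / 2.2                            # hypothetical output gamma value
lut = (np.arange(256) / 255.0) ** gamma_d    # precomputed 256-entry lookup table

img_lin = np.array([[0, 128, 255]], dtype=np.uint8)   # linear 8-bit image data
img_gamma = lut[img_lin]   # apply gamma via table lookup: no pow() per pixel
```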
- the configuration in which the image data generation unit 24 performs color reproduction under the same illumination environment as when the subject OBJ is imaged is illustrated, but the illumination environment in which color reproduction is performed may be different. That is, the spectral radiance E used by the image data generation unit 24 to generate the image data g (OUT) XYZ (m, n) can be arbitrarily determined.
- FIG. 4 is a flowchart showing an overall processing procedure in image processing apparatus 1 according to the first embodiment of the present invention.
- the input unit 10 receives first imaging data g(1)RGB(m, n) obtained by imaging, through the diffusing member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates imaging data g(1)RGB representing the received first imaging data g(1)RGB(m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
- the estimation matrix calculation unit 12 calculates the first estimation matrix W(1) based on the autocorrelation matrix B of the spectral radiances of light source candidates that can be used to provide the illumination environment of the subject OBJ, the spectral transmittance f(1) of the diffusing member 402, and the spectral sensitivity S of the imaging device 400 (step S104).
- the spectral radiance calculation unit 11 uses the first estimation matrix W(1) calculated in step S104 to calculate, from the imaging data g(1)RGB, the spectral radiance E(1) of the illumination light incident on the subject OBJ (step S106).
- the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z in the XYZ color system to coordinate values R (1) , G (1) , and B (1) defined in the RGB color system. (Step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) (step S114).
- the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120).
- the input unit 20 linearizes the second imaging data as necessary.
- the estimation matrix calculation unit 22 calculates a second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) calculated in step S122 to calculate the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2) (step S124).
- the image data generation unit 24 generates image data g(OUT)XYZ(m, n), in which color reproduction of the subject OBJ is performed, using the color matching function h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m, n) of the subject OBJ calculated in step S124 (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m, n) generated in step S126 into image data g(OUT)RGB(m, n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)RGB(m, n).
- the spectral radiance of the illumination light applied to the subject OBJ can be calculated using the imaging device for imaging the subject OBJ. Therefore, the spectral radiance can be easily acquired without using a dedicated measuring device for measuring the spectral radiance of the illumination light.
- the spectral reflectance of the subject OBJ is accurately estimated, so that the colors that will be imaged (observed) under the illumination environment at the time of imaging can be reproduced appropriately.
- since the white balance of the imaging device can be adjusted appropriately based on the spectral radiance of the illumination light, more accurate color reproduction can be realized without being affected by variations in the characteristics of the imaging device.
- the second embodiment exemplifies a configuration in which a plurality of autocorrelation matrices are stored, one per light source type (category), so that the user can select the one suited to the illumination environment in which the subject OBJ is imaged.
- FIG. 5 is a functional configuration diagram of an image processing apparatus 1A according to the second embodiment of the present invention.
- the image processing apparatus 1A includes an illumination spectrum estimation unit 100A in place of the illumination spectrum estimation unit 100 in the image processing apparatus 1 shown in FIG. 1.
- color reproduction unit 200 is similar to color reproduction unit 200 of image processing apparatus 1 shown in FIG. 1, and therefore detailed description thereof will not be repeated.
- the illumination spectrum estimation unit 100A is provided with a light source data storage unit 13A in place of the light source data storage unit 13 in the illumination spectrum estimation unit 100 shown in FIG. Since other parts are the same as those in the first embodiment, detailed description will not be repeated.
- the light source data storage unit 13A stores in advance autocorrelation matrices B1, B2, ..., BM, which are predetermined calculation matrices, one for each of M types of light source candidates that can be used to provide an illumination environment. In accordance with an external command from the user or the like, the light source data storage unit 13A then outputs the selected one of the autocorrelation matrices B1, B2, ..., BM to the estimation matrix calculation unit 12.
- the spectral radiance (spectrum) of a general fluorescent lamp has a waveform with peaks at wavelengths corresponding to the emission line spectrum of the mercury or the like enclosed in it.
- the spectral radiance (spectrum) of an incandescent lamp, by contrast, has a continuous waveform determined by its emission principle.
- the spectral radiance (spectrum) of a light source candidate thus differs for each type. Therefore, to estimate the spectral radiance E(1) (illumination spectrum) of the illumination light incident on the subject OBJ, the autocorrelation matrix B serving as the reference must be selected appropriately.
- a user with some prior knowledge can judge what light source is used in the illumination environment when the subject OBJ is imaged with the imaging device 400: for example, whether the subject OBJ is imaged indoors or outdoors and, if indoors, whether a fluorescent lamp or an incandescent lamp is used as the light source. Therefore, if a plurality of autocorrelation matrices are prepared in advance, one per light source type under a classification the user can judge, and the user can select one according to the imaging conditions of the subject OBJ, the estimation accuracy of the spectral radiance E(1) (illumination spectrum) can be increased.
- the light source data storage unit 13A stores in advance a plurality of autocorrelation matrices B1, B2, ..., BM, one for each of the types "fluorescent lamp", "incandescent lamp", "xenon lamp", "mercury lamp", and "sunlight", and, in response to a selection command SEL from the user or the like, outputs the corresponding one to the estimation matrix calculation unit 12 as the autocorrelation matrix B. That is, the autocorrelation matrix B1 is generated only from the statistical data of the spectral radiance of "fluorescent lamp", and the autocorrelation matrix B2 is generated only from the statistical data of the spectral radiance of "incandescent lamp".
- the estimation matrix calculation unit 12 estimates the spectral radiance E (1) of the illumination light based on the autocorrelation matrix B received from the light source data storage unit 13A.
- autocorrelation matrices may also be prepared by classifying the light source types more finely.
- an autocorrelation matrix may also be generated based on the spectral radiance produced when, for example, a "fluorescent lamp" and an "incandescent lamp" are combined. That is, it is preferable to store in advance in the light source data storage unit 13A autocorrelation matrices generated based on the various spectral radiances that can be assumed as illumination environments when the subject OBJ is imaged.
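The per-category storage and selection can be sketched as a simple keyed table. The category names and random spectra below are illustrative stand-ins for the measured per-light-source statistics the text describes.

```python
import numpy as np

rng = np.random.default_rng(5)
k = 401

def autocorr(spectra):
    """Autocorrelation matrix of a 401 x N group matrix of radiance samples."""
    return spectra @ spectra.T

# Hypothetical per-category statistics (random stand-ins for measured spectra)
light_source_db = {
    "fluorescent":  autocorr(rng.random((k, 6))),   # B1
    "incandescent": autocorr(rng.random((k, 6))),   # B2
    "sunlight":     autocorr(rng.random((k, 6))),   # B3
}

SEL = "fluorescent"            # selection command from the user
B = light_source_db[SEL]       # matrix handed to the estimation matrix unit
```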
- FIG. 6 is a flowchart showing an overall processing procedure in image processing apparatus 1A according to the second embodiment of the present invention. Of the steps in the flowchart shown in FIG. 6, steps having the same contents as steps in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
- the input unit 10 receives first imaging data g(1)RGB(m, n) obtained by imaging, through the diffusion member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates imaging data g(1)RGB representing the received first imaging data g(1)RGB(m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
- the light source data storage unit 13A outputs one autocorrelation matrix, selected according to the selection command SEL, to the estimation matrix calculation unit 12 as the autocorrelation matrix B (step S103). Thereafter, the estimation matrix calculation unit 12 calculates the first estimation matrix W(1) based on the autocorrelation matrix B from the light source data storage unit 13A, the spectral transmittance f(1) of the diffusing member 402, and the spectral sensitivity S of the imaging device 400 (step S104).
- the spectral radiance calculation unit 11 uses the first estimation matrix W(1) calculated in step S104 to calculate, from the imaging data g(1)RGB, the spectral radiance E(1) of the illumination light incident on the subject OBJ (step S106).
- the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z in the XYZ color system to coordinate values R (1) , G (1) , and B (1) defined in the RGB color system. (Step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) (step S114).
- the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120).
- the input unit 20 linearizes the second imaging data as necessary.
- the estimation matrix calculation unit 22 calculates a second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) calculated in step S122 to calculate the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2) (step S124).
- the image data generation unit 24 generates image data g(OUT)XYZ(m, n), in which color reproduction of the subject OBJ is performed, using the color matching function h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m, n) of the subject OBJ calculated in step S124 (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m, n) generated in step S126 into image data g(OUT)RGB(m, n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)RGB(m, n).
- the configuration has been illustrated in which any one of a plurality of autocorrelation matrices is selected in response to a selection command SEL from the user or the like, and the first estimation matrix W(1) is generated based on the selected autocorrelation matrix.
- the first estimation matrix W(1) is generated using the spectral transmittance f(1) of the diffusing member 402 and the spectral sensitivity S of the imaging device 400, and these values do not change as long as the imaging device 400 and the diffusing member 402 are not exchanged.
- FIG. 7 is a functional configuration diagram of an image processing device 1B according to a modification of the second embodiment of the present invention.
- the image processing apparatus 1B includes an illumination spectrum estimation unit 100B instead of the illumination spectrum estimation unit 100 in the image processing apparatus 1 shown in FIG. 1.
- color reproduction unit 200 is the same as that of image processing apparatus 1 shown in FIG. 1, and therefore detailed description will not be repeated.
- the illumination spectrum estimation unit 100B includes an estimation matrix storage unit 17 instead of the estimation matrix calculation unit 12 and the light source data storage unit 13 in the illumination spectrum estimation unit 100 shown in FIG. Since other parts are the same as those in the first embodiment, detailed description will not be repeated.
- the estimation matrix storage unit 17 stores in advance first estimation matrices W(1)1, W(1)2, ..., W(1)M calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide an illumination environment. In response to a selection command SEL, the estimation matrix storage unit 17 then outputs the selected one of these matrices.
- the first estimation matrices W(1)1, W(1)2, ..., W(1)M are calculated from the matrices B1, B2, ..., BM stored in the light source data storage unit 13A of the image processing apparatus 1A according to the second embodiment.
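Because S and f(1) are fixed for a given camera and diffuser, each W(1)k can be computed once offline from its Bk, leaving only a table lookup at run time. The sketch below assumes the same Wiener form as before, with random stand-ins for all measured quantities.

```python
import numpy as np

rng = np.random.default_rng(6)
k, M = 401, 3
S = rng.random((k, 3))     # spectral sensitivity S (fixed for a given camera)
f1 = rng.random(k)         # diffuser transmittance f(1) (fixed for a given diffuser)
I = S.T * f1               # system matrix I = S^t @ diag(f(1))

# Per-category autocorrelation matrices B1..BM (random stand-ins)
Bs = [E @ E.T for E in (rng.random((k, 6)) for _ in range(M))]

# Precompute W(1)_1..W(1)_M once; at run time only a lookup remains
Ws = [B @ I.T @ np.linalg.inv(I @ B @ I.T) for B in Bs]

SEL = 1                    # selection command picks W(1)_2
W1 = Ws[SEL]
```

This trades a small amount of storage (M matrices of 401×3) for avoiding a 401×401 inversion chain on every selection.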
- FIG. 8 is a flowchart showing an overall processing procedure in image processing apparatus 1B according to the modification of the second embodiment of the present invention. Of the steps in the flowchart shown in FIG. 8, steps having the same contents as those in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
- the input unit 10 receives first imaging data g(1)RGB(m, n) obtained by imaging, through the diffusing member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates imaging data g(1)RGB representing the received first imaging data g(1)RGB(m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
- the estimation matrix storage unit 17 outputs one of the first estimation matrices W(1)1, W(1)2, ..., W(1)M stored in advance, selected according to the selection command SEL, to the spectral radiance calculation unit 11 as the first estimation matrix W(1) (step S105).
- the spectral radiance calculation unit 11 uses the first estimation matrix W(1) selected in step S105 to calculate, from the imaging data g(1)RGB, the spectral radiance E(1) of the illumination light incident on the subject OBJ (step S106).
- the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z in the XYZ color system to coordinate values R (1) , G (1) , and B (1) defined in the RGB color system. (Step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) (step S114).
- the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120).
- the input unit 20 linearizes the second imaging data as necessary.
- The estimation matrix calculation unit 22 calculates a second estimation matrix W (2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E (1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W (2) calculated in step S122 to calculate, from the second imaging data g (2), the spectral reflectance f (2) (m, n) of the subject OBJ (step S124).
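As a rough sketch of steps S122 and S124, a Wiener-type construction of the second estimation matrix is shown below. The formula W (2) = A Se^T (Se A Se^T)^-1, with the illumination folded into the camera sensitivities as Se = S diag(E (1) ), is an assumption consistent with the description; all matrices here are toy placeholders.

```python
import numpy as np

# k wavelengths, 3 camera channels; all values are toy placeholders.
k, ch = 31, 3
rng = np.random.default_rng(1)
A = np.eye(k)                              # autocorrelation of reflectances (toy)
E1 = 0.5 + rng.random(k)                   # estimated illumination spectrum E(1)
S = np.abs(rng.standard_normal((ch, k)))   # spectral sensitivity of the camera

# System matrix: illumination folded into the sensitivities, S * diag(E(1)).
Se = S * E1

# Step S122 (assumed Wiener form): W(2) = A Se^T (Se A Se^T)^-1.
W2 = A @ Se.T @ np.linalg.inv(Se @ A @ Se.T)

# Step S124: per-pixel spectral reflectance from an RGB response.
true_f = 0.2 + 0.6 * rng.random(k)         # simulated reflectance of one pixel
g2 = Se @ true_f                           # its RGB imaging data g(2)
f2 = W2 @ g2                               # estimated reflectance f(2), length k
```

With A = I the estimate reduces to the pseudoinverse solution, so the reconstructed reflectance reproduces the observed RGB response exactly; a nontrivial A adds the statistical prior over subject colors.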
- The image data generation unit 24 uses the color matching function h, the spectral radiance E (1) of the illumination light incident on the subject OBJ, and the spectral reflectance f (2) (m, n) of the subject OBJ calculated in step S124 to generate image data g (OUT) XYZ (m, n) in which the colors of the subject OBJ are reproduced (step S126). Further, the coordinate conversion unit 25 converts the image data g (OUT) XYZ (m, n) generated in step S126 into image data g (OUT) RGB (m, n) defined in the RGB color system (step S128), and outputs the converted image data g (OUT) RGB (m, n).
- Embodiment 3. In Embodiment 2 described above, a configuration is illustrated in which a plurality of autocorrelation matrices are stored in advance, one for each type of light source, and the spectral radiance (spectrum) of the illumination light is estimated using an arbitrarily selected autocorrelation matrix. In contrast, Embodiment 3 described below illustrates a configuration in which the estimation results of the spectral radiance of the illumination light obtained using each of the plurality of autocorrelation matrices are evaluated, and the most appropriate estimation result is output.
- The image processing apparatus according to the present embodiment is the same as the image processing apparatus 1 according to the first embodiment shown in FIG. 1, except that an illumination spectrum estimation unit 100C is provided instead of the illumination spectrum estimation unit 100.
- color reproduction unit 200 is the same as that of image processing apparatus 1 shown in FIG. 1, and therefore detailed description will not be repeated.
- FIG. 9 is a functional configuration diagram of illumination spectrum estimation unit 100C of the image processing device according to the third embodiment of the present invention.
- the color reproduction unit 200 included in the image processing apparatus according to the present embodiment is not shown.
- The illumination spectrum estimation unit 100C includes an input unit 10, spectral radiance calculation units 11A, 11B, 11C, and 11D, a selection unit 18, an evaluation unit 19, a tristimulus value conversion unit 14, a coordinate conversion unit 15, and a white balance calculation unit 16.
- Since the input unit 10, the tristimulus value conversion unit 14, the coordinate conversion unit 15, and the white balance calculation unit 16 have been described in the first embodiment (FIG. 1), their detailed description will not be repeated.
- The spectral radiance calculation units 11A, 11B, 11C, and 11D use first estimation matrices W (1) 1 , W (1) 2 , W (1) 3 , and W (1) 4 , calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment, to calculate from the imaging data g (1) the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , and E (1) 4 , respectively, of the illumination light incident on the subject OBJ.
- The first estimation matrices W (1) 1 , W (1) 2 , W (1) 3 , and W (1) 4 are substantially the same as the first estimation matrices W (1) 1 , W (1) 2 , W (1) 3 , and W (1) 4 stored in the estimation matrix storage unit 17 described above.
- The first estimation matrices W (1) 1 , W (1) 2 , W (1) 3 , and W (1) 4 are calculated, according to the same procedure as described above, based on the autocorrelation matrices B 1 , B 2 , B 3 , and B 4 determined in advance for each type of the plurality of light source candidates that can be used to provide the illumination environment.
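The derivation of each first estimation matrix from its autocorrelation matrix can be sketched in the same Wiener form, W (1) i = B i St^T (St B i St^T)^-1 with the diffuser transmittance folded into the sensitivities. The exact normalization and all numeric values below are illustrative assumptions.

```python
import numpy as np

def first_estimation_matrix(B, t, S):
    """Assumed Wiener form: W(1) = B St^T (St B St^T)^-1, St = S * diag(t)."""
    St = S * t                  # fold the diffuser spectral transmittance into S
    return B @ St.T @ np.linalg.inv(St @ B @ St.T)

k, ch = 31, 3
rng = np.random.default_rng(2)
S = np.abs(rng.standard_normal((ch, k)))   # camera spectral sensitivity (toy)
t = 0.5 + 0.5 * rng.random(k)              # diffuser spectral transmittance (toy)

# One autocorrelation matrix B_i per light-source class
# (fluorescent, incandescent, xenon, and all lamps combined).
Es = [0.5 + rng.random((k, 8)) for _ in range(4)]           # sample spectra
Bs = [E @ E.T / E.shape[1] + 1e-9 * np.eye(k) for E in Es]  # B_i = E E^T / n
W1 = [first_estimation_matrix(B, t, S) for B in Bs]         # each k x ch
```

Each W (1) i maps a 3-channel camera response back to a k-sample spectrum, weighted by the statistics of its light-source class.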
- The number of first estimation matrices is not restricted to the four illustrated here.
- Although FIG. 9 illustrates a configuration in which the first estimation matrices W (1) 1 , W (1) 2 , W (1) 3 , W (1) 4 are calculated in advance, these matrices may instead be calculated dynamically in each calculation process, as in the illumination spectrum estimation unit 100A.
- It is assumed that the first estimation matrix W (1) 1 is calculated from the autocorrelation matrix B 1 created based on fluorescent lamp statistical data,
- that the first estimation matrix W (1) 2 is calculated from the autocorrelation matrix B 2 created based on incandescent lamp statistical data,
- that the first estimation matrix W (1) 3 is calculated from the autocorrelation matrix B 3 created based on xenon lamp statistical data,
- and that the first estimation matrix W (1) 4 is calculated from the autocorrelation matrix B 4 created based on statistical data including all of the fluorescent lamp, the incandescent lamp, and the xenon lamp.
- The spectral radiance calculation units 11A, 11B, 11C, and 11D output the calculated spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , and E (1) 4 of the illumination light to the selection unit 18, respectively.
- The selection unit 18 selects, according to the evaluation result by the evaluation unit 19 described later, one of the input spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , and E (1) 4 of the illumination light, and outputs it as the spectral radiance E (1) of the illumination light.
- The evaluation unit 19 evaluates which of the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , and E (1) 4 of the illumination light calculated by the spectral radiance calculation units 11A, 11B, 11C, and 11D is most appropriately estimated. More specifically, the evaluation unit 19 evaluates the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , and E (1) 4 of the illumination light by comparing them with reference patterns defined in advance.
- As the reference patterns, reference patterns E (1) 1AVE , E (1) 2AVE , and E (1) 3AVE are used, each calculated from the spectral radiances (statistical values or actually measured values) of the light sources used in generating the first estimation matrices W (1) 1 , W (1) 2 , and W (1) 3 (or the corresponding autocorrelation matrices B 1 , B 2 , and B 3 ), respectively. More specifically, for example, the reference pattern E (1) 1AVE corresponding to the first estimation matrix W (1) 1 is calculated by averaging the elements of the light source group matrix E st (see FIG. 3) from which the autocorrelation matrix B 1 used in the calculation of the first estimation matrix W (1) 1 was generated. That is, a spectral radiance (spectrum) representative of each of a fluorescent lamp, an incandescent lamp, and a xenon lamp is calculated in advance as a reference pattern.
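The reference-pattern construction described above amounts to a column average of the light source group matrix E st. A minimal sketch, with an illustrative random E st standing in for measured spectra:

```python
import numpy as np

# Columns of E_st hold measured spectra of light sources of one class
# (random placeholders here); the reference pattern is their average.
k, n_sources = 31, 10
rng = np.random.default_rng(3)
E_st = 0.5 + rng.random((k, n_sources))

ref_pattern = E_st.mean(axis=1)     # elementwise average -> E(1) 1AVE
```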
- Note that a reference pattern corresponding to the first estimation matrix W (1) 4 need not be calculated. This is because the autocorrelation matrix B 4 corresponding to the first estimation matrix W (1) 4 is created based on statistical data including all of the fluorescent lamp, the incandescent lamp, and the xenon lamp; even if a reference pattern were created from B 4 , the characteristics of each light source would be blurred, and the effect of such a reference pattern would be diluted.
- Hereinafter, the process in which the evaluation unit 19 evaluates the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , E (1) 4 of the illumination light will be described.
- FIG. 10 is a diagram for explaining the comparison process, performed by the evaluation unit 19, between the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 of the illumination light and the reference patterns E (1) 1AVE , E (1) 2AVE , E (1) 3AVE .
- FIG. 11 is a diagram for explaining the similarity calculation process in FIG. 10.
- The evaluation unit 19 compares the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 of the illumination light calculated by the spectral radiance calculation units 11A, 11B, and 11C with the reference patterns E (1) 1AVE , E (1) 2AVE , E (1) 3AVE , respectively, and calculates a comparison result (as an example, a similarity).
- It is assumed that the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 and the reference patterns E (1) 1AVE , E (1) 2AVE , E (1) 3AVE are all normalized to the range of 0 to 1.
- The evaluation unit 19 evaluates how similar each spectral radiance is to the corresponding reference pattern. Typically, the evaluation unit 19 calculates the similarity based on the deviation between the two waveforms over the wavelength region.
- FIG. 11 is a diagram for explaining a comparison process between the spectral radiance E (1) 1 of the illumination light calculated by the spectral radiance calculation unit 11A and the reference pattern E (1) 1AVE .
- FIG. 11A shows a state in which the spectral radiance E (1) 1 of the illumination light and the reference pattern E (1) 1AVE are plotted over the same wavelength region, and FIG. 11B shows the process of calculating the deviation between them.
- The evaluation unit 19 sequentially calculates the deviation (normalized value) err j between the spectral radiance E (1) 1 of the illumination light and the reference pattern E (1) 1AVE at each sampling wavelength λ j (1 ≤ j ≤ k). Subsequently, the evaluation unit 19 calculates an evaluation result (similarity) by taking the overall average of the deviations err j over all sampling wavelengths λ j . That is, the similarity SM can be calculated by an arithmetic expression using the deviations err j at the sampling wavelengths λ j .
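Since the arithmetic expression itself is not reproduced in this excerpt, the following sketch assumes one natural form: the similarity SM is one minus the average deviation err j between the normalized waveforms, so that identical waveforms give SM = 1. The two example spectra are toy stand-ins for a fluorescent and an incandescent lamp.

```python
import numpy as np

def similarity(E, E_ref):
    """Assumed form of SM: one minus the mean deviation err_j between
    the two waveforms after normalizing each to the 0-1 range."""
    E = E / E.max()
    E_ref = E_ref / E_ref.max()
    err = np.abs(E - E_ref)          # deviation at each sampling wavelength
    return 1.0 - err.mean()

wl = np.linspace(400, 700, 31)                 # sampling wavelengths (nm)
E_fluor = np.exp(-((wl - 545) / 30) ** 2)      # peaky, lamp-like spectrum (toy)
E_incand = (wl - 380) / 320                    # smooth rising spectrum (toy)

sm_same = similarity(E_fluor, E_fluor)         # identical waveforms
sm_diff = similarity(E_fluor, E_incand)        # dissimilar waveforms
```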
- FIG. 10 shows the measured similarity when the image processing method according to the present embodiment is performed under the illumination environment of a fluorescent lamp.
- the similarity of the spectral radiance E (1) 1 of the illumination light estimated based on the first estimation matrix W (1) 1 is the highest.
- the spectral radiance E (1) 1 of the illumination light is output as the spectral radiance E (1) .
- This evaluation result agrees with the fact that the measurement was actually performed under the fluorescent lamp illumination environment.
- When the illumination environment is provided by a combination of a fluorescent lamp, an incandescent lamp, and a xenon lamp, or by a light source other than these, it may be appropriate to output, as the spectral radiance E (1) of the illumination light, the spectral radiance E (1) 4 estimated based on the first estimation matrix W (1) 4 that reflects the characteristics of all of the fluorescent lamp, the incandescent lamp, and the xenon lamp.
- Therefore, when the evaluation results (similarities) for the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 of the illumination light all fall below an allowable value, the evaluation unit 19 outputs the spectral radiance E (1) 4 as the spectral radiance E (1) of the illumination light.
- Instead of the configuration in which the spectral radiance calculation units 11A, 11B, 11C, and 11D calculate the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , E (1) 4 of the illumination light in parallel, the calculation process of the spectral radiance of the illumination light and the calculation process of the similarity may be executed sequentially for each of the first estimation matrices.
- In that case, the sequential execution may be terminated once a similarity equal to or higher than a predetermined threshold (for example, 95%) is obtained.
- the similarity may be calculated using a correlation coefficient or the like.
- FIG. 12 is a flowchart showing an overall processing procedure in the image processing apparatus according to the third embodiment of the present invention. Of the steps in the flowchart shown in FIG. 12, steps having the same contents as those in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
- FIG. 13 is a flowchart showing the procedure of the evaluation subroutine of step S108 shown in FIG. 12.
- The input unit 10 accepts the first imaging data g (1) RGB (m, n) obtained by imaging, through the diffusing member 402, at least a part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates imaging data g (1) RGB representing the accepted first imaging data g (1) RGB (m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
- The spectral radiance calculation units 11A, 11B, 11C, and 11D use the first estimation matrices W (1) 1 , W (1) 2 , W (1) 3 , and W (1) 4 , respectively, to calculate from the imaging data g (1) RGB the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , E (1) 4 of the illumination light incident on the subject OBJ (step S107).
- The evaluation unit 19 executes the evaluation subroutine to evaluate which of the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , and E (1) 4 of the illumination light calculated in step S107 has the highest estimation accuracy (step S108). Further, the selection unit 18 outputs, according to the evaluation result in step S108, one of the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , E (1) 4 as the spectral radiance E (1) of the illumination light (step S109).
- The tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z in the XYZ color system into the coordinate values R (1) , G (1) , and B (1) defined in the RGB color system (step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) (step S114).
- the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120).
- the input unit 20 linearizes the second imaging data as necessary.
- The estimation matrix calculation unit 22 calculates a second estimation matrix W (2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E (1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W (2) calculated in step S122 to calculate, from the second imaging data g (2), the spectral reflectance f (2) (m, n) of the subject OBJ (step S124).
- The image data generation unit 24 uses the color matching function h, the spectral radiance E (1) of the illumination light incident on the subject OBJ, and the spectral reflectance f (2) (m, n) of the subject OBJ calculated in step S124 to generate image data g (OUT) XYZ (m, n) in which the colors of the subject OBJ are reproduced (step S126). Further, the coordinate conversion unit 25 converts the image data g (OUT) XYZ (m, n) generated in step S126 into image data g (OUT) RGB (m, n) defined in the RGB color system (step S128), and outputs the converted image data g (OUT) RGB (m, n).
- The evaluation unit 19 compares the spectral radiance E (1) 1 of the illumination light with the predetermined reference pattern E (1) 1AVE to calculate the similarity SM 1 between the two (step S200). Similarly, the evaluation unit 19 compares the spectral radiance E (1) 2 of the illumination light with the predetermined reference pattern E (1) 2AVE to calculate the similarity SM 2 between the two (step S202). Similarly, the evaluation unit 19 compares the spectral radiance E (1) 3 of the illumination light with the predetermined reference pattern E (1) 3AVE to calculate the similarity SM 3 between the two (step S204).
- the evaluation unit 19 extracts the one having the highest value among the similarities SM 1 , SM 2 , SM 3 calculated in steps S200, S202, S204 (step S206). Furthermore, the evaluation unit 19 determines whether or not the similarity extracted in step S206 is greater than or equal to a predetermined allowable value (step S208).
- When the similarity extracted in step S206 is greater than or equal to the allowable value, the evaluation unit 19 evaluates that the spectral radiance corresponding to that similarity has the highest estimation accuracy (step S210).
- Otherwise, the evaluation unit 19 evaluates that the spectral radiance E (1) 4 of the illumination light, rather than the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , has the highest estimation accuracy (step S212).
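The subroutine of steps S200 to S212 reduces to an argmax with a fallback. In this sketch the 95% threshold follows the example given in the description; the candidate spectra are placeholders.

```python
import numpy as np

def select_spectrum(spectra, similarities, fallback, allowable=0.95):
    best = int(np.argmax(similarities))     # steps S200-S206: highest similarity
    if similarities[best] >= allowable:     # step S208: allowable-value check
        return spectra[best]                # step S210
    return fallback                         # step S212

# Toy candidates for E(1)_1..E(1)_3 and the all-lamp estimate E(1)_4.
E_candidates = [np.full(31, v) for v in (0.2, 0.4, 0.6)]
E4 = np.full(31, 0.8)

chosen = select_spectrum(E_candidates, [0.97, 0.90, 0.88], E4)
fallback_chosen = select_spectrum(E_candidates, [0.90, 0.80, 0.70], E4)
```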
- Thereafter, the processing proceeds to step S109 in FIG. 12.
- <Operational effects of the present embodiment>
- According to the present embodiment, the same operational effects as those of the first embodiment described above can be obtained, and in addition, the spectral radiance of the illumination light can be obtained with high estimation accuracy even by a user who has no prior knowledge. Therefore, even when the subject OBJ is imaged under various conditions, the estimation accuracy of the spectral radiance of the illumination light can be maintained.
- FIG. 14 is a schematic configuration diagram of a computer that realizes an image processing apparatus 1 # according to a modification of the embodiment of the present invention.
- The computer includes a computer main body 150 equipped with an FD (Flexible Disk) driving device 166 and a CD-ROM (Compact Disk-Read Only Memory) driving device 168, a monitor 152, a keyboard 154, and a mouse 156.
- The computer main body 150 further includes a CPU (Central Processing Unit) 160 as an arithmetic device, a memory 162, a fixed disk 164 as a storage device, and a communication interface 170, which are connected to one another via a bus.
- Image processing apparatus 1 # can be realized by CPU 160 executing software using computer hardware such as memory 162.
- such software is stored in a recording medium such as the FD 166a or the CD-ROM 168a, or distributed via a network or the like.
- Such software is read from the recording medium by the FD driving device 166 or the CD-ROM driving device 168 or received by the communication interface 170 and stored in the fixed disk 164. Further, it is read from the fixed disk 164 to the memory 162 and executed by the CPU 160.
- the monitor 152 is a display unit for displaying information output by the CPU 160, and includes, for example, an LCD (Liquid Crystal Display), a CRT (Cathode Ray Tube), and the like.
- the mouse 156 receives a command from a user corresponding to an operation such as click or slide.
- the keyboard 154 receives a command from the user corresponding to the input key.
- the CPU 160 is an arithmetic processing unit that executes various arithmetic operations by sequentially executing programmed instructions.
- the memory 162 stores various types of information according to the program execution of the CPU 160.
- the communication interface 170 converts the information output from the CPU 160 into, for example, an electrical signal and sends it to another device, and receives the electrical signal from the other device and converts it into information that can be used by the CPU 160.
- Fixed disk 164 is a non-volatile storage device that stores programs executed by CPU 160 and predetermined data. In addition, other output devices such as a printer may be connected to the computer as necessary.
- The program according to the present embodiment may be a program that executes processing by calling necessary modules, among program modules provided as part of the computer operating system (OS), in a predetermined arrangement at predetermined timing. In that case, the program itself does not include those modules, and the processing is executed in cooperation with the OS. Such a program that does not include the modules can also be included in the program according to the present invention.
- the program according to the present embodiment may be provided by being incorporated in a part of another program. Even in this case, the program itself does not include the module included in the other program, and the process is executed in cooperation with the other program.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Color Image Communication Systems (AREA)
- Color Television Image Signal Generators (AREA)
- Spectrometry And Color Measurement (AREA)
Abstract
An estimation matrix calculation unit (12) calculates a first estimation matrix (W(1)) based on the autocorrelation matrix (B) of the spectral radiance of a light source candidate that can be used to provide an illumination environment for a subject, the spectral transmittance (f(1)) of a diffusing member, and the spectral sensitivity (S) of an imaging device. A spectral radiance calculation unit (11) calculates the spectral radiance (E(1)) of the illumination light incident on the subject (OBJ) from imaging data (g(1)RGB) by using the first estimation matrix (W(1)) calculated by the estimation matrix calculation unit (12).
Description
The present invention relates to an image processing apparatus and an image processing method capable of calculating the spectral radiance of the illumination light irradiating a subject when the subject is imaged.
In recent years, techniques have been proposed for accurately reproducing, on an output device such as a display device or a printing device, the colors of a subject imaged under various illumination environments.
As a representative technique, a color management technique based on the spectral reflectance (reflection spectrum) of a subject is known. This technique is realized by handling the colors of the subject in the wavelength domain, and enables accurate color reproduction regardless of the illumination environment of the subject. The basic principle of imaging processing based on the spectral reflectance of a subject is disclosed in Yoichi Miyake, "Introduction to Spectral Image Processing", The University of Tokyo Press, February 24, 2006.
Incidentally, in order to estimate the spectral reflectance of a subject, the spectral radiance (illumination spectrum) under the illumination environment at the time of imaging must be acquired in advance as a precondition. This is because the spectral radiance from the subject is determined according to the spectral radiance of the illumination light and the spectral reflectance of the subject; unless the spectral radiance of the illumination light is known, the spectral reflectance of the subject cannot be accurately calculated from the imaging data obtained by imaging the subject.
The spectral radiance of the illumination light is also used for white balance adjustment of an imaging apparatus. White balance adjustment is an operation of determining coefficients for mutually adjusting the levels of the luminance values output from the plurality of imaging elements constituting the imaging apparatus. If the white balance is off, a white subject imaged by the imaging apparatus is output in a color different from the original white (for example, a reddish color), and accurate color reproduction cannot be performed. White balance adjustment is disclosed in Japanese Patent Application Laid-Open Nos. 2001-057680 and 2005-328386.
Japanese Patent Laid-Open No. 2001-056780; Japanese Patent Laid-Open No. 2005-328386; Yoichi Miyake, "Introduction to Spectral Image Processing", The University of Tokyo Press, February 24, 2006
Conventionally, the spectral radiance of illumination light has been measured exclusively with a dedicated measuring device such as a spectroradiometer. A spectroradiometer disperses the light incident through its optical system with a diffraction grating, receives the dispersed light with an image sensor (a CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) sensor, or the like), and thereby acquires a luminance value for each wavelength. To realize the color management technique described above, a measuring device for measuring the spectral radiance is therefore required in addition to an imaging apparatus such as a still camera or a video camera, which raises the problem of increased cost.
Moreover, in order to measure the illumination light irradiating the subject, it is normally necessary to place a standard white plate of known spectral reflectance near the subject and to measure the light reflected from its surface with a spectroradiometer. This raises the further problem that time and effort are required for the measurement in addition to the imaging.
Accordingly, the present invention has been made to solve these problems, and an object of the present invention is to provide an image processing apparatus and an image processing method capable of easily calculating the spectral radiance of the illumination light irradiating a subject by using the imaging apparatus that images the subject.
According to an aspect of the present invention, an image processing apparatus capable of performing image processing on imaging data captured by an imaging apparatus is provided. The image processing apparatus includes: an input unit that accepts first imaging data obtained by imaging, with the imaging apparatus and through a diffusing member, at least a part of the light incident on a subject in an illumination environment; and a first calculation unit that calculates, from the first imaging data, the spectral radiance of the illumination light incident on the subject by using a first estimation matrix calculated based on the autocorrelation matrix of the spectral radiances of light source candidates that can be used to provide the illumination environment, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging apparatus.
Preferably, the spectral radiance of each light source candidate is a characteristic value acquired in advance for each type of light source.
Preferably, the diffusing member is disposed on the optical axis of the imaging apparatus, and the incident intensity at the diffusing member is expressed by a predetermined function of the angle with respect to the optical axis.
More preferably, the function is a cosine function of the angle with respect to the optical axis.
Preferably, the imaging apparatus is configured to output, as the imaging data, coordinate values defined in the RGB color system. The image processing apparatus further includes: a second calculation unit that calculates, by using the spectral radiance of the illumination light and a color matching function, the coordinate values in the RGB color system corresponding to the spectral radiance of the illumination light; and a third calculation unit that calculates the white balance in the imaging apparatus based on the ratio of the coordinate values calculated by the second calculation unit.
Preferably, the image processing apparatus further includes a fourth calculation unit that calculates the spectral reflectance of the subject from second imaging data, obtained by imaging the subject with the imaging apparatus in the illumination environment, by using a second estimation matrix calculated based on the spectral radiance of the illumination light, the spectral sensitivity of the imaging apparatus, and the autocorrelation matrix of the spectral reflectances of colors that can be included in the subject.
More preferably, the image processing apparatus further includes a generation unit that, based on the spectral reflectance of the subject calculated by the fourth calculation unit, generates image data that would be acquired if the subject were imaged under a predetermined illumination environment.
According to another aspect of the present invention, there is provided an image processing apparatus capable of performing image processing on imaging data captured by an imaging apparatus. The image processing apparatus includes: an input unit that receives first imaging data obtained by imaging, through a diffusing member and using the imaging device, at least part of the light incident on a subject under an illumination environment; a selection unit that, in accordance with an external command, selects one of calculation matrices predetermined for the respective types of a plurality of light source candidates that can be used to provide the illumination environment; and a first calculation unit that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the calculation matrix selected by the selection unit, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device. Each of the calculation matrices is the autocorrelation matrix of a matrix representing the spectral radiance of the corresponding light source candidate.
According to still another aspect of the present invention, there is provided an image processing apparatus capable of performing image processing on imaging data captured by an imaging apparatus. The image processing apparatus includes: an input unit that receives first imaging data obtained by imaging, through a diffusing member and using the imaging device, at least part of the light incident on a subject under an illumination environment; a selection unit that, in accordance with an external command, selects one of a plurality of first estimation matrices calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and a first calculation unit that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using the first estimation matrix selected by the selection unit. Each first estimation matrix is calculated based on the autocorrelation matrix of a matrix representing the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
According to still another aspect of the present invention, there is provided an image processing apparatus capable of performing image processing on imaging data captured by an imaging apparatus. The image processing apparatus includes: an input unit that receives first imaging data obtained by imaging, through a diffusing member and using the imaging device, at least part of the light incident on a subject under an illumination environment; a first calculation unit that calculates, from the first imaging data, candidates for the spectral radiance of the illumination light incident on the subject, using a plurality of first estimation matrices respectively calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and an evaluation unit that evaluates each calculated spectral radiance candidate by comparison with a predetermined reference pattern and outputs one of them as the spectral radiance of the illumination light under the illumination environment. Each first estimation matrix is calculated based on the autocorrelation matrix of a matrix representing the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
An image processing method according to still another aspect of the present invention includes: a step of acquiring first imaging data by imaging, through a diffusing member and using an imaging device, at least part of the light incident on a subject under an illumination environment; and a step of calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the autocorrelation matrix of the spectral radiance of a light source candidate that can be used to provide the illumination environment, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
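The claims above repeatedly combine an autocorrelation matrix of light-source spectra, the diffuser's spectral transmittance, and the camera's spectral sensitivity into a "first estimation matrix". One standard concrete form for such a matrix is Wiener estimation, W = A Sᵀ (S A Sᵀ)⁻¹, where A is the autocorrelation matrix and each row of S is a camera band sensitivity weighted per wavelength by the diffuser transmittance. The sketch below is a hedged illustration under that assumption; the function names and the 3-wavelength toy values are not from the patent, and the real system would use 401 wavelength samples rather than 3.

```python
# Hedged sketch: one standard way (Wiener estimation) to form a "first
# estimation matrix" from the quantities named in the text. Toy values only.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inverse(A):
    """Gauss-Jordan inverse (with partial pivoting) for a small square matrix."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    return [row[n:] for row in M]

def wiener_estimation_matrix(A, camera_sensitivity, diffuser_transmittance):
    """W = A S^T (S A S^T)^-1, where row k of S is the camera's k-th band
    sensitivity multiplied per wavelength by the diffuser transmittance."""
    S = [[s * t for s, t in zip(band, diffuser_transmittance)]
         for band in camera_sensitivity]
    St = transpose(S)
    return matmul(matmul(A, St), inverse(matmul(matmul(S, A), St)))

# Toy setup: 3 wavelength samples, 3 (R, G, B) bands.
A = [[1.0, 0.5, 0.2], [0.5, 1.0, 0.5], [0.2, 0.5, 1.0]]   # autocorrelation of light-source spectra
sens = [[1.0, 0.1, 0.0], [0.1, 1.0, 0.1], [0.0, 0.1, 1.0]]  # camera band sensitivities
trans = [0.9, 0.9, 0.9]                                     # diffuser transmittance
W = wiener_estimation_matrix(A, sens, trans)
# Estimated illuminant spectrum = W applied to the RGB values of the first imaging data.
```

In this square toy case W reduces exactly to S⁻¹; in the realistic setting (401 wavelengths, 3 bands) W is a 401×3 matrix, and the autocorrelation term A is what makes the underdetermined estimate well behaved.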
An image processing method according to still another aspect of the present invention includes: a step of acquiring first imaging data by imaging, through a diffusing member and using an imaging device, at least part of the light incident on a subject under an illumination environment; a step of selecting one of a plurality of calculation matrices predetermined for the respective types of light source candidates that can be used to provide the illumination environment; and a step of calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the selected calculation matrix, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device. Each of the calculation matrices is the autocorrelation matrix of a matrix representing the spectral radiance of the corresponding light source candidate.
An image processing method according to still another aspect of the present invention includes: a step of acquiring first imaging data by imaging, through a diffusing member and using an imaging device, at least part of the light incident on a subject under an illumination environment; a step of selecting one of a plurality of first estimation matrices calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and a step of calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using the selected first estimation matrix. Each first estimation matrix is calculated based on the autocorrelation matrix of a matrix representing the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
An image processing method according to still another aspect of the present invention includes: a step of acquiring first imaging data by imaging, through a diffusing member and using an imaging device, at least part of the light incident on a subject under an illumination environment; a step of calculating, from the first imaging data, candidates for the spectral radiance of the illumination light incident on the subject, using a plurality of first estimation matrices respectively calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and a step of evaluating each calculated spectral radiance candidate by comparison with a predetermined reference pattern and outputting one of them as the spectral radiance of the illumination light under the illumination environment. Each first estimation matrix is calculated based on the autocorrelation matrix of a matrix representing the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
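The evaluation step above scores each candidate spectrum against a predetermined reference pattern and outputs the best match. The sketch below is a hedged illustration of that selection: the RMS-difference metric, the pairing of each candidate with the reference pattern of the same light-source candidate, and the toy spectra are all assumptions, since the patent text does not fix a specific comparison metric here.

```python
# Hedged sketch of the evaluation step: score each candidate spectrum
# (estimated with a different light-source-specific first estimation matrix)
# against its reference pattern and output the best-matching one.

def rms_difference(candidate, reference):
    """Root-mean-square difference between two sampled spectra (assumed metric)."""
    return (sum((c - r) ** 2 for c, r in zip(candidate, reference)) / len(candidate)) ** 0.5

def select_spectral_radiance(candidates, reference_patterns):
    """Return the index and spectrum of the candidate closest to its reference."""
    scores = [rms_difference(c, r) for c, r in zip(candidates, reference_patterns)]
    best = min(range(len(scores)), key=scores.__getitem__)
    return best, candidates[best]

# Toy spectra: candidate 1 tracks its reference pattern far better than candidate 0,
# so the evaluation outputs candidate 1 as the estimated illumination spectrum.
candidates = [[1.0, 0.2, 0.9], [0.3, 1.0, 0.4]]
references = [[0.2, 1.0, 0.1], [0.3, 0.9, 0.4]]
idx, spectrum = select_spectral_radiance(candidates, references)
```

The idea is that an estimation matrix built for the wrong light-source type produces a spectrum that deviates strongly from that type's reference pattern, so the minimum-score candidate identifies the actual light source.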
According to the present invention, it is possible to easily calculate the spectral radiance of the illumination light applied to the subject using the imaging device for imaging the subject.
ãïœïŒïŒïŒ¡ïœïŒãå
軞ãã被åäœãïŒïŒïŒïŒ¡ïŒïŒïŒ¢ãç»ååŠçè£
眮ãïŒïŒïŒïŒïŒãå
¥åéšãïŒïŒïŒïŒïŒïŒ¡ïŒïŒïŒïŒ¢ïŒïŒïŒïŒ£ïŒïŒïŒïŒ€ãåå
æŸå°èŒåºŠç®åºéšãïŒïŒãæšå®è¡åç®åºéšãïŒïŒïŒïŒïŒïŒ¡ãå
æºããŒã¿æ ŒçŽéšãïŒïŒãäžåºæ¿å€å€æéšãïŒïŒã座æšå€æéšãïŒïŒããã¯ã€ããã©ã³ã¹ç®åºéšãïŒïŒãæšå®è¡åæ ŒçŽéšãïŒïŒãéžæéšãïŒïŒãè©äŸ¡éšãïŒïŒãåå
åå°çç®åºéšãïŒïŒãæšå®è¡åç®åºéšãïŒïŒãåå
åå°çããŒã¿æ ŒçŽéšãïŒïŒãç»åããŒã¿çæéšãïŒïŒã座æšå€æéšãïŒïŒïŒïŒïŒïŒïŒïŒ¡ïŒïŒïŒïŒïŒ¢ïŒïŒïŒïŒïŒ£ãç
§æã¹ãã¯ãã«æšå®éšãïŒïŒïŒãã³ã³ãã¥ãŒã¿æ¬äœãïŒïŒïŒãã¢ãã¿ãïŒïŒïŒãããŒããŒããïŒïŒïŒãããŠã¹ãïŒïŒïŒãã¡ã¢ãªãïŒïŒïŒãåºå®ãã£ã¹ã¯ãïŒïŒïŒãé§åè£
眮ãïŒïŒïŒãïŒïŒ²ïŒ¯ïŒé§åè£
眮ãïŒïŒïŒãéä¿¡ã€ã³ã¿ãŒãã§ãŒã¹ãïŒïŒïŒãè²åçŸéšãïŒïŒïŒãå
æºãïŒïŒïŒãæ®åè£
眮ãïŒïŒïŒãæ¡æ£éšæã
Ax1, Ax2 optical axis, OBJ subject, 1, 1A, 1B image processing device, 10, 20 input unit, 11, 11A, 11B, 11C, 11D spectral radiance calculation unit, 12 estimation matrix calculation unit, 13, 13A light source data storage unit, 14 tristimulus value conversion unit, 15 coordinate conversion unit, 16 white balance calculation unit, 17 estimation matrix storage unit, 18 selection unit, 19 evaluation unit, 21 spectral reflectance calculation unit, 22 estimation matrix calculation unit, 23 spectral reflectance data storage unit, 24 image data generation unit, 25 coordinate conversion unit, 100, 100A, 100B, 100C illumination spectrum estimation unit, 150 computer body, 152 monitor, 154 keyboard, 156 mouse, 162 memory, 164 fixed disk, 166 FD drive, 168 CD-ROM drive, 170 communication interface, 200 color reproduction unit, 300 light source, 400 imaging device, 402 diffusing member.
Embodiments of the present invention will be described in detail with reference to the drawings. Note that the same or corresponding parts in the drawings are denoted by the same reference numerals and description thereof will not be repeated.
ãå®æœã®åœ¢æ
ïŒïŒœ
ãïŒå šäœæ§æïŒ
ãå³ïŒã¯ããã®çºæã®å®æœã®åœ¢æ ïŒã«åŸãç»ååŠçè£ çœ®ïŒã®æ©èœæ§æå³ã§ããã [Embodiment 1]
<Overall configuration>
FIG. 1 is a functional configuration diagram of animage processing apparatus 1 according to the first embodiment of the present invention.
Referring to FIG. 1, the image processing apparatus 1 can execute the image processing method according to the present embodiment on first imaging data g (1) RGB (m, n) and second imaging data g (2) RGB (m, n) captured by an imaging apparatus described later.
More specifically, the image processing apparatus 1 includes an illumination spectrum estimation unit 100 and a color reproduction unit 200. The illumination spectrum estimation unit 100 calculates the spectral radiance E (1) (illumination spectrum) of the illumination light incident on the subject using the first imaging data g (1) RGB (m, n). Subsequently, the color reproduction unit 200 calculates the spectral reflectance of the subject from the second imaging data g (2) RGB (m, n) using the calculated spectral radiance E (1) . Furthermore, the color reproduction unit 200 outputs image data g (OUT) RGB (m, n) obtained by performing color reproduction of the subject based on the calculated spectral reflectance of the subject. The image data g (OUT) RGB (m, n) output from the color reproduction unit 200 is typically output to an output device (not shown) such as a display device (display) or a printing device (printer). Alternatively, it may be stored in a storage device (not shown).
The image processing apparatus 1 is typically realized by hardware, but a part or all of the image processing apparatus 1 may be realized by software as will be described later.
<Acquisition of imaging data>
FIG. 2 is a diagram for describing the method of acquiring the imaging data to be processed in image processing apparatus 1 according to the first embodiment of the present invention. FIG. 2 shows a case where the subject OBJ is imaged under a predetermined illumination environment. FIG. 2A shows the procedure for acquiring the first imaging data g (1) RGB (m, n), and FIG. 2B shows the procedure for acquiring the second imaging data g (2) RGB (m, n).
First, the imaging device 400 is used to acquire (capture) the imaging data. As an example, the imaging device 400 is a digital still camera or a digital video camera, and includes an imaging element (typically a CCD, CMOS sensor, or the like) having spectral sensitivity characteristics in specific wavelength bands. The imaging element includes a plurality of pixels arranged in a matrix, and outputs, as imaging data, a luminance corresponding to the intensity of the light incident on each pixel. At this time, the luminance output from each imaging element takes a value corresponding to its spectral sensitivity. Each specific wavelength band that the imaging device can capture is referred to as a band; in the present embodiment, the case of using a typical three-band imaging device 400 whose bands mainly have the spectral sensitivity characteristics of R (red), G (green), and B (blue) will be described. As the device structure, either a structure in which the plural types of imaging elements are formed on a single substrate, or a structure in which the corresponding types of imaging elements are formed on separate substrates, can be adopted. Further, the spectral sensitivity characteristic of each band may be determined by making the spectral sensitivities of the elements themselves different, or by using elements having the same spectral sensitivity and providing R, G, and B filters on the incident-light side of the respective elements.
As described above, the imaging data output by the imaging device 400 is three-dimensional color information consisting of R, G, and B luminance values (typically 12 bits each: 0 to 4095 gradations). In this way, the imaging data output by the imaging device 400 is defined in the RGB color system. Hereinafter, (m, n) in the imaging data g (1) RGB (m, n) and g (2) RGB (m, n) represents the coordinates of the corresponding pixel in the imaging element of the imaging device 400. That is, imaging data g (1) RGB (m, n), g (2) RGB (m, n) = [(luminance value detected by the R imaging element at coordinates (m, n)), (luminance value detected by the G imaging element at coordinates (m, n)), (luminance value detected by the B imaging element at coordinates (m, n))].
Referring to FIG. 2A, assume that illumination light emitted from some light source 300 irradiates the subject OBJ. To first acquire the first imaging data g (1) RGB (m, n) under the illumination environment realized by such a light source 300, the imaging device 400 is used to capture at least part of the light incident on the subject OBJ from the light source 300 as obtained through the diffusing member 402 (that is, the light after it has passed through the diffusing member 402).
More specifically, when the first imaging data g (1) RGB (m, n) is acquired, the imaging device 400 is arranged so that its optical axis Ax1 lies on one of the paths along which the illumination light is incident on the subject OBJ. Furthermore, the diffusing member 402 is disposed between the imaging device 400 and the light source 300 on this optical axis Ax1 (preferably in the immediate vicinity of the imaging device 400). Note that the paths of the illumination light incident on the subject OBJ include a path along which light from the light source 300 is directly incident on the subject OBJ, and a path along which light from the light source 300 is indirectly incident on the subject OBJ after being reflected by a wall or the like.
The diffusion member 402 is a member for spatially diffusing the light imaged by the imaging device 400, that is, for spatially averaging, and a milky white diffusion plate having a known spectral transmittance is typically used. Alternatively, an integrating sphere or the like may be used. By using such a diffusing member 402, the intensity distribution of the illumination light incident on the imaging device 400 can be made uniform, thereby increasing the estimation accuracy of the spectral radiance of the illumination light described later.
Further, when a milky white diffusion plate is used as the diffusing member 402, it is preferable to use a diffusion plate having a predetermined incident angle characteristic (generally referred to as a cosine collector, cosine diffuser, cosine receptor, or the like). In such a diffusion plate, the incident intensity of the light after passing through the diffusing member 402 follows a cosine function of the angle (solid angle) with respect to the optical axis Ax1 of the imaging device 400. By using a diffusion plate with such a cosine incident angle characteristic, imaging data reflecting the amount of illumination light energy incident per unit area (spectral irradiance) can be acquired without performing any special calculation. In addition, when a diffusion plate having no particular incident angle characteristic is used, the viewing angle of the imaging device 400 needs to be made relatively small in order to suppress disturbance light; when a diffusion plate having the incident angle characteristic described above is used, the illumination light from the light source 300 can be imaged without considering the viewing angle of the imaging device 400.
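The cosine-collector property described above can be illustrated with a short sketch. This is only an illustration of the geometric weighting, not the patent's processing: the function name and numeric values are assumptions.

```python
# Illustration (not the patent's code): a cosine collector transmits light
# arriving at angle theta from the optical axis with weight cos(theta),
# which is exactly the per-unit-area (irradiance) weighting, so no extra
# angular correction of the captured data is needed.
import math

def cosine_collector_response(radiance, theta_rad):
    """Transmitted intensity for light arriving at angle theta from the axis."""
    return radiance * max(0.0, math.cos(theta_rad))

on_axis = cosine_collector_response(1.0, 0.0)          # full response on axis
grazing = cosine_collector_response(1.0, math.pi / 2)  # ~zero at grazing incidence
```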
The first imaging data g (1) RGB (m, n) acquired according to the above procedure includes color information reflecting illumination light incident on the subject OBJ under the illumination environment.
Next, referring to FIG. 2B, the second imaging data g (2) RGB (m, n) is acquired by imaging the subject OBJ using the same imaging device 400 as in FIG. 2A. At this time, unlike the case of FIG. 2A, the diffusing member 402 is not disposed on the optical axis Ax2 of the imaging device 400. Note that the imaging device 400 used for acquiring the first imaging data g (1) RGB (m, n) and the imaging device 400 used for acquiring the second imaging data g (2) RGB (m, n) are not necessarily the same; different imaging devices 400 may be used as long as at least the spectral sensitivities of their imaging elements are substantially known.
It is preferable that the optical axis Ax1 of the imaging device 400 when capturing the first imaging data g (1) RGB (m, n) in FIG. 2A and the optical axis Ax2 of the imaging device 400 when capturing the second imaging data g (2) RGB (m, n) in FIG. 2B coincide with each other. The second imaging data g (2) RGB (m, n) acquired in FIG. 2B is determined mainly by the light reflected from the subject OBJ. This reflected light is reflected by the subject OBJ and propagates along the optical axis Ax2 in the reverse direction, and the illumination light that produces this reflected light mainly propagates toward the subject OBJ along the optical axis Ax2 of the imaging device 400. Therefore, by capturing the illumination light that produces this reflected light as the first imaging data g (1) RGB (m, n), a more appropriate spectral radiance of the illumination light can be calculated.
<Calculation processing of spectral radiance of illumination light>
With reference to FIG. 1 again, the calculation process of the spectral radiance E (1) (illumination spectrum) of the illumination light incident on the subject, executed by the illumination spectrum estimation unit 100, will be described. Note that the spectral radiance E (1) of the illumination light should originally be a continuous function of the wavelength λ, but in the present embodiment, discrete values obtained by sampling the visible light region (380 to 780 nanometers) at a predetermined wavelength interval (1 nanometer) are used as the spectral radiance E (1) . For the convenience of the matrix operations described below, the spectral radiance E (1) according to the present embodiment is treated as a 401-row à 401-column matrix containing a total of 401 luminance values at wavelengths λ = 380, 381, ..., 780. In the matrix representing the spectral radiance E (1) , the luminance value at each wavelength is set in the corresponding diagonal element, and zero is set in all elements other than the diagonal elements.
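The data layout just described (401 samples from 380 to 780 nm, stored on the diagonal of a 401×401 matrix) can be sketched as follows. The function name and the flat placeholder spectrum are illustrative assumptions.

```python
# Hedged sketch of the data layout described above: the sampled illuminant
# spectrum (380-780 nm at 1 nm steps, 401 samples) stored as a 401x401
# diagonal matrix. The sample values here are placeholders.

WAVELENGTHS = list(range(380, 781))  # 401 wavelength samples

def radiance_to_diagonal_matrix(samples):
    """Place each wavelength's luminance value on the diagonal; zeros elsewhere."""
    n = len(samples)
    return [[samples[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

E1 = radiance_to_diagonal_matrix([1.0] * len(WAVELENGTHS))
# E1 is 401x401 with the per-wavelength luminance values on the diagonal.
```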
The illumination spectrum estimation unit 100 includes an input unit 10, a spectral radiance calculation unit 11, an estimation matrix calculation unit 12, and a light source data storage unit 13.
As shown in FIG. 2A, the input unit 10 receives first imaging data g (1) RGB (m, n), obtained by imaging, with an imaging device 400, at least part of the light incident on the subject OBJ through the diffusing member 402 under the illumination environment. Based on the first imaging data g (1) RGB (m, n), the input unit 10 then outputs imaging data g (1) RGB that represents the first imaging data g (1) RGB (m, n). The imaging data g (1) RGB is linearized color data composed of three luminance values (representative values) of R, G, and B. As an example, the input unit 10 includes logic for averaging the luminance values contained in the first imaging data g (1) RGB (m, n): the luminance value of each pixel of the first imaging data g (1) RGB (m, n) is averaged separately for each of R, G, and B, and the averaged values (R, G, B) are output as the imaging data g (1) RGB .
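The per-channel averaging described above can be sketched as follows; the NumPy arrays and the (M, N, 3) pixel layout are assumptions for illustration, not details given in the text:

```python
import numpy as np

def average_imaging_data(g1_rgb):
    """Reduce linearized first imaging data g(1)RGB(m, n) to one
    representative (R, G, B) triple by averaging each channel
    separately over all pixels."""
    return g1_rgb.reshape(-1, 3).mean(axis=0)

# Hypothetical 2x2 block of linearized pixel values.
block = np.array([[[10.0, 20.0, 30.0], [12.0, 18.0, 30.0]],
                  [[10.0, 22.0, 30.0], [12.0, 20.0, 30.0]]])
print(average_imaging_data(block))  # -> [11. 20. 30.]
```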
If the first imaging data g (1) RGB (m, n) has an inverse gamma characteristic (non-linearity), the input unit 10 linearizes the first imaging data g (1) RGB (m, n) by performing processing that cancels the inverse gamma characteristic. In general, a display device has a non-linear relationship (gamma characteristic) between the input signal level and the actually displayed luminance level. So that an image adapted to human vision is displayed once this non-linearity is canceled on the display device, the imaging device 400 often outputs imaging data having the non-linearity opposite to the gamma characteristic of the display device (an inverse gamma characteristic). When the imaging data is given such an inverse gamma characteristic, the subsequent processing cannot be executed accurately; therefore, for example, the input unit 10 cancels the inverse gamma characteristic and generates linearized first imaging data g (1) RGB (m, n).
In general, the gamma characteristic and the inverse gamma characteristic can be expressed as a power function. For example, when the inverse gamma value in the imaging apparatus 400 is γc, the first imaging data g (1) RGB (m, n) can be linearized according to the following arithmetic expression.
g′ (1) RGB (m, n) = g (1) RGB (m, n) ^ (1 / γc)
Further, such a linearization process needs to be performed before the averaging process for calculating the above-described imaging data g (1) RGB is executed. On the other hand, if the pixel size of the image sensor constituting the imaging apparatus 400 is relatively large, the above linearization process requires a huge amount of calculation. For this reason, the above power calculation may be executed directly when the arithmetic processing capability of the input unit 10 is sufficiently high; when the arithmetic processing capability is limited, however, it is effective to use a lookup table (LUT: Look-Up Table). This lookup table is a data table in which the result of the above conversion formula is stored in advance in association with each of the luminance values that the input imaging data can take. Since the converted value can be obtained simply by referring to this input-output correspondence, the amount of calculation can be greatly reduced.
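The lookup-table approach can be sketched as follows; the 8-bit value range and the inverse-gamma value γc = 2.2 are assumptions for illustration, not values given in the text:

```python
import numpy as np

GAMMA_C = 2.2  # assumed inverse-gamma value of the imaging apparatus

# Precompute g' = g ** (1/GAMMA_C) for every luminance value an
# 8-bit sensor can output, so linearization becomes a table lookup.
levels = np.arange(256) / 255.0
LUT = levels ** (1.0 / GAMMA_C)

def linearize(g_8bit):
    """Linearize 8-bit imaging data via the precomputed table
    instead of evaluating the power function per pixel."""
    return LUT[g_8bit]

img = np.array([[0, 128, 255]], dtype=np.uint8)
print(linearize(img))
```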
The spectral radiance calculation unit 11 uses the first estimation matrix W (1), calculated by the estimation matrix calculation unit 12 described later, to calculate from the imaging data g (1) RGB the spectral radiance E (1) of the illumination light incident on the subject OBJ. More specifically, the spectral radiance calculation unit 11 calculates the spectral radiance E (1) of the illumination light as the matrix product of the first estimation matrix W (1) and the imaging data g (1) RGB . As described above, the present embodiment uses a 401-row × 401-column spectral radiance E (1) sampled at a predetermined wavelength width (typically 1 nanometer), so the first estimation matrix W (1) has the number of wavelength components as rows and the number of bands of the imaging device 400 as columns, that is, it is a 401-row × 3-column matrix.
The estimation matrix calculation unit 12 calculates the first estimation matrix W (1) based on the autocorrelation matrix B of the spectral radiances of the light source candidates that can be used to provide the illumination environment of the subject OBJ, the spectral transmittance f (1) of the diffusing member 402, and the spectral sensitivity S of the imaging device 400. Hereinafter, it is assumed that the spectral sensitivity S is a 401-row × 3-column matrix and the spectral transmittance f (1) is a 401-row × 1-column matrix.
Hereinafter, the principle by which the spectral radiance E (1) can be calculated from the first imaging data g (1) RGB (m, n) will be described.
The light (spectrum) that passes through the diffusing member 402 and enters the imaging device 400 corresponds to the product of the spectral radiance E (1) (λ) of the illumination light irradiating the diffusing member 402 (or the subject OBJ) and the spectral transmittance f (1) (λ) of the diffusing member 402. The spectral transmittance f (1) (λ) is assumed to be constant over the entire diffusing member 402. Each component value g (1) i (i = R, G, B) of the imaging data g (1) RGB representing the first imaging data output from the imaging device 400 then corresponds to further multiplying this product by the spectral sensitivity S i (λ) (i = R, G, B) of the corresponding image sensor and integrating the light energy over the wavelength region. This relationship can be expressed as the relational expression shown in Expression (1).
Here, n i (m, n) is additive noise generated by white noise or the like appearing in each image sensor, and is a value depending on the characteristics of the image sensor and the lens of the imaging apparatus 400, the illumination environment, and the like.
As described above, the present embodiment uses matrix arithmetic expressions sampled at a predetermined wavelength width (typically 1 nanometer). That is, the integral in the first term on the right side of Expression (1) is processed as a matrix operation among the spectral sensitivity S, a matrix indicating the sensitivity of each image sensor at each wavelength; the spectral radiance E (1), a matrix indicating the radiance at each wavelength; and the spectral transmittance f (1), a matrix indicating the transmittance at each wavelength. The spectral sensitivity S and the spectral transmittance f (1) are known in advance.
Here, the additive noise n i (m, n) is generally a sufficiently small value; ignoring it in Expression (1) yields the following matrix arithmetic expression.
g (1) = S t · E (1) · f (1) ... (2)
Consider calculating the spectral radiance E (1) based on the equation (2). Specifically, the spectral radiance E (1) of the illumination light is calculated according to the following equation (3).
E (1) = W (1) · g (1) ... (3)
In the equation (3), W (1) is the first estimation matrix. The first estimation matrix W (1) is calculated by the Wiener estimation method described below. Specifically, the first estimation matrix W (1) is derived as shown in equation (5) by transforming the equation (2) after defining the system matrix I as the following equation (4).
ããïŒïŒ³tÃïœïŒïŒïŒtãã»ã»ã»ïŒïŒïŒ
ããïŒïŒïŒïŒïŒ¢ã»ïŒ©ïœã»ïŒïŒ©ã»ïŒ¢ã»ïŒ©ïœïŒïŒïŒãã»ã»ã»ïŒïŒïŒ
ãããäœãããÃãã¯ãè¡åèŠçŽ å士ã®ç©ãæå³ãããïœãã¯ã転眮è¡åãæå³ãããïŒïŒãã¯ãéè¡åãæå³ããã I = S t à f (1) t (4)
W (1) = B · I t · (I · B · I t ) â1 (5)
However, âÃâ means a product of matrix elements, â t â means a transposed matrix, and â â1 â means an inverse matrix.
ããïŒïŒïŒïŒïŒ¢ã»ïŒ©ïœã»ïŒïŒ©ã»ïŒ¢ã»ïŒ©ïœïŒïŒïŒãã»ã»ã»ïŒïŒïŒ
ãããäœãããÃãã¯ãè¡åèŠçŽ å士ã®ç©ãæå³ãããïœãã¯ã転眮è¡åãæå³ãããïŒïŒãã¯ãéè¡åãæå³ããã I = S t à f (1) t (4)
W (1) = B · I t · (I · B · I t ) â1 (5)
However, âÃâ means a product of matrix elements, â t â means a transposed matrix, and â â1 â means an inverse matrix.
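Equations (3) to (5) can be sketched numerically as follows; the sensitivity, transmittance, and autocorrelation matrices here are synthetic placeholders, since the actual values come from device calibration and the light source data storage unit:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 401  # sampling wavelengths, 380-780 nm at 1 nm

S = rng.random((K, 3))   # spectral sensitivity (placeholder), 401 x 3
f1 = rng.random((K, 1))  # diffuser spectral transmittance (placeholder), 401 x 1
B = np.eye(K)            # autocorrelation matrix stand-in (see Eq. (6))

# Eq. (4): system matrix I = S^t x f(1)^t, an element-wise product with
# broadcasting: S^t is 3 x 401 and f(1)^t is 1 x 401, so I is 3 x 401.
I = S.T * f1.T

# Eq. (5): first estimation matrix W(1) = B . I^t . (I . B . I^t)^-1, 401 x 3.
W1 = B @ I.T @ np.linalg.inv(I @ B @ I.T)

# Eq. (3): spectral radiance estimate from one linearized (R, G, B) triple.
g1 = np.array([0.5, 0.4, 0.3])
E1 = W1 @ g1
print(W1.shape, E1.shape)  # (401, 3) (401,)
```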
In the equation (5), B is the autocorrelation matrix (hereinafter also referred to as the "calculation matrix") of the spectral radiances of the light source candidates that can be used to provide the illumination environment. In the present embodiment, the spectral radiances of a plurality of light source candidates are acquired in advance, and the spectral radiance E (1) of the illumination light is estimated, from a statistical viewpoint, by utilizing its correlation with the spectral radiance of each light source. That is, statistical data acquired for each type of light source is prepared in advance, and the spectral radiance E (1) of the illumination light is calculated according to the characteristics of this statistical data.
Since the autocorrelation matrix B of the spectral radiances serves as the reference for estimating the spectral radiance E (1) of the illumination light, it is preferable to use statistical data appropriate to the types of light sources likely to be used to provide the illumination environment (for example, classified by light emission principle: fluorescent lamps, incandescent lamps, xenon lamps, mercury lamps, and the like).
The spectral radiance of such a light source can be obtained experimentally in advance for each light source, or statistical data standardized by the International Commission on Illumination (CIE), ISO (International Organization for Standardization), or JIS (Japanese Industrial Standards) may be used.
FIG. 3 is a diagram for describing the process of generating the autocorrelation matrix B of spectral radiances according to the first embodiment of the present invention. Referring to FIG. 3, first, a light source group matrix E st whose elements are the spectral radiance values of one or more light source candidates (light source 1 to light source N) is created. That is, with e i (λ j ) denoting the component value (radiance) of light source i (1 ⩜ i ⩜ N) at each sampling wavelength λ j (1 ⩜ j ⩜ k), the light source group matrix E st is created by arranging the component values e i (λ j ) in the row direction.
Further, the autocorrelation matrix B is calculated from the group matrix E st according to the following arithmetic expression.
ããïŒïŒ¥ïœïœã»ïŒ¥ïœïœ
ïœãã»ã»ã»ïŒïŒïŒ
ããªããå¯èŠå é åïŒïŒïŒïŒïœïŒïŒïŒããã¡ãŒãã«ïŒãïŒããã¡ãŒã¿å¹ ã§ãµã³ããªã³ã°ããŠåŸãããåå æŸå°èŒåºŠïŒ¥ïŒïŒïŒãç®åºããããã«ã¯ãåããµã³ããªã³ã°ééïŒèŠçŽ æ°ïŒããã€çŸ€è¡åïœïœãçšããå¿ èŠããããåŸã£ãŠãïŒã€ã®å æºã®åå æŸå°èŒåºŠã瀺ãïŒïŒïŒè¡ÃïŒåã®è¡åãïœååã ãçµåãã矀è¡åïœïœã¯ãïŒïŒïŒè¡Ãïœåã®è¡åãšãªãããã®çŸ€è¡åïœïœã®èªå·±çžé¢è¡åã¯ãïŒïŒïŒè¡ÃïŒïŒïŒåã®è¡åãšãªãã B = E st · E st t (6)
In order to calculate the spectral radiance E (1) obtained by sampling the visible light region (380 to 780 nm) at a width of 1 nm, it is necessary to use a group matrix E st having the same sampling interval (number of elements). Accordingly, the group matrix E st, in which n matrices of 401 rows × 1 column each indicating the spectral radiance of one light source are combined, is a 401-row × n-column matrix, and the autocorrelation matrix B of this group matrix E st is a 401-row × 401-column matrix.
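A minimal sketch of Expression (6), using synthetic spectra in place of measured or standardized light source data:

```python
import numpy as np

rng = np.random.default_rng(1)
K, n = 401, 5  # 401 sampling wavelengths; 5 light source candidates (hypothetical)

# Each column is the 401-sample spectral radiance of one candidate light source.
E_st = rng.random((K, n))

# Eq. (6): autocorrelation matrix B = E_st . E_st^t, a 401 x 401 matrix.
B = E_st @ E_st.T
print(B.shape)  # (401, 401)
```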
As the spectral radiance of a light source, for example, the spectral radiance emitted by a single light source such as a fluorescent lamp or an incandescent lamp may be used, or the spectral radiance produced by combining a plurality of types of light sources may be used. Outdoors, the spectral radiance of sunlight or the like may further be combined. That is, in the present embodiment, to estimate the spectral radiance E (1) of the illumination light, it is preferable to use an autocorrelation matrix B obtained from the spectral radiances of light sources likely to be used to provide the illumination environment.
For details of the Wiener estimation, refer to the above-mentioned "Yoichi Miyake, 'Introduction to Spectral Image Processing', University of Tokyo Press, February 24, 2006".
Referring to FIG. 2 again, the light source data storage unit 13 stores in advance the autocorrelation matrix B (the calculation matrix) calculated by the above-described procedure.
The estimation matrix calculation unit 12 calculates the system matrix I according to the above equation (4), based on the spectral transmittance f (1) of the diffusing member 402 and the spectral sensitivity S of the imaging device 400, both stored in advance, and calculates the first estimation matrix W (1) according to the above equation (5), based on this system matrix I and the autocorrelation matrix B read from the light source data storage unit 13. Subsequently, the spectral radiance calculation unit 11 calculates the spectral radiance E (1) of the illumination light according to the above equation (3), based on the first estimation matrix W (1) from the estimation matrix calculation unit 12 and the imaging data g (1) RGB from the input unit 10.
The spectral radiance E (1) of the illumination light calculated in this way is used for the white balance calculation processing and the color reproduction processing described later.
<White balance calculation processing>
The illumination spectrum estimation unit 100 further includes a tristimulus value conversion unit 14, a coordinate conversion unit 15, and a white balance calculation unit 16. These units calculate the white balance of the imaging apparatus 400 based on the calculated spectral radiance E (1) of the illumination light. Based on this white balance value, white balance adjustment that mutually adjusts the levels of the R, G, and B luminance values output from the image sensors of the imaging apparatus 400 becomes possible.
The tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) defined over the wavelength region. The tristimulus values X, Y, and Z indicate the characteristic values that would be obtained if a human observed the spectral radiance E (1) under the illumination environment in which the subject OBJ is imaged. More specifically, the tristimulus values X, Y, and Z of the XYZ color system for the spectral radiance E (1) of the illumination light are given by Expression (7) below.
In the equation (7), h i (λ) (i = R, G, B) is a color matching function, which is a function corresponding to human visual sensitivity characteristics. This color matching function h i (λ) is defined by the International Commission on Illumination (CIE).
As described above, since the spectral radiance E (1) is a 401-row × 401-column matrix, the tristimulus value conversion unit 14 realizes the operation corresponding to Expression (7) by the matrix operation shown below.
Tristimulus values [X (1) , Y (1) , Z (1) ] = h t · E (1) ... (8)
Here, the matrix h is a 401-row × 3-column matrix whose elements are the values of the color matching functions h i (λ) at the respective sampling wavelengths.
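Since E (1) is diagonal, the matrix operation of Expression (8) reduces to weighting each radiance sample by the color matching functions and summing over wavelength. A sketch with synthetic stand-ins for the CIE color matching functions:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 401

h = rng.random((K, 3))  # color matching function values at each wavelength (stand-in)
e = rng.random(K)       # the 401 diagonal entries of the spectral radiance matrix E(1)

# Eq. (8): [X, Y, Z] = h^t . E(1); with E(1) diagonal this collapses to
# three wavelength-weighted sums over the radiance samples.
X, Y, Z = h.T @ e
print(X, Y, Z)
```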
Subsequently, the coordinate conversion unit 15 converts the tristimulus values X (1) , Y (1) , Z (1) into coordinate values R (1) , G (1) , B (1) defined in the RGB color system. More specifically, the coordinate conversion unit 15 calculates the coordinate values R (1) , G (1) , and B (1) defined in the RGB color system according to the arithmetic expressions shown below.
ããïŒïŒïŒïŒïœïŒïŒïŒžïŒïŒïŒïŒïœïŒïŒïŒ¹ïŒïŒïŒïŒïœïŒïŒïŒºïŒïŒïŒ
ããïŒïŒïŒïŒïœïŒïŒïŒžïŒïŒïŒïŒïœïŒïŒïŒ¹ïŒïŒïŒïŒïœïŒïŒïŒºïŒïŒïŒ
ããïŒïŒïŒïŒïœïŒïŒïŒžïŒïŒïŒïŒïœïŒïŒïŒ¹ïŒïŒïŒïŒïœïŒïŒïŒºïŒïŒïŒ
ãããã§ãïœïŒïŒïœïœïŒïŒã¯ã被åäœã®æž¬è²å€ïŒïŒžïŒ¹ïŒºè¡šè²ç³»ïŒãšãå®éã«æ®åè£ çœ®ã«èšé²ãããä¿¡å·å€ïŒïŒ²ïŒ§ïŒ¢è¡šè²ç³»ïŒãšã®å¯Ÿå¿é¢ä¿ãè¡šãïŒè¡ÃïŒåã®å€æè¡åã§ããããã®ãããªè¡åã®ããšãè¡šç€ºè£ çœ®ã«ãããâå€æè¡åãšåŒã¶ãïœïŒïŒïœïœïŒïŒãïŒè¡ÃïŒåã®è¡åãšããŠãšããããšãåŸè¿°ã®ïŒïŒïŒïŒåŒã«ãŠã
ããïœtã»ïŒ¥ïŒïŒïŒã»ïŒ·ïŒïŒïŒïŒïŒ
ãšããå Žåã®éè¡åïŒïŒïŒã«çžåœããããšã«ãªãã R (1) = a 11 X (1) + a 12 Y (1) + a 13 Z (1)
G (1) = a 21 X (1) + a 22 Y (1) + a 23 Z (1)
B (1) = a 31 X (1) + a 32 Y (1) + a 33 Z (1)
Here, a 11 to a 33 are the elements of a 3-row × 3-column conversion matrix representing the correspondence between the colorimetric values of the subject (XYZ color system) and the signal values actually recorded by the imaging apparatus (RGB color system). Such a matrix is called an RGB→XYZ conversion matrix in the display device. Viewing a 11 to a 33 as a 3 × 3 matrix, it corresponds to the inverse matrix M −1 of the matrix M defined in the later-described equation (15) as
h t · E (1) · W (2) = M.
ããïŒïŒïŒïŒïœïŒïŒïŒžïŒïŒïŒïŒïœïŒïŒïŒ¹ïŒïŒïŒïŒïœïŒïŒïŒºïŒïŒïŒ
ããïŒïŒïŒïŒïœïŒïŒïŒžïŒïŒïŒïŒïœïŒïŒïŒ¹ïŒïŒïŒïŒïœïŒïŒïŒºïŒïŒïŒ
ãããã§ãïœïŒïŒïœïœïŒïŒã¯ã被åäœã®æž¬è²å€ïŒïŒžïŒ¹ïŒºè¡šè²ç³»ïŒãšãå®éã«æ®åè£ çœ®ã«èšé²ãããä¿¡å·å€ïŒïŒ²ïŒ§ïŒ¢è¡šè²ç³»ïŒãšã®å¯Ÿå¿é¢ä¿ãè¡šãïŒè¡ÃïŒåã®å€æè¡åã§ããããã®ãããªè¡åã®ããšãè¡šç€ºè£ çœ®ã«ãããâå€æè¡åãšåŒã¶ãïœïŒïŒïœïœïŒïŒãïŒè¡ÃïŒåã®è¡åãšããŠãšããããšãåŸè¿°ã®ïŒïŒïŒïŒåŒã«ãŠã
ããïœtã»ïŒ¥ïŒïŒïŒã»ïŒ·ïŒïŒïŒïŒïŒ
ãšããå Žåã®éè¡åïŒïŒïŒã«çžåœããããšã«ãªãã R (1) = a 11 X (1) + a 12 Y (1) + a 13 Z (1)
G (1) = a 21 X (1) + a 22 Y (1) + a 23 Z (1)
B (1) = a 31 X (1) + a 32 Y (1) + a 33 Z (1)
Here, a 11 to a 33 are 3 rows à 3 columns representing the correspondence between the colorimetric values of the subject (XYZ color system) and the signal values (RGB color system) actually recorded in the imaging apparatus. Is the transformation matrix. Such a matrix is called an RGBâXYZ conversion matrix in the display device. If a 11 to a 33 are viewed as a 3 à 3 matrix, the following equation (15)
h t · E (1) · W (2) = M
This corresponds to the inverse matrix Mâ 1 .
Further, the white balance calculation unit 16 calculates the white balance of the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) . In general, white balance adjustment is complete when R (1) : G (1) : B (1) = 1 : 1 : 1 holds; when this ratio is not achieved, the white balance adjustment can be said to be insufficient. In such a case, the white balance is adjusted by independently adjusting the output gain of the image sensor of each color constituting the imaging apparatus 400. That is, the adjustment gains to be applied to the R, G, and B image sensors are 1 / R (1) : 1 / G (1) : 1 / B (1) .
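The gain ratio above can be sketched as follows; normalizing so that the G gain equals 1 is a common convention assumed here, not something the text prescribes:

```python
import numpy as np

def white_balance_gains(r1, g1, b1):
    """Adjustment gains 1/R(1) : 1/G(1) : 1/B(1) for the R, G, B
    image sensors, normalized to a G gain of 1 (assumed convention)."""
    gains = np.array([1.0 / r1, 1.0 / g1, 1.0 / b1])
    return gains / gains[1]

# Illumination estimated as reddish: R(1):G(1):B(1) = 1.25 : 1.0 : 0.8,
# so the R channel is attenuated and the B channel is boosted.
print(white_balance_gains(1.25, 1.0, 0.8))  # gains 0.8, 1.0, 1.25
```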
Therefore, the white balance calculation unit 16 outputs, as the white balance, the ratio of the coordinate values R (1) , G (1) , B (1) or its inverse ratio 1 / R (1) : 1 / G (1) : 1 / B (1) . The white balance output from the white balance calculation unit 16 may be used for manual gain adjustment by the user, or may be supplied to a gain adjustment unit (not shown) of the imaging apparatus 400 so that the gain adjustment unit automatically adjusts the gain of the imaging apparatus 400.
<Color reproduction processing>
Next, the process of reproducing the colors of the subject from the second imaging data g (2) RGB (m, n), using the spectral radiance E (1) calculated by the above-described processing, and generating image data g (OUT) RGB (m, n) will be described.
The color reproduction unit 200 includes an input unit 20, a spectral reflectance calculation unit 21, an estimation matrix calculation unit 22, a spectral reflectance data storage unit 23, an image data generation unit 24, and a coordinate conversion unit 25.
As illustrated in FIG. 2B, the input unit 20 receives the second imaging data g(2)RGB(m, n) obtained by imaging the subject OBJ using the imaging device 400. The input unit 20 then outputs the second imaging data g(2)RGB(m, n) to the spectral reflectance calculation unit 21 according to the processing.
Note that when the second imaging data g(2)RGB(m, n) has an inverse gamma characteristic (nonlinearity), the input unit 20 may, like the input unit 10 described above, perform processing to cancel this inverse gamma characteristic. That is, when the inverse gamma value of the imaging device 400 is γc, the second imaging data g(2)RGB(m, n) can be linearized according to the following arithmetic expression.

g′(2)RGB(m, n) = g(2)RGB(m, n)^(1/γc)

Furthermore, as described above, such a linearization process may be executed using a lookup table.
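The linearization and its lookup-table variant can be sketched as follows. Python/NumPy, the 8-bit input range, and the sample inverse gamma value γc = 2.2 are assumptions of this illustration:

```python
import numpy as np

GAMMA_C = 2.2  # assumed inverse-gamma value of the imaging device

def linearize(g, gamma_c=GAMMA_C):
    """g'(2)RGB = g(2)RGB ** (1/gamma_c), with 8-bit values scaled to [0, 1]."""
    return (np.asarray(g, dtype=float) / 255.0) ** (1.0 / gamma_c)

# Lookup-table variant: precompute the 256 possible results once,
# then linearize whole images by indexing instead of exponentiation.
LUT = (np.arange(256) / 255.0) ** (1.0 / GAMMA_C)

pixels = np.array([0, 64, 128, 255], dtype=np.uint8)
assert np.allclose(linearize(pixels), LUT[pixels])
```

Since an 8-bit channel takes only 256 values, the table lookup gives the same result at a small fraction of the arithmetic cost.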
The spectral reflectance calculation unit 21 calculates the spectral reflectance of the subject OBJ from the second imaging data g(2) using the second estimation matrix W(2) calculated by the estimation matrix calculation unit 22 described later. Further, the spectral reflectance calculation unit 21 outputs the image data g(OUT)RGB(m, n), which is color reproduction data of the subject OBJ under an arbitrary illumination environment. This color reproduction data reproduces, by arithmetic processing, how the subject OBJ would be observed under an arbitrary illumination environment, based on the spectral reflectance of the subject OBJ.
The estimation matrix calculation unit 22 calculates a second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400.
Hereinafter, the principle of generating the color-reproduced image data g(OUT)RGB(m, n) from the second imaging data g(2)RGB(m, n) will be described.
The light (spectrum) from the subject OBJ that enters the pixel at coordinates (m, n) of the imaging device 400 corresponds to the product of the spectral radiance E(1)(λ) of the illumination light irradiating the subject OBJ and the spectral reflectance f(2)(m, n; λ) of the subject OBJ at the position corresponding to that pixel. Each component value g(2)i(m, n) (i = R, G, B) of the second imaging data g(2)RGB(m, n) output from the imaging device 400 then corresponds to this product further multiplied by the spectral sensitivity Si(λ) (i = R, G, B) of the corresponding image sensor element and integrated, as light energy, over the wavelength region. Such a relationship can be expressed as the relational expression shown in Expression (9).
Here, ni(m, n) is additive noise caused by white noise or the like appearing in each image sensor element, and its value depends on the characteristics of the image sensor elements and lens of the imaging device 400, the illumination environment, and so on.
As described above, in the present embodiment, a matrix arithmetic expression sampled at a predetermined wavelength width (typically, a 1-nanometer width) is used. That is, the integral expression of the first term on the right side of Expression (9) is realized by a matrix operation of the spectral sensitivity S, a matrix indicating the spectral sensitivity of each image sensor element at each wavelength; the spectral radiance E(1), a matrix indicating the spectral radiance at each wavelength; and the spectral reflectance f(2)(m, n), a matrix indicating the spectral reflectance of the subject OBJ at each wavelength. Typically, when the visible light region (380 to 780 nanometers) is sampled at a 1-nanometer width, the spectral reflectance f(2)(m, n) becomes a matrix of 401 rows à 1 column for each pixel.
Here, since the additive noise ni(m, n) is generally a sufficiently small value, ignoring it in Expression (9) allows the following matrix arithmetic expression to be derived from Expression (9).
g(2)RGB(m, n) = S^t · E(1) · f(2)(m, n)   ... (10)

Consider calculating the spectral reflectance f(2)(m, n) based on Expression (10). Specifically, the spectral reflectance f(2)(m, n) of the subject OBJ is calculated according to Expression (11) below.
f(2)(m, n) = W(2) · g(2)RGB(m, n)   ... (11)

In Expression (11), W(2) is the second estimation matrix. The second estimation matrix W(2) is calculated by the Wiener estimation method, in the same manner as the first estimation matrix W(1) described above. Specifically, the second estimation matrix W(2) is derived as Expression (13) below by first defining the system matrix H as Expression (12) below and then transforming Expression (11).
ããïŒïŒ³ïœã»ïŒ¥ïŒïŒïŒãã»ã»ã»ïŒïŒïŒïŒ
ããïŒïŒïŒïŒïŒ¡ã»ïŒšïœã»ïŒïŒšã»ïŒ¡ã»ïŒšïœïŒïŒïŒãã»ã»ã»ïŒïŒïŒïŒ
ãããäœãããïœãã¯ã転眮è¡åãæå³ãããïŒïŒãã¯ãéè¡åãæå³ããã H = S t · E (1) (12)
W (2) = A · H t · (H · A · H t ) â1 (13)
However, â t â means a transposed matrix, and â â1 â means an inverse matrix.
ããïŒïŒïŒïŒïŒ¡ã»ïŒšïœã»ïŒïŒšã»ïŒ¡ã»ïŒšïœïŒïŒïŒãã»ã»ã»ïŒïŒïŒïŒ
ãããäœãããïœãã¯ã転眮è¡åãæå³ãããïŒïŒãã¯ãéè¡åãæå³ããã H = S t · E (1) (12)
W (2) = A · H t · (H · A · H t ) â1 (13)
However, â t â means a transposed matrix, and â â1 â means an inverse matrix.
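Expressions (12) and (13) can be sketched directly as matrix operations. NumPy, the tiny 5-sample wavelength grid (the text uses 401 samples), the treatment of E(1) as a diagonal matrix, and the random test data are assumptions of this illustration:

```python
import numpy as np

def wiener_estimation_matrix(S, E1, A):
    """Second estimation matrix per Expressions (12) and (13):
        H    = S^t . E(1)                 ... (12)
        W(2) = A . H^t . (H A H^t)^-1     ... (13)
    S  : spectral sensitivity, (wavelengths x 3) for R, G, B
    E1 : spectral radiance as a (wavelengths x wavelengths) diagonal matrix
    A  : autocorrelation matrix of candidate reflectances (wavelengths^2)
    """
    H = S.T @ E1                                   # (3 x wavelengths)
    return A @ H.T @ np.linalg.inv(H @ A @ H.T)    # (wavelengths x 3)

# Tiny synthetic example: 5 wavelength samples instead of 401.
rng = np.random.default_rng(0)
S = rng.random((5, 3))
E1 = np.diag(rng.random(5) + 0.5)
F = rng.random((5, 8))         # group matrix of 8 sample reflectances
A = F @ F.T / F.shape[1]       # autocorrelation of the samples
W2 = wiener_estimation_matrix(S, E1, A)
assert W2.shape == (5, 3)
```

Multiplying the three RGB values of a pixel by W(2) then yields a wavelength-sampled reflectance estimate, as in Expression (11).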
In Expression (13), A is an autocorrelation matrix calculated from the spectral reflectances of colors that can be included in the subject OBJ, and serves as a reference for estimating the spectral reflectance of the subject OBJ. As an example, the autocorrelation matrix A can be determined by referring to the Standard Object Colour Spectra (SOCS) database of spectral reflectances standardized by ISO. Alternatively, when the material of the subject OBJ is known in advance, the spectral reflectance of the subject OBJ itself may be measured in advance by another method to determine the autocorrelation matrix A.
This autocorrelation matrix A is generated by processing similar to the processing for generating the autocorrelation matrix B shown in FIG. 3. As the group matrix used to generate the autocorrelation matrix A, for example, the spectral reflectances of the respective colors of a color chart composed of a plurality of color samples can be used. In the present embodiment, the spectral radiance E(1) is a 401-row à 401-column matrix obtained by sampling the visible light region (380 to 780 nanometers) at a 1-nanometer width, so the autocorrelation matrix A is also a 401-row à 401-column matrix.
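The construction of such an autocorrelation matrix from a group matrix can be sketched as follows. NumPy, the division by the number of samples, and the synthetic 24-patch chart are assumptions of this illustration; the text only specifies that A is built from the group matrix of chart reflectances and ends up 401 à 401:

```python
import numpy as np

WAVELENGTHS = np.arange(380, 781)        # 380-780 nm at 1 nm -> 401 samples

def autocorrelation_matrix(group):
    """Autocorrelation matrix of a group matrix whose columns are the
    spectral reflectances of the color samples (one column per patch).
    Normalizing by the number of samples is an assumption; the patent
    only says A is built from the group matrix as in FIG. 3."""
    return group @ group.T / group.shape[1]

# Stand-in chart: 24 pseudo-reflectances over the visible region,
# clipped to the physically meaningful range [0, 1].
rng = np.random.default_rng(1)
chart = np.clip(rng.normal(0.5, 0.15, (WAVELENGTHS.size, 24)), 0.0, 1.0)
A = autocorrelation_matrix(chart)
assert A.shape == (401, 401)   # matches the dimensions given in the text
assert np.allclose(A, A.T)     # an autocorrelation matrix is symmetric
```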
The autocorrelation matrix A is stored in advance in the spectral reflectance data storage unit 23.

Further, a principal component analysis technique may be used instead of the Wiener estimation technique described above.
As described above, the estimation matrix calculation unit 22 calculates the second estimation matrix W(2) according to Expressions (12) and (13), based on the spectral radiance E(1) of the illumination light, the spectral sensitivity S of the imaging device 400, and the autocorrelation matrix A obtained from the spectral reflectances of colors that can be included in the subject OBJ. The spectral reflectance calculation unit 21 then uses this second estimation matrix W(2) to calculate the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2)RGB(m, n) according to Expression (11).
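Applying Expression (11) to every pixel of an image at once can be sketched as follows. NumPy, the image and matrix shapes, and the random data are assumptions of this illustration:

```python
import numpy as np

def estimate_reflectance(W2, image_rgb):
    """f(2)(m, n) = W(2) . g(2)RGB(m, n)  ... Expression (11),
    applied to every pixel at once: image_rgb has shape (M, N, 3) and
    the result has shape (M, N, L) with L wavelength samples."""
    return np.einsum('lk,mnk->mnl', W2, image_rgb)

# Assumed shapes: a 4x6 linearized image and a 5-wavelength W(2).
rng = np.random.default_rng(2)
W2 = rng.random((5, 3))
img = rng.random((4, 6, 3))
f2 = estimate_reflectance(W2, img)
assert f2.shape == (4, 6, 5)
# Spot-check one pixel against the plain matrix-vector product.
assert np.allclose(f2[1, 2], W2 @ img[1, 2])
```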
The spectral reflectance f(2)(m, n) calculated in this way captures the essence of the colors of the subject OBJ. By using the spectral reflectance f(2)(m, n), the colors of the subject OBJ can be reproduced no matter what illumination environment it is observed under.
That is, the tristimulus values X, Y, and Z of the XYZ color system obtained when a subject having a spectral reflectance f(m, n; λ) is observed under an arbitrary spectral radiance E(λ) are given by Expression (14) below.
In Expression (14), hi(λ) (i = R, G, B) is a color matching function, which corresponds to the visual sensitivity characteristics of the human eye.
In Expression (14), the spectral radiance E(λ) used for color reproduction can be determined arbitrarily; in the present embodiment, however, the case of performing color reproduction under the same illumination environment as when the subject OBJ was imaged is illustrated.
That is, the image data generation unit 24 uses the color matching function h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m, n) (= W(2) · g(2)RGB(m, n)) of the subject OBJ to generate the image data g(OUT)XYZ(m, n), in which the color of the subject OBJ is reproduced under the illumination environment having the spectral radiance E(1). That is, the image data generation unit 24 executes the arithmetic expression shown in Expression (15).
g(OUT)XYZ(m, n) = h^t · E(1) · W(2) · g(2)RGB(m, n)   ... (15)

Here, the image data g(OUT)XYZ(m, n) is defined as coordinate values of the XYZ color system.
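Expression (15) for a single pixel can be sketched as follows. NumPy, the 5-sample wavelength grid (401 in the text), and the diagonal form of E(1) are assumptions of this illustration:

```python
import numpy as np

def reproduce_xyz(h, E1, W2, g_rgb):
    """g(OUT)XYZ(m, n) = h^t . E(1) . W(2) . g(2)RGB(m, n)  ... (15)
    h     : color matching functions, (wavelengths x 3)
    E1    : spectral radiance as a (wavelengths x wavelengths) diagonal matrix
    W2    : second estimation matrix, (wavelengths x 3)
    g_rgb : one pixel of linearized second imaging data, shape (3,)
    """
    return h.T @ E1 @ W2 @ g_rgb   # tristimulus values X, Y, Z

rng = np.random.default_rng(3)
n = 5                              # wavelength samples
h = rng.random((n, 3))
E1 = np.diag(rng.random(n))
W2 = rng.random((n, 3))
xyz = reproduce_xyz(h, E1, W2, rng.random(3))
assert xyz.shape == (3,)
```

Substituting a different spectral radiance for E(1) in this product reproduces the subject under a different illumination environment, as noted later in the text.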
Subsequently, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m, n) into image data g(OUT)RGB(m, n) defined in the RGB color system. Since the coordinate conversion processing executed by the coordinate conversion unit 25 is the same as the processing in the coordinate conversion unit 15 described above, its detailed description will not be repeated.
Through the above processing, the image data g(OUT)RGB(m, n), which is color reproduction data of the subject OBJ, is generated from the second imaging data g(2)RGB(m, n).
When the image data g(OUT)RGB(m, n) is output to a display device or the like having a gamma characteristic, it is preferable to perform processing to cancel the gamma characteristic of the output destination. In that case, the coordinate conversion unit 25 may include processing for imparting a gamma characteristic. When the gamma value of the display device is γd, the processing for imparting the gamma characteristic is realized by a power calculation using the gamma value γd on the generated image data g(OUT)RGB(m, n). Note that, as with the input units 10 and 20 described above, the amount of calculation can be significantly reduced by using a lookup table (LUT).
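A lookup-table version of this gamma step can be sketched as follows. NumPy, the 8-bit range, the sample value γd = 2.2, and the use of the exponent 1/γd (pre-compensating for a display that applies γd) are assumptions of this illustration:

```python
import numpy as np

GAMMA_D = 2.2  # assumed gamma value of the display device

# Precompute the gamma curve once for all 256 8-bit code values, then
# encode whole images by table lookup instead of per-pixel exponentiation.
ENCODE_LUT = np.round(
    255.0 * (np.arange(256) / 255.0) ** (1.0 / GAMMA_D)
).astype(np.uint8)

def encode_for_display(image_u8):
    """Impart the display gamma characteristic to 8-bit linear image data."""
    return ENCODE_LUT[image_u8]

img = np.array([[0, 128, 255]], dtype=np.uint8)
out = encode_for_display(img)
assert out[0, 0] == 0 and out[0, 2] == 255  # endpoints are preserved
```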
The above description illustrated a configuration in which the image data generation unit 24 performs color reproduction under the same illumination environment as when the subject OBJ was imaged, but the illumination environment for color reproduction may be different. That is, the spectral radiance E used by the image data generation unit 24 to generate the image data g(OUT)XYZ(m, n) can be determined arbitrarily.
<Processing procedure>
The processing procedure in the image processing apparatus 1 according to the present embodiment is summarized as follows.
FIG. 4 is a flowchart showing the overall processing procedure in the image processing apparatus 1 according to the first embodiment of the present invention.
Referring to FIGS. 1 and 4, first, the input unit 10 receives the first imaging data g(1)RGB(m, n) obtained by imaging, through the diffusing member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates the imaging data g(1)RGB representing the received first imaging data g(1)RGB(m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
Next, the estimation matrix calculation unit 12 calculates the first estimation matrix W(1) based on the autocorrelation matrix B of the spectral radiances of the light source candidates that can be used to provide the illumination environment of the subject OBJ, the spectral transmittance f(1) of the diffusing member 402, and the spectral sensitivity S of the imaging device 400 (step S104). Subsequently, the spectral radiance calculation unit 11 uses the first estimation matrix W(1) calculated in step S104 to calculate, from the imaging data g(1)RGB, the spectral radiance E(1) of the illumination light incident on the subject OBJ (step S106).
Next, the tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E(1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z of the XYZ color system into the coordinate values R(1), G(1), and B(1) defined in the RGB color system (step S112). Further, the white balance calculation unit 16 calculates the white balance of the imaging device 400 based on the ratio of the coordinate values R(1), G(1), and B(1) (step S114).
Meanwhile, the input unit 20 receives the second imaging data g(2)RGB(m, n) obtained by imaging the subject OBJ with the imaging device 400 under the illumination environment (step S120). Note that the input unit 20 linearizes the second imaging data as necessary.
Next, the estimation matrix calculation unit 22 calculates the second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) calculated in step S122 to calculate the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2) (step S124). Further, the image data generation unit 24 uses the color matching function h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m, n) of the subject OBJ calculated in step S124 to generate the image data g(OUT)XYZ(m, n), in which the color of the subject OBJ is reproduced (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m, n) generated in step S126 into the image data g(OUT)RGB(m, n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)RGB(m, n).
<Operational effects of the present embodiment>
According to the first embodiment of the present invention, the spectral radiance of the illumination light irradiating the subject OBJ can be calculated using the imaging device for imaging the subject OBJ. Therefore, the spectral radiance can easily be obtained without using a dedicated measuring device for measuring the spectral radiance of the illumination light.
Furthermore, after the spectral reflectance of the subject OBJ is accurately estimated based on the spectral radiance of the illumination light calculated in this way, the color that would be imaged (observed) under the intended illumination environment can be reproduced appropriately.
In addition, since the white balance of the imaging device can be adjusted appropriately based on the spectral radiance of the illumination light, more accurate color reproduction can be realized without being affected by variations in the characteristics of the image sensor elements.
In the embodiment described above, a configuration was illustrated in which the three processes, "calculation of the spectral radiance of the illumination light," "white balance calculation," and "color reproduction," are realized by a single image processing apparatus. However, the problem addressed by the present invention can be solved by any apparatus capable of executing at least the "calculation of the spectral radiance of the illumination light."
ãå®æœã®åœ¢æ
ïŒïŒœ
ãäžè¿°ã®å®æœã®åœ¢æ ïŒã«ãããŠã¯ã被åäœïŒ¯ïŒ¢ïŒªã«ç §å°ãããç §æå ã®åå æŸå°èŒåºŠãæšå®ããããã«ïŒçš®é¡ã®èªå·±çžé¢è¡åãçšããæ§æã«ã€ããŠäŸç€ºãããäžæ¹ã§ãç §æå ã®åå æŸå°èŒåºŠïŒã¹ãã¯ãã«ïŒã¯ãå æºã®çš®é¡ã«ãã£ãŠå€§ããå€åããããšãç¥ãããŠãããããã¯ãå æºã®çºå åçãªã©ã«ãã£ãŠããã®çºå ã¹ãã¯ãã«ïŒèŒç·ã¹ãã¯ãã«ïŒã¯æ§ã ãªåºæã®ç¹æ§ãæããããã§ããããã®ããã被åäœïŒ¯ïŒ¢ïŒªãæ®åããç §æç°å¢ã«å¿ããŠãå æºã®çš®é¡æ¯ã«äºãçæããèªå·±çžé¢è¡åãéžæçã«çšããããšã奜ãŸããã [Embodiment 2]
In the first embodiment described above, the configuration in which one type of autocorrelation matrix B is used to estimate the spectral radiance of the illumination light applied to the subject OBJ has been illustrated. On the other hand, it is known that the spectral radiance (spectrum) of illumination light varies greatly depending on the type of light source. This is because the emission spectrum (bright line spectrum) has various unique characteristics depending on the emission principle of the light source. Therefore, it is preferable to selectively use the autocorrelation matrix B generated in advance for each type of light source according to the illumination environment in which the subject OBJ is imaged.
Therefore, the second embodiment illustrates a configuration in which a plurality of autocorrelation matrices, one for each of a plurality of light source types (categories), are stored, and the user can select the one suited to the illumination environment in which the subject OBJ is imaged.
<Overall configuration>
FIG. 5 is a functional configuration diagram of an image processing apparatus 1A according to the second embodiment of the present invention.
Referring to FIG. 5, the image processing apparatus 1A is obtained by providing, in the image processing apparatus 1 shown in FIG. 1, an illumination spectrum estimation unit 100A in place of the illumination spectrum estimation unit 100. The color reproduction unit 200, on the other hand, is the same as the color reproduction unit 200 of the image processing apparatus 1 shown in FIG. 1, and therefore its detailed description will not be repeated.
ãç
§æã¹ãã¯ãã«æšå®éšïŒïŒïŒïŒ¡ã¯ãå³ïŒã«ç€ºãç
§æã¹ãã¯ãã«æšå®éšïŒïŒïŒã«ãããŠãå
æºããŒã¿æ ŒçŽéšïŒïŒã«ä»£ããŠãå
æºããŒã¿æ ŒçŽéšïŒïŒïŒ¡ãèšãããã®ã§ããããã®ä»ã®éšäœã«ã€ããŠã¯ãå®æœã®åœ¢æ
ïŒãšåæ§ã§ããã®ã§ã詳现ãªèª¬æã¯ç¹°è¿ããªãã
The illumination spectrum estimation unit 100A includes a light source data storage unit 13A in place of the light source data storage unit 13 of the illumination spectrum estimation unit 100. The other parts are the same as in the first embodiment, and their detailed description will not be repeated.
The light source data storage unit 13A stores in advance autocorrelation matrices B1, B2, ..., BM, which are calculation matrices predetermined for each of M types of light source candidates that may provide the illumination environment. In response to an external command from a user or the like, the light source data storage unit 13A outputs the selected one of the stored autocorrelation matrices B1, B2, ..., BM to the estimation matrix calculation unit 12.
Hereinafter, a plurality of autocorrelation matrices stored in the light source data storage unit 13A will be described.
For example, the spectral radiance (spectrum) of a typical fluorescent lamp has peaks at the wavelengths corresponding to the emission-line spectrum of the mercury or other material enclosed in it. The spectral radiance (spectrum) of an incandescent lamp, by contrast, has no such peaks because of its emission principle. The spectral radiance (spectrum) of a light source candidate thus differs by type, and to estimate the spectral radiance E(1) (illumination spectrum) of the illumination light incident on the subject OBJ it is necessary to select an appropriate autocorrelation matrix B as the reference.
On the other hand, a user with a certain amount of prior knowledge can judge what kind of light source illuminates the scene when the subject OBJ is imaged with the imaging device 400: for example, whether the subject OBJ is imaged indoors or outdoors, and, if indoors, whether a fluorescent lamp or an incandescent lamp is used. Therefore, if a plurality of autocorrelation matrices are prepared in advance for each light source type, classified at a level the user can judge, and the user can freely select one according to the imaging conditions of the subject OBJ, the estimation accuracy of the spectral radiance E(1) (illumination spectrum) of the illumination light can be increased.
Therefore, as an example, the light source data storage unit 13A according to the present embodiment stores in advance a plurality of autocorrelation matrices B1, B2, ..., BM, one for each of the types "fluorescent lamp", "incandescent lamp", "xenon lamp", "mercury lamp", and "sunlight", and, in response to a selection command SEL from a user or the like, outputs the corresponding one to the estimation matrix calculation unit 12 as the autocorrelation matrix B. That is, the autocorrelation matrix B1 is generated only from statistical data of the spectral radiance of "fluorescent lamp", the autocorrelation matrix B2 only from statistical data of the spectral radiance of "incandescent lamp", and so on.
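As a small illustration of how such per-category matrices can be built, the sketch below forms B = (1/K)·Σₖ eₖeₖᵀ from K sample spectra for each category. The 5-band sample values, category keys, and helper function are hypothetical stand-ins; the patent itself works from measured statistical data of each light source type.

```python
import numpy as np

def autocorrelation_matrix(spectra):
    """B = (1/K) * sum_k e_k e_k^T over K sample spectra (rows), each with N bands."""
    E = np.asarray(spectra, dtype=float)   # shape (K, N)
    return E.T @ E / E.shape[0]            # shape (N, N)

# Hypothetical 5-band sample spectra per category (illustration only).
fluorescent_samples = [[0.2, 0.9, 0.3, 0.8, 0.1],   # peaky, emission-line-like
                       [0.1, 1.0, 0.2, 0.7, 0.1]]
incandescent_samples = [[0.1, 0.2, 0.4, 0.7, 1.0],  # smooth, rising toward red
                        [0.2, 0.3, 0.5, 0.8, 1.0]]

# One matrix per light-source category; the selection command picks a key.
B_by_source = {
    "fluorescent": autocorrelation_matrix(fluorescent_samples),
    "incandescent": autocorrelation_matrix(incandescent_samples),
}
B = B_by_source["fluorescent"]             # matrix handed to the estimation step
```

Each matrix is symmetric and positive semi-definite by construction, which is what the later estimation step relies on.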
The estimation matrix calculation unit 12 estimates the spectral radiance E(1) of the illumination light based on the autocorrelation matrix B received from the light source data storage unit 13A.
To increase the estimation accuracy of the spectral radiance E(1) of the illumination light, it is preferable to store in advance autocorrelation matrices for finer subdivisions of the light source types. Furthermore, the matrices need not be limited to single light sources: an autocorrelation matrix may also be generated from the spectral radiance produced when, for example, a "fluorescent lamp" and an "incandescent lamp" are combined. That is, it is preferable that the light source data storage unit 13A store in advance autocorrelation matrices generated from the various spectral radiances expected as illumination environments when the subject OBJ is imaged.
Other processes are basically the same as those in the first embodiment described above, and thus detailed description will not be repeated.
<Processing procedure>
FIG. 6 is a flowchart showing the overall processing procedure in the image processing apparatus 1A according to the second embodiment of the present invention. Of the steps in the flowchart shown in FIG. 6, those with the same contents as steps in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
With reference to FIGS. 5 and 6, first, the input unit 10 receives first imaging data g(1)RGB(m, n) obtained by imaging, through the diffusing member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates imaging data g(1)RGB representative of the received first imaging data g(1)RGB(m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
Next, of the autocorrelation matrices B1, B2, ..., BM stored in advance in the light source data storage unit 13A, the one designated by the selection command SEL is output to the estimation matrix calculation unit 12 as the autocorrelation matrix B (step S103). Thereafter, the estimation matrix calculation unit 12 calculates the first estimation matrix W(1) based on the autocorrelation matrix B from the light source data storage unit 13A, the spectral transmittance f(1) of the diffusing member 402, and the spectral sensitivity S of the imaging device 400 (step S104). Subsequently, the spectral radiance calculation unit 11 uses the first estimation matrix W(1) calculated in step S104 to calculate, from the imaging data g(1)RGB, the spectral radiance E(1) of the illumination light incident on the subject OBJ (step S106).
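The formula for the first estimation matrix is not reproduced in this passage. A common construction for this kind of estimator, shown below as an assumed stand-in, is the Wiener estimate built from the autocorrelation matrix B and a system matrix F that combines the camera's spectral sensitivity S with the diffuser's spectral transmittance f(1); all numeric values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 31                                             # wavelength samples (e.g. 400-700 nm)
S = np.clip(rng.normal(0.5, 0.2, (3, N)), 0, 1)    # RGB spectral sensitivity (synthetic)
f1 = np.full(N, 0.8)                               # diffuser spectral transmittance (synthetic)

F = S @ np.diag(f1)                                # camera response to diffused light, (3, N)

# Autocorrelation matrix B from sample illumination spectra (synthetic).
E_samples = np.abs(rng.normal(1.0, 0.3, (50, N)))
B = E_samples.T @ E_samples / 50

# Wiener-style first estimation matrix W(1): maps an RGB triple to a spectrum.
W1 = B @ F.T @ np.linalg.inv(F @ B @ F.T)          # (N, 3)

# Estimate the spectral radiance E(1) from first imaging data g(1).
g1 = F @ E_samples[0]
E_est = W1 @ g1
```

A Wiener estimate of this form always reproduces the observed camera response exactly (F·E_est = g1); its accuracy away from that 3-dimensional constraint depends on how well B matches the actual light source, which is exactly why the matrix is selected per category.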
Next, the tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E(1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z of the XYZ color system into the coordinate values R(1), G(1), and B(1) defined in the RGB color system (step S112). Further, the white balance calculation unit 16 calculates the white balance for the imaging device 400 based on the ratio of the coordinate values R(1), G(1), B(1) (step S114).
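Steps S110 to S114 can be sketched numerically as below. The Gaussian curves standing in for the XYZ color matching functions and the flat estimated spectrum are illustrative assumptions, and the XYZ-to-RGB matrix is the standard linear-sRGB one rather than anything the patent specifies.

```python
import numpy as np

N = 31
lam = np.linspace(400, 700, N)

# Crude Gaussian stand-ins for the XYZ color matching functions (assumption).
def gauss(mu, sig):
    return np.exp(-0.5 * ((lam - mu) / sig) ** 2)
cmf = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)])  # (3, N)

E1 = np.ones(N)                      # estimated spectral radiance E(1), flat for illustration
X, Y, Z = cmf @ E1                   # tristimulus values (step S110)

# XYZ -> linear sRGB coordinate values (step S112), standard matrix.
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])
R1, G1, B1 = M @ np.array([X, Y, Z])

# White-balance gains from the ratio of the coordinate values (step S114),
# normalized so the green gain is 1.
gains = G1 / np.array([R1, G1, B1])
```

Multiplying each camera channel by its gain would render this illumination as neutral, which is the role the white balance plays in the imaging device 400.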
On the other hand, the input unit 20 receives second imaging data g(2)RGB(m, n) obtained by imaging the subject OBJ with the imaging device 400 under the illumination environment (step S120). The input unit 20 linearizes the second imaging data as necessary.
Next, the estimation matrix calculation unit 22 calculates the second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that the subject OBJ may contain, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) calculated in step S122 to calculate the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2) (step S124). Further, the image data generation unit 24 generates image data g(OUT)XYZ(m, n) in which the colors of the subject OBJ are reproduced, using the color matching functions h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m, n) of the subject OBJ calculated in step S124 (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m, n) generated in step S126 into image data g(OUT)RGB(m, n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)RGB(m, n).
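The second estimation step can be sketched the same way. The Wiener-style form of W(2), the synthetic reflectance samples, sensitivities, and the stand-in matching functions h are assumptions for illustration, not the patent's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 31
S = np.clip(rng.normal(0.5, 0.2, (3, N)), 0, 1)    # camera spectral sensitivity (synthetic)
E1 = np.abs(rng.normal(1.0, 0.2, N))               # estimated illumination spectrum E(1)

# Autocorrelation matrix A of reflectances the subject may contain (synthetic samples).
R_samples = np.clip(rng.normal(0.5, 0.2, (60, N)), 0, 1)
A = R_samples.T @ R_samples / 60

# Second estimation matrix W(2) (step S122): RGB under E(1) -> reflectance.
H = S @ np.diag(E1)                                # response to a reflectance under E(1)
W2 = A @ H.T @ np.linalg.inv(H @ A @ H.T)

# One pixel of the second imaging data, reflectance estimate, and reproduction.
h = np.abs(rng.normal(0.5, 0.2, (3, N)))           # stand-in color matching functions
g2 = H @ R_samples[0]                              # second imaging data g(2) for one pixel
f2 = W2 @ g2                                       # spectral reflectance f(2) (step S124)
g_out_xyz = h @ (E1 * f2)                          # image data g(OUT)XYZ (step S126)
```

Note how the illumination estimate E(1) enters twice: inside the system matrix H used to invert the camera, and again when weighting the recovered reflectance for reproduction.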
<Operational effects of the present embodiment>
According to the second embodiment of the present invention, the same effects as those of the first embodiment described above can be obtained; in addition, by selecting an appropriate autocorrelation matrix according to the imaging conditions of the subject OBJ and the like, the spectral radiance of the illumination light can be estimated more accurately.
ãå®æœã®åœ¢æ
ïŒã®å€åœ¢äŸïŒœ
ãäžè¿°ã®å®æœã®åœ¢æ ïŒã«ãããŠã¯ããŠãŒã¶ãªã©ã«ããéžææ什ã«å¿ããŠãè€æ°ã®èªå·±çžé¢è¡åã®ãã¡ããããïŒã€ãéžæãããããã«ããã®éžæãããèªå·±çžé¢è¡åã«åºã¥ããŠã第ïŒæšå®è¡åïŒïŒïŒãçæãããæ§æã«ã€ããŠäŸç€ºããããã®ç¬¬ïŒæšå®è¡åïŒïŒïŒã®çæã«ã¯ãæ¡æ£éšæïŒïŒïŒã®åå ééçïœïŒïŒïŒããã³æ®åè£ çœ®ïŒïŒïŒã®åå æ床ãçšãããããããããã®å€ã¯ãæ®åè£ çœ®ïŒïŒïŒããã³æ¡æ£éšæïŒïŒïŒã亀æãããªãéãäžå€ã§ãããããã§ãå®æœã®åœ¢æ ïŒã®å€åœ¢äŸãšããŠãè€æ°ã®èªå·±çžé¢è¡åïŒïŒïŒ¢ïŒïŒã»ã»ã»ïŒïŒ¢ïŒããããããç®åºãããè€æ°ã®ç¬¬ïŒæšå®è¡åïŒïŒïŒ ïŒïŒïŒ·ïŒïŒïŒ ïŒïŒã»ã»ã»ïŒïŒ·ïŒïŒïŒ ïœãäºãç®åºããŠããæ§æã«ã€ããŠäŸç€ºããã [Modification of Embodiment 2]
In the second embodiment described above, a configuration was illustrated in which one of a plurality of autocorrelation matrices is selected in response to a selection command SEL from a user or the like, and the first estimation matrix W(1) is then generated based on the selected autocorrelation matrix. Generating the first estimation matrix W(1) uses the spectral transmittance f(1) of the diffusing member 402 and the spectral sensitivity S of the imaging device 400, and these values remain unchanged as long as the imaging device 400 and the diffusing member 402 are not replaced. Therefore, as a modification of the second embodiment, a configuration is illustrated in which a plurality of first estimation matrices W(1)1, W(1)2, ..., W(1)M, each calculated from one of the plurality of autocorrelation matrices B1, B2, ..., BM, are calculated in advance.
<Overall configuration>
FIG. 7 is a functional configuration diagram of an image processing apparatus 1B according to the modification of the second embodiment of the present invention.
Referring to FIG. 7, the image processing apparatus 1B includes an illumination spectrum estimation unit 100B in place of the illumination spectrum estimation unit 100 of the image processing apparatus 1 shown in FIG. 1. The color reproduction unit 200 is the same as that of the image processing apparatus 1 shown in FIG. 1, and therefore its detailed description will not be repeated.
The illumination spectrum estimation unit 100B includes an estimation matrix storage unit 17 in place of the estimation matrix calculation unit 12 and the light source data storage unit 13 of the illumination spectrum estimation unit 100. The other parts are the same as in the first embodiment, and their detailed description will not be repeated.
The estimation matrix storage unit 17 stores in advance first estimation matrices W(1)1, W(1)2, ..., W(1)M calculated beforehand based on the spectral radiances of a plurality of light source candidates that may provide the illumination environment. In response to an external command from a user or the like, the estimation matrix storage unit 17 outputs the selected one of the stored first estimation matrices W(1)1, W(1)2, ..., W(1)M to the spectral radiance calculation unit 11.
The first estimation matrices W(1)1, W(1)2, ..., W(1)M are calculated from the autocorrelation matrices B1, B2, ..., BM stored in the light source data storage unit 13A of the image processing apparatus 1A according to the second embodiment. In calculating W(1)1, W(1)2, ..., W(1)M, the known spectral transmittance f(1) of the diffusing member 402 and the known spectral sensitivity S of the imaging device 400 are used.
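Because f(1) and S stay fixed until the hardware is swapped, the whole bank W(1)1, ..., W(1)M can be computed once, as sketched below. The Wiener-style formula is again an assumed stand-in for the patent's construction, and all spectra are synthetic.

```python
import numpy as np

def first_estimation_matrix(B, f1, S):
    """Wiener-style first estimation matrix for one autocorrelation matrix B
    (an assumed stand-in for the patent's actual formula)."""
    F = S @ np.diag(f1)
    return B @ F.T @ np.linalg.inv(F @ B @ F.T)

rng = np.random.default_rng(2)
N = 31
S = np.clip(rng.normal(0.5, 0.2, (3, N)), 0, 1)  # fixed while the camera is unchanged
f1 = np.full(N, 0.8)                             # fixed while the diffuser is unchanged

# Precompute W(1)_1 ... W(1)_M, one per stored autocorrelation matrix B_m.
Bs = []
for _ in range(4):
    E = np.abs(rng.normal(1.0, 0.3, (40, N)))    # synthetic spectra for one category
    Bs.append(E.T @ E / 40)
W_bank = [first_estimation_matrix(B, f1, S) for B in Bs]

# At run time the selection command SEL reduces to a cheap lookup.
W1 = W_bank[2]                                   # e.g. the third category selected
```

The run-time cost drops from a matrix inversion per selection to an indexed lookup, which is the point of this modification.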
In the modification of the second embodiment, when at least one of the imaging device 400 and the diffusing member 402 is replaced, the first estimation matrices must be recalculated from the autocorrelation matrices B1, B2, ..., BM stored in the light source data storage unit 13A.
Other processes are basically the same as those in the first or second embodiment described above, and thus detailed description will not be repeated.
<Processing procedure>
FIG. 8 is a flowchart showing the overall processing procedure in the image processing apparatus 1B according to the modification of the second embodiment of the present invention. Of the steps in the flowchart shown in FIG. 8, those with the same contents as steps in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
With reference to FIGS. 7 and 8, first, the input unit 10 receives first imaging data g(1)RGB(m, n) obtained by imaging, through the diffusing member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates imaging data g(1)RGB representative of the received first imaging data g(1)RGB(m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
Next, of the first estimation matrices W(1)1, W(1)2, ..., W(1)M stored in advance in the estimation matrix storage unit 17, the one designated by the selection command SEL is output to the spectral radiance calculation unit 11 as the first estimation matrix W(1) (step S105). Subsequently, the spectral radiance calculation unit 11 uses the first estimation matrix W(1) selected in step S105 to calculate, from the imaging data g(1)RGB, the spectral radiance E(1) of the illumination light incident on the subject OBJ (step S106).
Next, the tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E(1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z of the XYZ color system into the coordinate values R(1), G(1), and B(1) defined in the RGB color system (step S112). Further, the white balance calculation unit 16 calculates the white balance for the imaging device 400 based on the ratio of the coordinate values R(1), G(1), B(1) (step S114).
On the other hand, the input unit 20 receives second imaging data g(2)RGB(m, n) obtained by imaging the subject OBJ with the imaging device 400 under the illumination environment (step S120). The input unit 20 linearizes the second imaging data as necessary.
Next, the estimation matrix calculation unit 22 calculates the second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that the subject OBJ may contain, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) calculated in step S122 to calculate the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2) (step S124). Further, the image data generation unit 24 generates image data g(OUT)XYZ(m, n) in which the colors of the subject OBJ are reproduced, using the color matching functions h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m, n) of the subject OBJ calculated in step S124 (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m, n) generated in step S126 into image data g(OUT)RGB(m, n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)RGB(m, n).
<Operational effects of the present embodiment>
According to the modification of the second embodiment of the present invention, the same effects as those of the first and second embodiments can be obtained; in addition, because the plurality of first estimation matrices W(1)1, W(1)2, ..., W(1)M are calculated in advance, the calculation process is simplified and can be further sped up.
ãå®æœã®åœ¢æ
ïŒïŒœ
ãäžè¿°ã®å®æœã®åœ¢æ ïŒã«ãããŠã¯ãå æºããŒã¿ã®çš®é¡æ¯ã«è€æ°ã®èªå·±çžé¢è¡åãäºãæ ŒçŽããŠãããä»»æã«éžæãããèªå·±çžé¢è¡åãçšããŠãç §æå ã®åå æŸå°èŒåºŠïŒã¹ãã¯ãã«ïŒã®æšå®ãè¡ãªãããæ§æã«ã€ããŠäŸç€ºãããäžæ¹ã以äžã«èª¬æããå®æœã®åœ¢æ ïŒã«ãããŠã¯ãè€æ°ã®èªå·±çžé¢è¡åã®åã ãçšããç §æå ã®åå æŸå°èŒåºŠã®æšå®çµæã«å¯ŸããŠè©äŸ¡ãè¡ãªã£ãäžã§ãæãé©åãªæšå®çµæãåºåããæ§æã«ã€ããŠäŸç€ºããã [Embodiment 3]
In Embodiment 2 described above, a configuration was illustrated in which a plurality of autocorrelation matrices, one per type of light source data, are stored in advance and the spectral radiance (spectrum) of the illumination light is estimated using an arbitrarily selected autocorrelation matrix. In Embodiment 3 described below, by contrast, a configuration is illustrated in which the estimation results of the spectral radiance of the illumination light obtained with each of the plurality of autocorrelation matrices are evaluated, and the most appropriate estimation result is output.
<Overall configuration>
The image processing apparatus according to the third embodiment of the present invention is the same as the image processing apparatus 1 according to the first embodiment shown in FIG. 1, except that an illumination spectrum estimation unit 100C is provided in place of the illumination spectrum estimation unit 100. The color reproduction unit 200 is the same as that of the image processing apparatus 1 shown in FIG. 1, and therefore its detailed description will not be repeated.
FIG. 9 is a functional configuration diagram of illumination spectrum estimation unit 100C of the image processing device according to the third embodiment of the present invention. The color reproduction unit 200 included in the image processing apparatus according to the present embodiment is not shown.
Referring to FIG. 9, the illumination spectrum estimation unit 100C includes an input unit 10, spectral radiance calculation units 11A, 11B, 11C, and 11D, a selection unit 18, an evaluation unit 19, a tristimulus value conversion unit 14, a coordinate conversion unit 15, and a white balance calculation unit 16. Of these, the input unit 10, the tristimulus value conversion unit 14, the coordinate conversion unit 15, and the white balance calculation unit 16 have been described in the first embodiment (FIG. 1), and therefore their detailed description will not be repeated.
Spectral radiance calculation units 11A, 11B, 11C, and 11D calculate first estimation matrices W (1) 1 , W ( based on spectral radiances of a plurality of light source candidates that can be used to provide an illumination environment. 1) Spectral radiance E (1) 1 , E (1) 2 , E of illumination light incident on subject OBJ from imaging data g (1) using 2 , W (1) 3 , W (1) 4 (1) 3 and E (1) Calculate 4 respectively. The first estimation matrix W (1) 1 , W (1) 2 , W (1) 3 , W (1) 4 is the first estimation matrix W (1) stored in the estimation matrix storage unit 17 shown in FIG. 1 , W (1) 2 , W (1) 3 , and W (1) 4 are substantially the same. That is, the first estimation matrix W (1) 1 , W (1) 2 , W (1) 3 , W (1) 4 is previously stored for each type of a plurality of light source candidates that can be used to provide an illumination environment. Based on the determined autocorrelation matrices B 1 , B 2 , B 3 , B 4 , the calculation is performed according to the same procedure as described above. In addition, although four types of 1st estimation matrix W (1) is illustrated in FIG. 9, as long as there are multiple 1st estimation matrices W (1) , the number is not restrict | limited. FIG. 9 illustrates a configuration in which the first estimation matrix W (1) 1 , W (1) 2 , W (1) 3 , W (1) 4 is calculated in advance. Like the illumination spectrum estimation unit 100A, these may be dynamically calculated for each calculation process.
In the present embodiment, as an example, the first estimation matrix W(1)_1 is calculated from an autocorrelation matrix B_1 created based on statistical data for fluorescent lamps, the first estimation matrix W(1)_2 is calculated from an autocorrelation matrix B_2 created based on statistical data for incandescent lamps, and the first estimation matrix W(1)_3 is calculated from an autocorrelation matrix B_3 created based on statistical data for xenon lamps. Furthermore, the first estimation matrix W(1)_4 is calculated from an autocorrelation matrix B_4 created based on statistical data including all of the fluorescent, incandescent, and xenon lamps.
The spectral radiance calculation units 11A, 11B, 11C, and 11D output the calculated spectral radiances E(1)_1, E(1)_2, E(1)_3, and E(1)_4 of the illumination light to the selection unit 18.
The selection unit 18 outputs one of the input spectral radiances E(1)_1, E(1)_2, E(1)_3, and E(1)_4 as the spectral radiance E(1) of the illumination light, according to the evaluation result from the evaluation unit 19 described below.
The evaluation unit 19 evaluates which of the spectral radiances E(1)_1, E(1)_2, E(1)_3, and E(1)_4 calculated by the spectral radiance calculation units 11A, 11B, 11C, and 11D is the most appropriately estimated. More specifically, the evaluation unit 19 evaluates the spectral radiances E(1)_1 to E(1)_4 of the illumination light by comparing them with predefined reference patterns.
In the present embodiment, as an example, reference patterns E(1)_1AVE, E(1)_2AVE, and E(1)_3AVE are used, each calculated from the spectral radiances (statistical or actually measured values) of the light sources used in calculating the corresponding first estimation matrix W(1)_1, W(1)_2, or W(1)_3 (or the corresponding autocorrelation matrix B_1, B_2, or B_3). More specifically, for example, the reference pattern E(1)_1AVE corresponding to the first estimation matrix W(1)_1 is calculated by averaging the elements of the light source group matrix E_st (see FIG. 3) from which the autocorrelation matrix B_1 used in calculating W(1)_1 was generated. That is, as shown in FIG. 3, when the group matrix E_st of light source candidates i consists of component values e_i(λ_j) (1 ⩜ i ⩜ N, 1 ⩜ j ⩜ k), and the component value of the corresponding reference pattern E(1)_AVE at each sampling wavelength λ_j (1 ⩜ j ⩜ k) is denoted e_AVE(λ_j), the following relationship holds.
e_AVE(λ_j) = {e_1(λ_j) + e_2(λ_j) + ... + e_N(λ_j)} / N
In the present embodiment, spectral radiances (spectra) representative of the fluorescent, incandescent, and xenon lamps are calculated in advance as reference patterns according to this calculation procedure. Note that a reference pattern corresponding to the first estimation matrix W(1)_4 need not necessarily be calculated. This is because the autocorrelation matrix B_4 corresponding to W(1)_4 is created based on statistical data including all of the fluorescent, incandescent, and xenon lamps; even if a reference pattern were created from B_4, the characteristics of each light source would be blurred, and its usefulness as a reference pattern would be diminished.
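The averaging step above can be sketched in a few lines. This is an illustrative helper, not code from the patent: it assumes the group matrix E_st is given as N rows (light source candidates) of k sampled radiance values.

```python
def reference_pattern(e_st):
    """Average the N light-source spectra in e_st (N rows x k columns)
    into one reference pattern, per e_AVE(lambda_j) above."""
    n = len(e_st)       # number of light source candidates N
    k = len(e_st[0])    # number of sampling wavelengths k
    return [sum(row[j] for row in e_st) / n for j in range(k)]
```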
Next, the method by which the evaluation unit 19 evaluates the spectral radiances E(1)_1, E(1)_2, E(1)_3, and E(1)_4 of the illumination light will be described with reference to FIGS. 10 and 11.
FIG. 10 is a diagram for explaining the comparison process performed by the evaluation unit 19 between the spectral radiances E(1)_1, E(1)_2, and E(1)_3 of the illumination light and the reference patterns E(1)_1AVE, E(1)_2AVE, and E(1)_3AVE. FIG. 11 is a diagram for explaining the similarity calculation process in FIG. 10.
Referring to FIG. 10, the evaluation unit 19 compares the spectral radiances E(1)_1, E(1)_2, and E(1)_3 of the illumination light calculated by the spectral radiance calculation units 11A, 11B, and 11C with the reference patterns E(1)_1AVE, E(1)_2AVE, and E(1)_3AVE, respectively, and calculates a comparison result (a similarity, as an example). Note that the spectral radiances E(1)_1, E(1)_2, and E(1)_3 and the reference patterns E(1)_1AVE, E(1)_2AVE, and E(1)_3AVE are all assumed to be normalized to the range 0 to 1.
As shown in FIG. 10(A), the reference pattern E(1)_1AVE, created based on the statistical data of fluorescent lamps, has peaks at specific wavelengths. On the other hand, as shown in FIG. 10(B), the reference pattern E(1)_2AVE, created based on the statistical data of incandescent lamps, has no peaks, and its radiance increases as the wavelength becomes longer. In addition, as shown in FIG. 10(C), the reference pattern E(1)_3AVE, created based on the statistical data of xenon lamps, has slight peaks and exhibits high radiance over almost the entire visible region.
The evaluation unit 19 evaluates how similar each spectral radiance is to the corresponding reference pattern. Typically, the evaluation unit 19 calculates the similarity based on the deviation between the two waveforms over the wavelength region.
FIG. 11 is a diagram for explaining the comparison process between the spectral radiance E(1)_1 of the illumination light calculated by the spectral radiance calculation unit 11A and the reference pattern E(1)_1AVE. FIG. 11(A) shows the spectral radiance E(1)_1 of the illumination light and the reference pattern E(1)_1AVE plotted over the same wavelength region, and FIG. 11(B) shows the process of calculating the deviation at each wavelength.
The evaluation unit 19 sequentially calculates the deviation (normalized value) err_j (1 ⩜ j ⩜ k) between the spectral radiance E(1)_1 of the illumination light and the reference pattern E(1)_1AVE at each sampling wavelength λ_j. Subsequently, the evaluation unit 19 calculates the evaluation result (similarity) as the overall average of the deviations err_j over all sampling wavelengths λ_j. That is, the similarity SM can be calculated by the following expression using the deviations err_j:
Similarity SM = {1 − (err_1 + err_2 + ... + err_k) / k} à 100 [%]
FIG. 10 also shows the similarities actually measured when the image processing method according to the present embodiment was performed under a fluorescent lamp illumination environment. As a result of this similarity calculation, the similarity of the spectral radiance E(1)_1 of the illumination light estimated based on the first estimation matrix W(1)_1 is the highest, so that spectral radiance E(1)_1 is output as the spectral radiance E(1). This evaluation result agrees with the fact that the measurement was actually made under a fluorescent lamp illumination environment.
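The similarity formula above can be sketched as follows. This is an illustrative helper with hypothetical names; taking each deviation err_j as the absolute difference between the two normalized spectra is an assumption here, as the patent only specifies a normalized deviation per wavelength.

```python
def similarity(spectrum, reference):
    """Deviation-based similarity SM in percent; both inputs are assumed
    normalized to the range 0..1 and sampled at the same k wavelengths."""
    k = len(spectrum)
    # err_j = |E(lambda_j) - E_AVE(lambda_j)|, averaged over all j
    total_err = sum(abs(e, ) if False else abs(e - r) for e, r in zip(spectrum, reference))
    return (1.0 - total_err / k) * 100.0
```

Identical spectra score 100%, and maximally different normalized spectra score 0%.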
Further, when the illumination environment of the subject OBJ is provided by a combination of fluorescent, incandescent, and xenon lamps, or by a light source other than these, it is conceivable that none of the similarities for the spectral radiances E(1)_1, E(1)_2, and E(1)_3 of the illumination light will be sufficiently high. In such a case, it may be appropriate to output, as the spectral radiance E(1) of the illumination light, the spectral radiance E(1)_4 estimated based on the first estimation matrix W(1)_4, which reflects the characteristics of all of the fluorescent, incandescent, and xenon lamps. Therefore, when the evaluation results (similarities) for the spectral radiances E(1)_1, E(1)_2, and E(1)_3 are all below the allowable value, the evaluation unit 19 outputs the spectral radiance E(1)_4 as the spectral radiance E(1) of the illumination light.
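The selection-with-fallback behavior described above can be sketched as follows; names and the allowable value are illustrative, not taken from the patent.

```python
def select_radiance(candidates, similarities, fallback, allowable=90.0):
    """Return the candidate spectral radiance with the highest similarity
    if it reaches the allowable value; otherwise fall back to the
    combined-statistics estimate (E(1)_4 in the text)."""
    best = max(range(len(similarities)), key=similarities.__getitem__)
    return candidates[best] if similarities[best] >= allowable else fallback
```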
In the above description, a configuration was illustrated in which the spectral radiance calculation units 11A, 11B, 11C, and 11D calculate the spectral radiances E(1)_1, E(1)_2, E(1)_3, and E(1)_4 of the illumination light in parallel and the similarities are then calculated; alternatively, the calculation of the spectral radiance of the illumination light and the calculation of the similarity may be executed sequentially for each first estimation matrix. In that case, as soon as the similarity for the spectral radiance estimated by one of the first estimation matrices is determined to exceed a predetermined value (for example, 95%), that spectral radiance can be output without performing the subsequent processing, so the processing can be simplified.
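The sequential, early-exit variant can be sketched as follows. This is a self-contained illustration with hypothetical names; the deviation-based score inside the loop is one possible reading of the similarity described earlier.

```python
def sequential_estimate(estimators, references, threshold=95.0):
    """Try one estimator (one first estimation matrix) at a time; return
    the first spectral radiance whose similarity to its reference pattern
    reaches the threshold, skipping all remaining work. Returns None if
    no candidate qualifies."""
    for estimate, ref in zip(estimators, references):
        radiance = estimate()
        k = len(radiance)
        mean_err = sum(abs(e - r) for e, r in zip(radiance, ref)) / k
        if (1.0 - mean_err) * 100.0 >= threshold:
            return radiance
    return None
```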
Further, it is not essential to estimate the spectral radiance E(1)_4 of the illumination light based on the first estimation matrix W(1)_4. That is, when the similarities for the spectral radiances E(1)_1, E(1)_2, and E(1)_3 of the illumination light are all below the allowable value, a result indicating that estimation is impossible may be output instead.
Also, instead of calculating the similarity based on the deviations described above, the similarity may be calculated using a correlation coefficient or the like.
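A correlation-based alternative could look like the following sketch; the Pearson correlation coefficient is one common choice, and the names are illustrative.

```python
def correlation_similarity(x, y):
    """Pearson correlation coefficient between two sampled spectra
    (range -1..1), an alternative similarity score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```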
Other processes are basically the same as those in the first embodiment described above, so their detailed description will not be repeated.
<Processing procedure>
The processing procedure in the image processing apparatus according to the present embodiment is summarized as follows.
FIG. 12 is a flowchart showing the overall processing procedure in the image processing apparatus according to the third embodiment of the present invention. Among the steps in the flowchart shown in FIG. 12, steps having the same contents as those in the flowchart shown in FIG. 4 are denoted by the same reference numerals. FIG. 13 is a flowchart showing the processing procedure of the evaluation subroutine in step S108 of FIG. 12.
Referring to FIGS. 9 and 12, first, the input unit 10 receives the first imaging data g(1)_RGB(m, n), obtained by imaging, through the diffusion member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates imaging data g(1)_RGB representative of the received first imaging data g(1)_RGB(m, n) (step S102). The input unit 10 linearizes the first imaging data as necessary.
Next, the spectral radiance calculation units 11A, 11B, 11C, and 11D calculate, using the first estimation matrices W(1)_1, W(1)_2, W(1)_3, and W(1)_4 respectively, the spectral radiances E(1)_1, E(1)_2, E(1)_3, and E(1)_4 of the illumination light incident on the subject OBJ from the imaging data g(1)_RGB (step S107). Subsequently, the evaluation unit 19 executes the evaluation subroutine and evaluates which of the spectral radiances E(1)_1 to E(1)_4 calculated in step S107 has the highest estimation accuracy (step S108). Further, the selection unit 18 outputs one of the spectral radiances E(1)_1 to E(1)_4 as the spectral radiance E(1) of the illumination light, according to the evaluation result in step S108 (step S109).
Next, the tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E(1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z of the XYZ color system into the coordinate values R(1), G(1), and B(1) defined in the RGB color system (step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging device 400 based on the ratio of the coordinate values R(1), G(1), and B(1) (step S114).
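Deriving white-balance gains from the ratio of the coordinate values can be sketched as follows. Normalizing the red and blue channels to the green channel is a common convention and an assumption here, not a detail stated in the patent.

```python
def white_balance_gains(r, g, b):
    """Per-channel gains from the coordinate values R(1), G(1), B(1) of
    the estimated illumination, normalized to the green channel."""
    return (g / r, 1.0, g / b)
```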
On the other hand, the input unit 20 receives the second imaging data g(2)_RGB(m, n) obtained by imaging the subject OBJ with the imaging device 400 under the illumination environment (step S120). The input unit 20 linearizes the second imaging data as necessary.
Next, the estimation matrix calculation unit 22 calculates the second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 calculates the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2) using the second estimation matrix W(2) calculated in step S122 (step S124). Further, the image data generation unit 24 generates image data g(OUT)_XYZ(m, n) in which the color of the subject OBJ is reproduced, using the color matching functions h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m, n) of the subject OBJ calculated in step S124 (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)_XYZ(m, n) generated in step S126 into image data g(OUT)_RGB(m, n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)_RGB(m, n).
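The reflectance-recovery step at one pixel reduces to a matrix-vector product, f(2) = W(2) g(2). The sketch below is illustrative (dimensions and names are assumptions): each of the k output wavelength samples is a weighted sum of the sensor's band values.

```python
def estimate_reflectance(w2, g_rgb):
    """Apply the second estimation matrix W(2) (k rows x 3 columns) to
    one pixel's imaging data g(2) (3 band values), yielding k sampled
    spectral reflectance values."""
    return [sum(w * g for w, g in zip(row, g_rgb)) for row in w2]
```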
Referring to FIG. 13, the evaluation unit 19 compares the spectral radiance E(1)_1 of the illumination light with the predetermined reference pattern E(1)_1AVE to calculate the similarity SM_1 between the two (step S200). Similarly, the evaluation unit 19 compares the spectral radiance E(1)_2 of the illumination light with the predetermined reference pattern E(1)_2AVE to calculate the similarity SM_2 between the two (step S202), and compares the spectral radiance E(1)_3 of the illumination light with the predetermined reference pattern E(1)_3AVE to calculate the similarity SM_3 between the two (step S204).
Subsequently, the evaluation unit 19 extracts the highest of the similarities SM_1, SM_2, and SM_3 calculated in steps S200, S202, and S204 (step S206). Further, the evaluation unit 19 determines whether the similarity extracted in step S206 is equal to or greater than a predetermined allowable value (step S208).
If the similarity is equal to or greater than the predetermined allowable value (YES in step S208), the evaluation unit 19 evaluates the spectral radiance corresponding to the similarity extracted in step S206 as having the highest estimation accuracy (step S210).
On the other hand, if the similarity is not equal to or greater than the predetermined allowable value (NO in step S208), the evaluation unit 19 evaluates the spectral radiance E(1)_4 of the illumination light, that is, the one other than E(1)_1, E(1)_2, and E(1)_3, as having the highest estimation accuracy (step S212).
Thereafter, the processing proceeds to step S109 in FIG. 12.
<Operational effects of the present embodiment>
According to the third embodiment of the present invention, the same operational effects as those of the first embodiment described above can be obtained, and in addition, even a user without any prior knowledge can obtain the spectral radiance of the illumination light with high estimation accuracy. Therefore, even when imaging of the subject OBJ is performed under various conditions, the estimation accuracy of the spectral radiance of the illumination light can be maintained.
ãå®æœã®åœ¢æ
ïŒïœïŒã®å€åœ¢äŸïŒœ
ãäžè¿°ã®å®æœã®åœ¢æ ïŒïœïŒã«ãããŠã¯ãäž»ãšããŠããŒããŠã§ã¢ã§æ§æãããç»ååŠçè£ çœ®ãçšããæ§æã«ã€ããŠäŸç€ºãããããã®å šéšãŸãã¯äžéšããœãããŠã§ã¢ã§å®çŸããŠããããããªãã¡ãã³ã³ãã¥ãŒã¿ãçšããŠãç»ååŠçè£ çœ®ã«ãããåŠçãå®çŸããŠãããã [Modifications ofEmbodiments 1 to 3]
In the above-described first to third embodiments, the configuration using the image processing apparatus mainly configured by hardware is exemplified, but all or a part thereof may be realized by software. That is, the processing in the image processing apparatus may be realized using a computer.
FIG. 14 is a schematic configuration diagram of a computer that realizes an image processing apparatus 1# according to a modification of the embodiments of the present invention.
Referring to FIG. 14, the computer includes a computer main body 150 equipped with an FD (Flexible Disk) driving device 166 and a CD-ROM (Compact Disk Read-Only Memory) driving device 168, a monitor 152, a keyboard 154, and a mouse 156.
The computer main body 150 further includes a CPU (Central Processing Unit) 160 that is an arithmetic device, a memory 162, a fixed disk 164 that is a storage device, and a communication interface 170, which are connected to one another via a bus.
ãé§åè£
眮ïŒïŒïŒã«ã¯ïŒŠïŒ€ïŒïŒïŒïœãè£
çãããïŒïŒ²ïŒ¯ïŒé§åè£
眮ïŒïŒïŒã«ã¯ïŒ£ïŒ€ïŒïŒ²ïŒ¯ïŒïŒïŒïŒïœãè£
çããããæ¬å®æœã®åœ¢æ
ã®å€åœ¢äŸã«åŸãç»ååŠçè£
眮ïŒïŒã¯ãïŒïŒïŒãã¡ã¢ãªïŒïŒïŒãªã©ã®ã³ã³ãã¥ãŒã¿ããŒããŠã§ã¢ãçšããŠããœãããŠã§ã¢ãå®è¡ããããšã§å®çŸã§ãããäžè¬çã«ããã®ãããªãœãããŠã§ã¢ã¯ãïŒïŒïŒïœãïŒïŒ²ïŒ¯ïŒïŒïŒïŒïœãªã©ã®èšé²åªäœã«æ ŒçŽãããŠããŸãã¯ãããã¯ãŒã¯ãªã©ãä»ããŠæµéããããããŠããã®ãããªãœãããŠã§ã¢ã¯ãé§åè£
眮ïŒïŒïŒãïŒïŒ²ïŒ¯ïŒé§åè£
眮ïŒïŒïŒãªã©ã«ããèšé²åªäœããèªåãããŠããŸãã¯éä¿¡ã€ã³ã¿ãŒãã§ãŒã¹ïŒïŒïŒã«ãŠåä¿¡ãããŠãåºå®ãã£ã¹ã¯ïŒïŒïŒã«æ ŒçŽããããããã«ãåºå®ãã£ã¹ã¯ïŒïŒïŒããã¡ã¢ãªïŒïŒïŒã«èªåºãããŠãïŒïŒïŒã«ããå®è¡ãããã
An FD 166a is mounted on the FD driving device 166, and a CD-ROM 168a is mounted on the CD-ROM driving device 168. The image processing apparatus 1# according to the modification of the present embodiments can be realized by the CPU 160 executing software using computer hardware such as the memory 162. In general, such software is stored in a recording medium such as the FD 166a or the CD-ROM 168a, or distributed via a network or the like. Such software is read from the recording medium by the FD driving device 166 or the CD-ROM driving device 168, or received by the communication interface 170, and stored in the fixed disk 164. It is then read from the fixed disk 164 into the memory 162 and executed by the CPU 160.
The monitor 152 is a display unit for displaying information output by the CPU 160, and includes, for example, an LCD (Liquid Crystal Display) or a CRT (Cathode Ray Tube). The mouse 156 receives commands from the user corresponding to operations such as a click or a slide. The keyboard 154 receives commands from the user corresponding to input keys. The CPU 160 is an arithmetic processing unit that performs various operations by sequentially executing programmed instructions. The memory 162 stores various types of information according to the program execution of the CPU 160. The communication interface 170 converts information output from the CPU 160 into, for example, electrical signals and sends them to other devices, and receives electrical signals from other devices and converts them into information that can be used by the CPU 160. The fixed disk 164 is a non-volatile storage device that stores programs executed by the CPU 160 and predetermined data. Other output devices such as a printer may be connected to the computer as necessary.
Furthermore, the program according to the present embodiments may be provided as a part of a computer operating system (OS), as a program module that calls necessary modules in a predetermined arrangement at a predetermined timing to execute processing. In that case, the program itself does not include those modules, and the processing is executed in cooperation with the OS. Such a program that does not include the modules can also be included in the program according to the present invention.
Further, the program according to the present embodiment may be provided by being incorporated in a part of another program. Even in this case, the program itself does not include the module included in the other program, and the process is executed in cooperation with the other program.
The embodiment disclosed this time should be considered as illustrative in all points and not restrictive. The scope of the present invention is defined by the terms of the claims, rather than the description above, and is intended to include any modifications within the scope and meaning equivalent to the terms of the claims.
Claims (14)
- An image processing apparatus (1) capable of performing image processing on imaging data captured by an imaging apparatus (400), comprising:
An input unit (10) that receives first imaging data obtained by imaging, using the imaging device, at least a part of light incident on a subject (OBJ) through a diffusing member (402) under an illumination environment; and
A first calculation unit (11, 12, 13) that calculates, from the first imaging data, a spectral radiance of illumination light incident on the subject, using a first estimation matrix calculated based on an autocorrelation matrix of spectral radiances of light source candidates that can be used to provide the illumination environment, a spectral transmittance of the diffusing member, and a spectral sensitivity of the imaging device. - The image processing apparatus according to claim 1, wherein the spectral radiance of the light source candidate is a characteristic value acquired in advance for each type of light source (300).
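The estimation step above has a standard Wiener-style linear form: the camera observation of the diffuser is g = S·diag(t)·E, and the first estimation matrix maps g back to an illumination spectrum using the candidates' autocorrelation matrix as a prior. A minimal noise-free sketch; all spectra below are random placeholders rather than measured characteristic values:

```python
import numpy as np

n_wl, n_ch = 61, 3                        # e.g. 400-700 nm in 5 nm steps; RGB channels
rng = np.random.default_rng(0)

# Hypothetical characteristic data (stand-ins for measured values)
candidates = rng.random((5, n_wl))        # spectral radiances of 5 light source candidates
A = candidates.T @ candidates             # autocorrelation matrix of the candidate radiances
t = np.clip(rng.random(n_wl), 0.1, 1.0)   # spectral transmittance of the diffusing member
S = rng.random((n_ch, n_wl))              # spectral sensitivity of the imaging device

H = S @ np.diag(t)                        # combined system matrix: g = H @ E
W = A @ H.T @ np.linalg.inv(H @ A @ H.T)  # first estimation matrix (noise-free Wiener form)

E_true = candidates[0]                    # suppose the scene is lit by candidate 0
g = H @ E_true                            # first imaging data (one pixel, 3 channels)
E_est = W @ g                             # estimated spectral radiance of the illumination
```

By construction H @ W is the identity, so the estimate exactly reproduces the 3-channel observation; the otherwise underdetermined spectrum is resolved by the candidates' autocorrelation prior.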
- The image processing apparatus according to claim 1, wherein the diffusing member is disposed on an optical axis (Ax1) of the imaging device, and an incident intensity at the diffusing member is indicated by a predetermined function value with respect to an angle from the optical axis. - The image processing device according to claim 3, wherein the function value is a cosine function of the angle with respect to the optical axis.
- The image processing apparatus according to claim 1, wherein the imaging device is configured to output, as the imaging data, coordinate values defined in an RGB color system, and the image processing apparatus further includes:
a second calculation unit (14, 15) that calculates coordinate values in the RGB color system corresponding to the spectral radiance of the illumination light, using the spectral radiance of the illumination light and color matching functions; and
a third calculation unit (16) that calculates a white balance in the imaging device based on a ratio of the coordinate values calculated by the second calculation unit. - The image processing apparatus according to claim 1, further comprising a fourth calculation unit (20, 21, 22) that calculates a spectral reflectance of the subject from second imaging data obtained by imaging the subject with the imaging device under the illumination environment, using a second estimation matrix calculated based on the spectral radiance of the illumination light, the spectral sensitivity of the imaging device, and an autocorrelation matrix of the spectral reflectances of colors that can be included in the subject.
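The white-balance path above (second and third calculation units) can be sketched as follows: projecting the estimated illumination spectrum onto color matching functions gives coordinate values, and their ratios give the balance gains. The matching functions below are random placeholders, not the CIE tables:

```python
import numpy as np

n_wl = 61
rng = np.random.default_rng(1)

E_est = rng.random(n_wl) + 0.5   # estimated illumination spectral radiance (hypothetical)
cmf = rng.random((3, n_wl))      # placeholder color matching functions (rows: R, G, B)

rgb = cmf @ E_est                # coordinate values of the illuminant (second calculation unit)
gains = rgb[1] / rgb             # white-balance gains from channel ratios (third calculation
                                 # unit), normalized so the green-channel gain is 1
```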
- The image processing apparatus according to claim 6, further comprising a generation unit (24, 25) that generates, based on the spectral reflectance of the subject calculated by the fourth calculation unit, image data to be acquired when the subject is imaged under a predetermined illumination environment.
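The reflectance-estimation and relighting elements above (fourth calculation unit and generation unit) can be illustrated together: a second estimation matrix recovers the subject's spectral reflectance, and relighting renders the subject under a chosen illuminant. A sketch under the same noise-free assumptions, with hypothetical spectra:

```python
import numpy as np

n_wl, n_ch = 61, 3
rng = np.random.default_rng(2)

S = rng.random((n_ch, n_wl))               # spectral sensitivity of the imaging device
E = rng.random(n_wl) + 0.5                 # estimated illumination spectral radiance
R_samples = rng.random((24, n_wl))         # reflectances of colors the subject may contain
B = R_samples.T @ R_samples                # autocorrelation matrix of spectral reflectance

F = S @ np.diag(E)                         # per-pixel model for the subject: g2 = F @ r
W2 = B @ F.T @ np.linalg.inv(F @ B @ F.T)  # second estimation matrix

r_true = R_samples[0]                      # one surface on the subject
g2 = F @ r_true                            # second imaging data (one pixel)
r_est = W2 @ g2                            # estimated spectral reflectance of the subject

E_new = np.ones(n_wl)                      # a chosen target illuminant (flat spectrum)
g_relit = S @ (E_new * r_est)              # image data under the predetermined illumination
```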
- An image processing apparatus (1A) capable of performing image processing on imaging data captured by an imaging apparatus (400), comprising:
An input unit (10) that receives first imaging data obtained by imaging, using the imaging device, at least a part of light incident on a subject (OBJ) through a diffusing member (402) under an illumination environment;
A selection unit (13A) that selects, in accordance with an external command, one of a plurality of calculation matrices predetermined for each type of light source candidate that can be used to provide the illumination environment; and
A first calculation unit (11, 12) that calculates, from the first imaging data, a spectral radiance of illumination light incident on the subject, using a first estimation matrix calculated based on the calculation matrix selected by the selection unit, a spectral transmittance of the diffusing member, and a spectral sensitivity of the imaging device,
Each of the calculation matrices is an autocorrelation matrix of a matrix indicating a spectral radiance of the light source candidate. - ãæ®åè£ çœ®ïŒïŒïŒïŒïŒã«ãã£ãŠæ®åãããæ®åããŒã¿ã«å¯Ÿããç»ååŠçãå¯èœãªç»ååŠçè£ çœ®ïŒïŒïŒ¢ïŒã§ãã£ãŠã
- An image processing device (1B) capable of performing image processing on imaging data captured by an imaging device (400), comprising:
An input unit (10) that receives first imaging data obtained by imaging, using the imaging device, at least a part of light incident on a subject (OBJ) through a diffusing member (402) under an illumination environment;
A selection unit (17) that selects, in accordance with an external command, one of a plurality of first estimation matrices calculated in advance based on spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and
A first calculation unit (11) that calculates a spectral radiance of illumination light incident on the subject from the first imaging data using the first estimation matrix selected by the selection unit;
The first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating spectral radiance of a corresponding light source candidate, a spectral transmittance of the diffusing member, and a spectral sensitivity of the imaging device. apparatus. - ãæ®åè£ çœ®ïŒïŒïŒïŒïŒã«ãã£ãŠæ®åãããæ®åããŒã¿ã«å¯Ÿããç»ååŠçãå¯èœãªç»ååŠçè£ çœ®ã§ãã£ãŠã
- An image processing apparatus capable of performing image processing on imaging data captured by an imaging apparatus (400), comprising:
An input unit (10) that receives first imaging data obtained by imaging, using the imaging device, at least a part of light incident on a subject (OBJ) through a diffusing member (402) under an illumination environment;
A first calculation unit (11A, 11B, 11C, 11D) that calculates, from the first imaging data, candidates for the spectral radiance of illumination light incident on the subject, using a plurality of first estimation matrices respectively calculated based on spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and
An evaluation unit (18, 19) that evaluates each of the calculated spectral radiance candidates by comparison with a predetermined reference pattern and outputs one of them as the spectral radiance of the illumination light in the illumination environment,
The first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating spectral radiance of a corresponding light source candidate, a spectral transmittance of the diffusing member, and a spectral sensitivity of the imaging device. apparatus. - ãæ®åè£ çœ®ïŒïŒïŒïŒïŒãçšããŠãç §æç°å¢äžã«ãããŠè¢«åäœïŒïŒ¯ïŒ¢ïŒªïŒã«å ¥å°ããå ã®å°ãªããšãäžéšãæ¡æ£éšæïŒïŒïŒïŒïŒãä»ããŠæ®åããããšã§ç¬¬ïŒæ®åããŒã¿ãååŸããã¹ãããïŒïŒ³ïŒïŒïŒïŒïŒ³ïŒïŒïŒïŒãšã
- An image processing method comprising: obtaining first imaging data (S100, S102) by imaging, using an imaging device (400), at least a part of light incident on a subject (OBJ) through a diffusion member (402) under an illumination environment; and
calculating (S104, S106) a spectral radiance of illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on an autocorrelation matrix of spectral radiances of light source candidates that can be used to provide the illumination environment, a spectral transmittance of the diffusion member, and a spectral sensitivity of the imaging device.
- An image processing method comprising: obtaining first imaging data (S100, S102) by imaging, using an imaging device (400), at least a part of light incident on a subject (OBJ) through a diffusion member (402) under an illumination environment;
selecting (S103) one of a plurality of calculation matrices predetermined for each type of light source candidate that can be used to provide the illumination environment; and
calculating (S104, S106) a spectral radiance of illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the selected calculation matrix, a spectral transmittance of the diffusion member, and a spectral sensitivity of the imaging device,
Each of the calculation matrices is an image processing method, which is an autocorrelation matrix of matrices indicating spectral radiances of light source candidates. - ãæ®åè£ çœ®ïŒïŒïŒïŒïŒãçšããŠãç §æç°å¢äžã«ãããŠè¢«åäœïŒïŒ¯ïŒ¢ïŒªïŒã«å ¥å°ããå ã®å°ãªããšãäžéšãæ¡æ£éšæïŒïŒïŒïŒïŒãä»ããŠæ®åããããšã§ç¬¬ïŒæ®åããŒã¿ãååŸããã¹ãããïŒïŒ³ïŒïŒïŒïŒïŒ³ïŒïŒïŒïŒãšã
- An image processing method comprising: obtaining first imaging data (S100, S102) by imaging, using an imaging device (400), at least a part of light incident on a subject (OBJ) through a diffusion member (402) under an illumination environment;
selecting (S105) one of a plurality of first estimation matrices calculated in advance based on spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and
calculating (S106) a spectral radiance of illumination light incident on the subject from the first imaging data, using the selected first estimation matrix,
wherein the first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate, a spectral transmittance of the diffusion member, and a spectral sensitivity of the imaging device.
- An image processing method comprising: obtaining first imaging data (S100, S102) by imaging, using an imaging device (400), at least a part of light incident on a subject (OBJ) through a diffusion member (402) under an illumination environment;
calculating (S107), from the first imaging data, candidates for the spectral radiance of illumination light incident on the subject, using a plurality of first estimation matrices respectively calculated based on spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and
evaluating (S108, S109) each of the calculated spectral radiance candidates by comparison with a predetermined reference pattern, and outputting one of them as the spectral radiance of the illumination light in the illumination environment,
wherein each first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate, a spectral transmittance of the diffusion member, and a spectral sensitivity of the imaging device.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008021394A JP2009182845A (en) | 2008-01-31 | 2008-01-31 | Apparatus and method for processing image |
JP2008021395A JP5120936B2 (en) | 2008-01-31 | 2008-01-31 | Image processing apparatus and image processing method |
JP2008-021394 | 2008-08-21 | ||
JP2008-021395 | 2008-08-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009096232A1 true WO2009096232A1 (en) | 2009-08-06 |
Family
ID=40912585
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/050453 WO2009096232A1 (en) | 2008-01-31 | 2009-01-15 | Image processing device and image processing method |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2009096232A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002344979A (en) * | 2001-05-21 | 2002-11-29 | Minolta Co Ltd | Digital image pickup device, illuminance component acquisition method, program and recording medium |
JP2005202673A (en) * | 2004-01-15 | 2005-07-28 | Kddi Corp | Image recognition device |
-
2009
- 2009-01-15 WO PCT/JP2009/050453 patent/WO2009096232A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002344979A (en) * | 2001-05-21 | 2002-11-29 | Minolta Co Ltd | Digital image pickup device, illuminance component acquisition method, program and recording medium |
JP2005202673A (en) * | 2004-01-15 | 2005-07-28 | Kddi Corp | Image recognition device |
Non-Patent Citations (3)
Title |
---|
PRATT W.K. ET AL.: "Spectral estimation techniques for the spectral calibration of a color image scanner", APPLIED OPTICS, vol. 15, no. 1, January 1976 (1976-01-01), pages 73 - 75 *
TOMINAGA M.: "A Technique for Multi-band Imaging and Its Application to Vision", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 47, no. SIG5 (CVIM 13), 15 March 2006 (2006-03-15), pages 20 - 34 *
UCHIYAMA T. ET AL.: "Capture of natural illumination environments and spectral-based image synthesis", IEICE TECHNICAL REPORT, vol. 105, no. 535, 12 January 2006 (2006-01-12), pages 7 - 12 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR100278642B1 (en) | Color image processing apparatus and method | |
US7436997B2 (en) | Light source estimating device, light source estimating method, and imaging device and image processing method | |
US10168215B2 (en) | Color measurement apparatus and color information processing apparatus | |
US10514335B2 (en) | Systems and methods for optical spectrometer calibration | |
US9076068B2 (en) | Method and apparatus for evaluating color in an image | |
JP4967440B2 (en) | Imaging apparatus and light source estimation apparatus thereof | |
US7616314B2 (en) | Methods and apparatuses for determining a color calibration for different spectral light inputs in an imaging apparatus measurement | |
US20090294640A1 (en) | System for capturing graphical images using hyperspectral illumination | |
US7457000B2 (en) | Image input system, conversion matrix calculating method, and computer software product | |
JP2020012668A (en) | Evaluation device, measurement device, evaluation method and evaluation program | |
JP6969164B2 (en) | Evaluation device, evaluation program and evaluation method | |
JP2021113744A (en) | Imaging system | |
JP5841091B2 (en) | Image color distribution inspection apparatus and image color distribution inspection method | |
JP6113319B2 (en) | Image color distribution inspection apparatus and image color distribution inspection method | |
JP5120936B2 (en) | Image processing apparatus and image processing method | |
JP2010139324A (en) | Color irregularity measuring method and color irregularity measuring device | |
JP2009182845A (en) | Apparatus and method for processing image | |
US11825211B2 (en) | Method of color inspection by using monochrome imaging with multiple wavelengths of light | |
WO2009096232A1 (en) | Image processing device and image processing method | |
JP2003337067A (en) | Spectrophotometry system and color reproduction system | |
JP2009188807A (en) | Imaging method and imaging system | |
JP4588076B2 (en) | Imaging method and imaging system | |
JP3577977B2 (en) | Illumination light spectral characteristic estimation device | |
JP2021103835A (en) | Method of quantifying color of object, signal processing device, and imaging system | |
JP2001186540A (en) | Colorimetry conversion coefficient calculation method, colorimetry image conversion method, and colorimetry conversion coefficient calculation device and colorimetry image converter, and computer-readable information recording medium for recording colorimetry conversion coefficient calculation program or colorimetry image pickup program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09706560 Country of ref document: EP Kind code of ref document: A1 |
DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 09706560 Country of ref document: EP Kind code of ref document: A1 |