
WO2009096232A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method Download PDF

Info

Publication number
WO2009096232A1
WO2009096232A1 (PCT/JP2009/050453)
Authority
WO
WIPO (PCT)
Prior art keywords
spectral
imaging
matrix
illumination
subject
Prior art date
Application number
PCT/JP2009/050453
Other languages
French (fr)
Japanese (ja)
Inventor
Koichi Sugiyama
Original Assignee
Sharp Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2008021394A external-priority patent/JP2009182845A/en
Priority claimed from JP2008021395A external-priority patent/JP5120936B2/en
Application filed by Sharp Kabushiki Kaisha filed Critical Sharp Kabushiki Kaisha
Publication of WO2009096232A1 publication Critical patent/WO2009096232A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B33/00Colour photography, other than mere exposure or projection of a colour film
    • G03B33/06Colour photography, other than mere exposure or projection of a colour film by additive-colour projection apparatus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/46Measurement of colour; Colour measuring devices, e.g. colorimeters
    • G01J2003/467Colour computing

Definitions

  • the present invention relates to an image processing apparatus and an image processing method capable of calculating the spectral radiance of illumination light applied to a subject when the subject is imaged.
  • a color management technique based on the spectral reflectance (reflection spectrum) of a subject is known. This technique is realized by handling the color of the subject in the wavelength region, and enables accurate color reproduction regardless of the illumination environment of the subject.
  • An imaging method based on the spectral reflectance of such a subject is disclosed in Yoichi Miyake, "Introduction to Spectral Image Processing", The University of Tokyo Press, February 24, 2006.
  • This is because the spectral radiance from the subject is determined by the spectral radiance of the illumination light and the spectral reflectance of the subject; if the spectral radiance of the illumination light is not known, the spectral reflectance of the subject cannot be accurately calculated from the imaging data obtained by imaging the subject.
  • the spectral radiance of the illumination light is also used for white balance adjustment of the imaging device.
  • This white balance adjustment is an operation for determining a coefficient for mutually adjusting the levels of luminance values output from a plurality of image sensors constituting the imaging apparatus.
  • white balance adjustment is disclosed in Japanese Patent Application Laid-Open Nos. 2001-057680 and 2005-328386.
  • Japanese Patent Laid-Open No. 2001-056780; Japanese Patent Laid-Open No. 2005-328386; Yoichi Miyake, "Introduction to Spectral Image Processing", The University of Tokyo Press, February 24, 2006.
  • the spectral radiance of illumination light has been measured exclusively using a dedicated measuring device such as a spectroradiometer.
  • A spectroradiometer disperses light incident through an optical system with a diffraction grating (grating) and receives it with an image sensor (CCD (Charge Coupled Device), CMOS (Complementary Metal Oxide Semiconductor), or the like) to obtain a luminance value for each wavelength. Carrying out such a measurement therefore requires a dedicated measuring device separate from the imaging device used to image the subject.
  • The present invention has been made to solve such a problem, and an object of the present invention is to provide an image processing apparatus and an image processing method that can easily calculate the spectral radiance of the illumination light applied to the subject using an imaging device for imaging the subject.
  • According to an aspect of the present invention, there is provided an image processing device capable of performing image processing on imaging data captured by an imaging device.
  • The image processing apparatus includes an input unit that receives first imaging data obtained by imaging, with an imaging device, at least a part of the light incident on a subject through a diffusing member under an illumination environment, and a first calculation unit that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the autocorrelation matrix of the spectral radiances of light source candidates that can be used to provide the illumination environment, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
  • the spectral radiance of the light source candidate is a characteristic value acquired in advance for each type of light source.
  • the diffusing member is disposed on the optical axis of the imaging apparatus, and the incident intensity of the diffusing member is indicated by a predetermined function value with respect to an angle with respect to the optical axis.
  • the function value is a cosine function with respect to an angle with respect to the optical axis.
  • the imaging device is configured to output coordinate values defined in the RGB color system as imaging data.
  • The image processing apparatus further includes a second calculation unit that calculates, using the spectral radiance of the illumination light and a color matching function, coordinate values in the RGB color system corresponding to the spectral radiance of the illumination light, and a third calculation unit that calculates the white balance in the imaging apparatus based on the ratio of the calculated coordinate values.
  • The image processing apparatus further includes a fourth calculation unit that calculates the spectral reflectance of the subject from second imaging data obtained by imaging the subject with the imaging device under the illumination environment, using a second estimation matrix calculated based on the spectral radiance of the illumination light, the spectral sensitivity of the imaging apparatus, and the autocorrelation matrix of the spectral reflectances of colors that can be included in the subject.
  • The image processing device further includes a generation unit that generates, based on the spectral reflectance of the subject calculated by the fourth calculation unit, image data that would be acquired if the subject were imaged under a predetermined illumination environment.
  • According to another aspect of the present invention, there is provided an image processing device capable of performing image processing on imaging data captured by an imaging device.
  • The image processing apparatus includes an input unit that receives first imaging data obtained by imaging, with an imaging device, at least a part of the light incident on a subject through a diffusing member under an illumination environment; a selection unit that selects, according to an external command, one of calculation matrices predetermined for each type of a plurality of light source candidates that can be used to provide the illumination environment; and a first calculation unit that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the calculation matrix selected by the selection unit, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
  • Each of the calculation matrices is an autocorrelation matrix of a matrix indicating the spectral radiance of the light source candidate.
  • According to still another aspect of the present invention, there is provided an image processing apparatus capable of performing image processing on imaging data captured by an imaging apparatus.
  • The image processing apparatus includes an input unit that receives first imaging data obtained by imaging, with an imaging device, at least a part of the light incident on a subject through a diffusing member under an illumination environment; a selection unit that selects, according to an external command, one of a plurality of first estimation matrices calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and a first calculation unit that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data using the first estimation matrix selected by the selection unit.
  • the first estimation matrix is calculated based on the autocorrelation matrix of the matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
  • According to yet another aspect of the present invention, there is provided an image processing apparatus capable of performing image processing on imaging data captured by an imaging apparatus.
  • The image processing apparatus includes an input unit that receives first imaging data obtained by imaging, with an imaging device, at least a part of the light incident on a subject through a diffusing member under an illumination environment; a first calculation unit that calculates candidates for the spectral radiance of the illumination light incident on the subject from the first imaging data, using a plurality of first estimation matrices respectively calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and an evaluation unit that evaluates each calculated spectral radiance candidate by comparison with a predetermined reference pattern and outputs one of them as the spectral radiance of the illumination light under the illumination environment.
  • the first estimation matrix is calculated based on the autocorrelation matrix of the matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
  • The image processing method includes a step of acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment, and a step of calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the autocorrelation matrix of the spectral radiances of light source candidates that can be used to provide the illumination environment, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
  • The image processing method includes a step of acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; a step of selecting one of calculation matrices predetermined for each type of a plurality of light source candidates that can be used to provide the illumination environment; and a step of calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the selected calculation matrix, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
  • Each of the calculation matrices is an autocorrelation matrix of a matrix indicating the spectral radiance of the light source candidate.
  • The image processing method includes a step of acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; a step of selecting one of a plurality of first estimation matrices calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and a step of calculating the spectral radiance of the illumination light incident on the subject from the first imaging data using the selected first estimation matrix.
  • the first estimation matrix is calculated based on the autocorrelation matrix of the matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
  • The image processing method includes a step of acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; a step of calculating candidates for the spectral radiance of the illumination light incident on the subject from the first imaging data, using a plurality of first estimation matrices respectively calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and a step of evaluating each calculated spectral radiance candidate by comparison with a predetermined reference pattern and outputting one of them as the spectral radiance of the illumination light under the illumination environment.
  • the first estimation matrix is calculated based on the autocorrelation matrix of the matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
  • According to the present invention, it is possible to easily calculate the spectral radiance of the illumination light applied to the subject using the imaging device for imaging the subject.
  • FIG. 1 is a functional configuration diagram of an image processing device according to a first embodiment of the present invention. FIG. 2 is a diagram for describing a method for acquiring the imaging data to be processed by the image processing apparatus according to the first embodiment. FIG. 3 is a diagram for describing processing for generating the autocorrelation matrix of spectral radiance according to the first embodiment.
  • Ax1, Ax2 optical axis; OBJ subject; 1, 1A, 1B image processing device; 10, 20 input unit; 11, 11A, 11B, 11C, 11D spectral radiance calculation unit; 12 estimation matrix calculation unit; 13, 13A light source data storage unit; 14 tristimulus value conversion unit; 15 coordinate conversion unit; 16 white balance calculation unit; 17 estimation matrix storage unit; 18 selection unit; 19 evaluation unit; 21 spectral reflectance calculation unit; 22 estimation matrix calculation unit; 23 spectral reflectance data storage unit; 24 image data generation unit; 25 coordinate conversion unit; 100, 100A, 100B, 100C illumination spectrum estimation unit; 150 computer body; 152 monitor; 154 keyboard; 156 mouse; 162 memory; 164 fixed disk; 166 FD drive; 168 CD-ROM drive; 170 communication interface; 200 color reproduction unit; 300 light source; 400 imaging device; 402 diffusing member.
  • FIG. 1 is a functional configuration diagram of an image processing apparatus 1 according to the first embodiment of the present invention.
  • The image processing apparatus 1 receives first imaging data g(1)RGB(m, n) and second imaging data g(2)RGB(m, n) captured by an imaging apparatus described later, and executes the image processing method according to the present embodiment on these data.
  • the image processing apparatus 1 includes an illumination spectrum estimation unit 100 and a color reproduction unit 200.
  • the illumination spectrum estimation unit 100 calculates the spectral radiance E (1) (illumination spectrum) of the illumination light incident on the subject using the first imaging data g (1) RGB (m, n).
  • the color reproduction unit 200 calculates the spectral reflectance of the subject from the second imaging data g (2) RGB (m, n) using the calculated spectral radiance E (1) .
  • the color reproduction unit 200 outputs image data g (OUT) RGB (m, n) obtained by performing color reproduction of the subject based on the calculated spectral reflectance of the subject.
  • the image data g (OUT) RGB (m, n) output from the color reproduction unit 200 is typically output to an output device (not shown) such as a display device (display) or a printing device (printer). Alternatively, it may be stored in a storage device (not shown).
  • the image processing apparatus 1 is typically realized by hardware, but a part or all of the image processing apparatus 1 may be realized by software as will be described later.
  • FIG. 2 is a diagram for describing a method for acquiring imaging data to be processed in image processing apparatus 1 according to the first embodiment of the present invention.
  • FIG. 2 shows a case where the subject OBJ is imaged under a predetermined illumination environment.
  • FIG. 2A shows a procedure for acquiring the first imaging data g(1)RGB(m, n), and FIG. 2B shows a procedure for acquiring the second imaging data g(2)RGB(m, n).
  • the imaging device 400 is used for acquisition (imaging) of imaging data.
  • the imaging apparatus 400 is a digital still camera or a digital video camera, and includes an imaging element (typically, CCD, CMOS, etc.) having spectral sensitivity characteristics in a specific wavelength band.
  • the imaging element includes a plurality of pixels arranged in a matrix, and outputs luminance corresponding to the intensity of light incident on each pixel as imaging data.
  • the luminance output from each image sensor has a value corresponding to the spectral sensitivity.
  • a specific wavelength band that can be imaged by the imaging apparatus is referred to as a band.
  • In the present embodiment, a case of using the imaging device 400 having image sensors for the three bands R, G, and B will be described.
  • As the device structure, either a structure in which a plurality of types of image sensors are formed on the same substrate or a structure in which each type of image sensor is formed on a separate substrate can be adopted.
  • To give the image sensors different spectral sensitivities, the spectral sensitivity of the element itself may be made different for each band, or elements having the same spectral sensitivity may be used with R, G, and B color filters provided on the input light side of each element.
  • the imaging data output by the imaging apparatus 400 is three-dimensional color information of R, G, and B luminance values (typically, each of 12 bits: 0 to 4095 gradations).
  • the imaging data output by the imaging apparatus 400 is defined in the RGB color system.
  • (m, n) of the imaging data g (1) RGB (m, n), g (2) RGB (m, n) represents the coordinates of the corresponding pixel in the imaging device of the imaging device 400.
  • Each of the imaging data g(1)RGB(m, n) and g(2)RGB(m, n) is expressed as [(luminance value detected by the R image sensor at coordinates (m, n)), (luminance value detected by the G image sensor at coordinates (m, n)), (luminance value detected by the B image sensor at coordinates (m, n))].
  • illumination light emitted from some light source 300 is irradiated on the subject OBJ.
  • That is, the imaging device 400 is used to image, through the diffusing member 402, at least a part of the light that is incident on the subject OBJ from the light source 300; in other words, the light after passing through the diffusing member 402 is imaged.
  • More specifically, the imaging device 400 is arranged so that its optical axis Ax1 lies on one of the paths along which the illumination light enters the subject OBJ, and the diffusing member 402 is disposed on this optical axis Ax1 between the imaging device 400 and the light source 300 (preferably in the immediate vicinity of the imaging device 400).
  • The paths of the illumination light incident on the subject OBJ include a path along which light is directly incident on the subject OBJ from the light source 300 and a path along which light from the light source 300 is reflected by a wall material or the like and then enters the subject OBJ.
  • the diffusion member 402 is a member for spatially diffusing the light imaged by the imaging device 400, that is, for spatially averaging, and a milky white diffusion plate having a known spectral transmittance is typically used. Alternatively, an integrating sphere or the like may be used. By using such a diffusing member 402, the intensity distribution of the illumination light incident on the imaging device 400 can be made uniform, thereby increasing the estimation accuracy of the spectral radiance of the illumination light described later.
  • As the diffusing member 402, a diffusion plate having a predetermined incident angle characteristic (generally referred to as a cosine collector, a cosine diffuser, a cosine receptor, or the like) may be used.
  • the incident intensity of light after passing through the diffusing member 402 is indicated by a cosine function (cosine) with respect to an angle (solid angle) with respect to the optical axis Ax1 of the imaging device 400.
  • the first imaging data g (1) RGB (m, n) acquired according to the above procedure includes color information reflecting illumination light incident on the subject OBJ under the illumination environment.
  • The second imaging data g(2)RGB(m, n) is acquired by imaging the subject OBJ using the same imaging device 400 as in FIG. 2A.
  • the diffusing member 402 is not disposed on the optical axis Ax2 of the imaging device 400.
  • The imaging device 400 used for acquiring the first imaging data g(1)RGB(m, n) and the imaging device 400 used for acquiring the second imaging data g(2)RGB(m, n) need not be the same; different imaging devices may be used as long as at least the spectral sensitivity of each imaging device is substantially known.
  • It is preferable to match the optical axis Ax1 of the imaging device 400 when capturing the first imaging data g(1)RGB(m, n) in FIG. 2A with the optical axis Ax2 of the imaging device 400 when capturing the second imaging data g(2)RGB(m, n) in FIG. 2B.
  • the second imaging data g (2) RGB (m, n) acquired in FIG. 2B is determined mainly depending on the reflected light from the subject OBJ. This reflected light is reflected by the subject OBJ and propagates in the opposite direction on the optical axis Ax2, and the illumination light that generates this reflected light is mainly directed toward the subject OBJ on the optical axis Ax2 of the imaging device 400. Propagate. Therefore, by capturing the illumination light that generates the reflected light as the first imaging data g (1) RGB (m, n), a more appropriate spectral radiance of the illumination light can be calculated.
  • Next, the calculation of the spectral radiance E(1) (illumination spectrum) of the illumination light incident on the subject, executed by the illumination spectrum estimation unit 100, will be described.
  • The spectral radiance E(1) of the illumination light should originally be a continuous function of the wavelength λ, but in the present embodiment, discrete values obtained by sampling the visible light region (380 to 780 nanometers) with a predetermined wavelength width (1 nanometer width) are used.
  • In the matrix indicating the spectral radiance E(1), the luminance value at each wavelength is set in the diagonal elements, and zero is set in the elements other than the diagonal elements.
  • the illumination spectrum estimation unit 100 includes an input unit 10, a spectral radiance calculation unit 11, an estimation matrix calculation unit 12, and a light source data storage unit 13.
  • The input unit 10 receives the first imaging data g(1)RGB(m, n) obtained by imaging, with the imaging device 400, at least a part of the light incident on the subject OBJ through the diffusing member 402 under the illumination environment. The input unit 10 then generates, based on the first imaging data g(1)RGB(m, n), imaging data g(1)RGB representative of the first imaging data and outputs it.
  • the imaging data g (1) RGB is linearized color data composed of three luminance values (representative values) of R, G, and B.
  • The input unit 10 includes logic for averaging the luminance values included in the first imaging data g(1)RGB(m, n); it averages the luminance values separately for each of R, G, and B, and outputs the averaged values (R, G, B) as the imaging data g(1)RGB.
  • Further, the input unit 10 may perform processing for canceling the inverse gamma characteristic so that the first imaging data g(1)RGB(m, n) is linearized.
  • In general, a display device has a non-linear relationship (gamma characteristic) between an input signal level and the actually displayed luminance level. To display an image adapted to human vision by canceling this non-linearity in the display device, the imaging device 400 often outputs imaging data to which a non-linearity (inverse gamma characteristic) opposite to the gamma characteristic of the display device has been applied; in such a case, the first imaging data g(1)RGB(m, n) with the inverse gamma characteristic is generated.
  • the gamma characteristic and the inverse gamma characteristic can be expressed as a power function.
  • the first imaging data g (1) RGB (m, n) can be linearized according to the following arithmetic expression.
  • This lookup table is a data table in which the results of the above conversion formula are stored in advance in association with every luminance value that the input imaging data can take. Since the converted value can be obtained simply by referring to this input-output correspondence, the amount of calculation can be greatly reduced.
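  • As an illustration of the linearization and lookup-table approach, the following sketch assumes 12-bit imaging data and assumes that linearization raises each normalized luminance value to the power of an inverse gamma value γc (the exact arithmetic expression and γc value in the specification are not reproduced here); the averaging into the representative imaging data g(1)RGB is also shown. All names and the γc value are illustrative assumptions.

    import numpy as np

    GAMMA_C = 2.2          # assumed inverse gamma value γc of the imaging device (illustrative)
    MAX_LEVEL = 4095       # 12-bit imaging data: 0 to 4095 gradations

    # Lookup table: the linearized value is precomputed for every luminance value the
    # input imaging data can take, so linearization becomes one table reference per value.
    LINEARIZE_LUT = ((np.arange(MAX_LEVEL + 1) / MAX_LEVEL) ** GAMMA_C) * MAX_LEVEL

    def linearize(raw_rgb):
        """Cancel the inverse gamma characteristic of the imaging data (sketch)."""
        return LINEARIZE_LUT[np.asarray(raw_rgb, dtype=np.int64)]

    def representative_rgb(raw_rgb):
        """Average the linearized first imaging data separately for R, G and B to
        obtain the representative imaging data g(1)_RGB (a 3-element vector)."""
        linear = linearize(raw_rgb)                # shape (height, width, 3)
        return linear.reshape(-1, 3).mean(axis=0)  # [R, G, B]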
  • The spectral radiance calculation unit 11 calculates, using the first estimation matrix W(1) calculated by the estimation matrix calculation unit 12 described later, the spectral radiance E(1) of the illumination light incident on the subject OBJ from the imaging data g(1)RGB. More specifically, the spectral radiance calculation unit 11 calculates the spectral radiance E(1) of the illumination light based on the matrix product of the first estimation matrix W(1) and the imaging data g(1)RGB.
  • In the present embodiment, the spectral radiance E(1) of 401 rows × 401 columns sampled with the predetermined wavelength width is used, so the first estimation matrix W(1) is a matrix of (number of wavelength components) × (number of bands of the imaging device 400), that is, 401 rows × 3 columns.
  • the estimation matrix calculation unit 12 includes the autocorrelation matrix B of the spectral radiance of the light source candidates that can be used to provide the illumination environment in the subject OBJ, the spectral transmittance f (1) of the diffusing member 402, and the spectral of the imaging device 400. Based on the sensitivity S, a first estimation matrix W (1) is calculated.
  • the spectral sensitivity S is a matrix of 401 rows ⁇ 3 columns
  • the spectral transmittance f (1) is a matrix of 401 rows ⁇ 3 columns.
  • The light (spectrum) that has passed through the diffusing member 402 and is incident on the imaging device 400 depends on the spectral radiance E(1)(λ) of the illumination light applied to the diffusing member 402 (or the subject OBJ) and on the spectral transmittance f(1)(λ) of the diffusing member 402.
  • the spectral transmittance f (1) ( ⁇ ) of the diffusing member 402 is assumed to be constant over the entire diffusing member 402.
  • Such a relationship can be expressed as a relational expression shown in Expression (1).
  • n i (m, n) is additive noise generated by white noise or the like appearing in each image sensor, and is a value depending on the characteristics of the image sensor and the lens of the imaging apparatus 400, the illumination environment, and the like.
  • In practice, a matrix arithmetic expression sampled with a predetermined wavelength width (typically 1 nanometer width) is used. That is, the integral of the first term on the right side of equation (1) is realized by a matrix calculation of the spectral sensitivity S, which is a matrix indicating the sensitivity of each image sensor at each wavelength, the spectral radiance E(1), which is a matrix indicating the radiance at each wavelength, and the spectral transmittance f(1), which is a matrix indicating the transmittance at each wavelength.
  • the spectral sensitivity S and the spectral transmittance f (1) are already known.
  • the additive noise n i (m, n) is generally a sufficiently small value, and therefore, if ignored from the expression (1), the following matrix operation expression can be derived from the expression (1).
  • E(1) = W(1) · g(1)    (3)
  • W (1) is a first estimation matrix.
  • The first estimation matrix W(1) is calculated by the Wiener estimation method described below. Specifically, the first estimation matrix W(1) is derived as shown in equation (5) by modifying equation (2) after defining the system matrix I as in equation (4).
  • B is an autocorrelation matrix (hereinafter also referred to as “calculation matrix”) of spectral radiances of light source candidates that can be used to provide an illumination environment.
  • That is, the spectral radiances of a plurality of light source candidates are acquired in advance, and the spectral radiance E(1) of the illumination light is estimated from a statistical viewpoint by utilizing the correlation with the spectral radiance of each light source. In other words, statistical data acquired for each type of light source is prepared in advance, and the spectral radiance E(1) of the illumination light is calculated according to the characteristics of the statistical data.
  • Since the autocorrelation matrix B of the spectral radiances serves as the reference for estimating the spectral radiance E(1) of the illumination light, it is preferable to use statistical data appropriate to the type of light source that is likely to be used to provide the illumination environment, for example, according to the light emission principle of fluorescent lamps, incandescent lamps, xenon lamps, mercury lamps, and the like.
  • The spectral radiance of such a light source can be obtained experimentally in advance for each light source, or statistical data standardized by the International Commission on Illumination (CIE), ISO (International Organization for Standardization), or JIS (Japanese Industrial Standards) may be used.
  • FIG. 3 is a diagram for describing processing for generating an autocorrelation matrix B of spectral radiance according to the first embodiment of the present invention.
  • First, a light source group matrix Est is generated with the spectral radiance values of at least one type of light source candidate (light source 1 to light source N) as its elements. That is, letting the component value (radiance) at each sampling wavelength λj (1 ≤ j ≤ k) of light source i (1 ≤ i ≤ N) be ei(λj), the light source group matrix Est is created with the component values ei(λj) arranged in the row direction.
  • an autocorrelation matrix B is calculated based on the group matrix Est according to the following arithmetic expression.
  • Since the spectral radiance E(1) in the present embodiment is sampled with a width of 1 nanometer over the visible light region, it is necessary to use a group matrix Est having the same sampling interval (number of elements). Accordingly, the group matrix Est, in which N matrices of 401 rows × 1 column each indicating the spectral radiance of one light source are combined, is a 401 row × N column matrix, and the autocorrelation matrix of the group matrix Est is a 401 row × 401 column matrix.
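  • As a sketch of the procedure of FIG. 3, the group matrix Est can be assembled column by column from the sampled spectral radiances and its autocorrelation taken; whether expression (6) includes a normalization (for example by the number of light sources N) is not reproduced here, so the unnormalized product is assumed.

    import numpy as np

    def autocorrelation_matrix(spectra):
        """Build the autocorrelation matrix B from light source spectral radiances (sketch).

        Each entry of `spectra` is a 401-element vector: the spectral radiance of one
        light source candidate sampled at 1 nm steps over 380 to 780 nm.
        """
        est = np.column_stack(spectra)   # group matrix Est: 401 rows x N columns
        return est @ est.T               # assumed form of expression (6): Est . Est^T (401 x 401)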
  • As the spectral radiance of a light source, for example, the spectral radiance emitted from a single light source such as a fluorescent lamp or an incandescent lamp may be used, or the spectral radiance generated by combining a plurality of types of light sources may be used; outdoors, spectral radiance such as that of sunlight may also be included. That is, in this embodiment, in order to estimate the spectral radiance E(1) of the illumination light, it is preferable to use the autocorrelation matrix B obtained from the spectral radiances of light sources that are likely to be used to provide the illumination environment.
  • the light source data storage unit 13 stores in advance the autocorrelation matrix B, which is an arithmetic matrix, calculated by the procedure as described above.
  • the estimation matrix calculation unit 12 calculates the system matrix I based on the spectral transmittance f (1) of the diffusing member 402 stored in advance and the spectral sensitivity S of the imaging device 400 according to the above-described equation (4).
  • the first estimation matrix W (1) is calculated based on the system matrix I and the autocorrelation matrix B read from the light source data storage unit 13 in accordance with the above equation (5).
  • Then, the spectral radiance calculation unit 11 calculates the spectral radiance E(1) of the illumination light according to the above equation (3), based on the first estimation matrix W(1) from the estimation matrix calculation unit 12 and the imaging data g(1)RGB from the input unit 10.
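  • A minimal sketch of this estimation flow is given below, assuming the standard Wiener estimation form W(1) = B·I^t·(I·B·I^t)^(-1) for equations (4) and (5) and assuming the system matrix I combines the spectral sensitivity and the diffuser transmittance wavelength by wavelength; the exact expressions in the specification are not reproduced here.

    import numpy as np

    def first_estimation_matrix(B, S, f1):
        """Sketch of equations (4)/(5): first estimation matrix W(1), 401 rows x 3 columns.

        B  : 401 x 401 autocorrelation matrix of light source spectral radiances
        S  : 401 x 3 spectral sensitivity of the imaging device
        f1 : 401-element spectral transmittance of the diffusing member
        """
        I = (S * f1[:, None]).T                        # assumed system matrix, 3 x 401
        return B @ I.T @ np.linalg.inv(I @ B @ I.T)    # W(1) = B I^t (I B I^t)^-1

    def estimate_illumination_spectrum(W1, g1_rgb):
        """Equation (3): E(1) = W(1) . g(1)_RGB, a 401-element spectral radiance."""
        return W1 @ g1_rgb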
  • The spectral radiance E(1) of the illumination light calculated by the spectral radiance calculation unit 11 is used for the white balance calculation processing and the color reproduction processing described later.
  • the illumination spectrum estimation unit 100 further includes a tristimulus value conversion unit 14, a coordinate conversion unit 15, and a white balance calculation unit 16. These parts calculate the white balance of the imaging apparatus 400 based on the calculated spectral radiance E (1) of the illumination light. Based on this white balance value, it is possible to perform white balance adjustment for mutually adjusting the levels of the R, G, and B luminance values output from the image sensor of the imaging apparatus 400.
  • the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z in the XYZ color system from the spectral radiance E (1) defined in the wavelength region.
  • the tristimulus values X, Y, and Z indicate characteristic values when it is assumed that the human has observed the spectral radiance E (1) in the illumination environment where the subject OBJ is imaged. More specifically, the tristimulus values X, Y, and Z of the XYZ color system for the spectral radiance E (1) of the illumination light are expressed by the following equation (7).
  • This color matching function hi(λ) is defined by the International Commission on Illumination (CIE).
  • In practice, the tristimulus value conversion unit 14 realizes an operation corresponding to equation (7) by the matrix operation shown below.
  • Tristimulus values [X(1), Y(1), Z(1)] = ht · E(1)    (8)
  • the matrix h is a matrix of 401 rows ⁇ 3 columns whose elements are values at the respective sampling wavelengths of the color matching function h i ( ⁇ ).
  • The coordinate conversion unit 15 converts the tristimulus values X(1), Y(1), Z(1) into coordinate values R(1), G(1), B(1) defined in the RGB color system. More specifically, the coordinate conversion unit 15 calculates the coordinate values R(1), G(1), and B(1) defined in the RGB color system according to the arithmetic expressions shown below.
  • R(1) = a11·X(1) + a12·Y(1) + a13·Z(1)
  • G(1) = a21·X(1) + a22·Y(1) + a23·Z(1)
  • B(1) = a31·X(1) + a32·Y(1) + a33·Z(1)
  • Here, a11 to a33 are the elements of a 3 row × 3 column transformation matrix representing the correspondence between the colorimetric values of the subject (XYZ color system) and the signal values actually recorded by the imaging apparatus (RGB color system).
  • the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) .
  • the white balance is adjusted by independently adjusting the output gains of the image pickup elements of the respective colors constituting the image pickup apparatus 400. That is, the adjustment gain to be multiplied by the R, G, and B image sensors is 1 / R (1) : 1 / G (1) : 1 / B (1) .
  • The white balance calculation unit 16 outputs, as the white balance, the ratio of the coordinate values R(1), G(1), B(1) or the inverse ratio 1/R(1) : 1/G(1) : 1/B(1).
  • the white balance output from the white balance calculation unit 16 is used for manual gain adjustment by the user.
  • a gain adjustment unit (not shown) of the imaging device 400 may be provided so that the gain adjustment unit automatically adjusts the gain of the imaging device 400.
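  • The white balance computation of equations (7)/(8) and the subsequent XYZ-to-RGB conversion can be sketched as below; `h` (the 401 x 3 matrix of sampled color matching functions) and `A_xyz2rgb` (the 3 x 3 matrix a11..a33) are data that must be supplied, and the normalization of the gains is an assumed convention for illustration only.

    import numpy as np

    def white_balance_gains(E1, h, A_xyz2rgb):
        """Sketch: adjustment gains 1/R(1) : 1/G(1) : 1/B(1) from the illumination spectrum.

        E1        : 401-element spectral radiance of the illumination light
        h         : 401 x 3 color matching functions sampled at the same wavelengths
        A_xyz2rgb : 3 x 3 matrix (a11..a33) mapping XYZ values to the device RGB values
        """
        xyz = h.T @ E1              # equation (8): tristimulus values X(1), Y(1), Z(1)
        rgb = A_xyz2rgb @ xyz       # coordinate values R(1), G(1), B(1)
        gains = 1.0 / rgb           # gains in the ratio 1/R(1) : 1/G(1) : 1/B(1)
        return gains / gains[1]     # normalized so the G gain is 1 (an assumed convention)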
  • the color reproduction unit 200 includes an input unit 20, a spectral reflectance calculation unit 21, an estimation matrix calculation unit 22, a spectral reflectance data storage unit 23, an image data generation unit 24, and a coordinate conversion unit 25.
  • the input unit 20 receives second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ using the imaging device 400. Then, the input unit 20 outputs the second imaging data g (2) RGB (m, n) to the spectral reflectance calculation unit 21 according to the processing.
  • When the second imaging data g(2)RGB(m, n) is provided with an inverse gamma characteristic (non-linearity), the input unit 20 may also perform processing for canceling the inverse gamma characteristic, as with the input unit 10 described above. That is, when the inverse gamma value in the imaging apparatus 400 is γc, the second imaging data g(2)RGB(m, n) can be linearized according to the following arithmetic expression.
  • the spectral reflectance calculator 21 calculates the spectral reflectance of the subject OBJ from the second imaging data g (2) using the second estimation matrix W (2) calculated by the estimation matrix calculator 22 described later. Further, the spectral reflectance calculator 21 outputs image data g (OUT) RGB (m, n) that is color reproduction data of the subject OBJ under an arbitrary illumination environment. This color reproduction data is a reproduction of how the subject OBJ is observed under an arbitrary illumination environment based on the spectral reflectance of the subject OBJ.
  • the estimation matrix calculation unit 22 includes an autocorrelation matrix A calculated from spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E (1) of the illumination light calculated by the illumination spectrum estimation unit 100, and imaging. Based on the spectral sensitivity S of the apparatus 400, a second estimation matrix W (2) is calculated.
  • n i (m, n) is additive noise generated by white noise or the like appearing in each image sensor, and is a value depending on the characteristics of the image sensor and the lens of the imaging apparatus 400, the illumination environment, and the like.
  • In practice, a matrix arithmetic expression sampled with a predetermined wavelength width (typically 1 nanometer width) is used. That is, the integral of the first term on the right side of equation (9) is realized by a matrix calculation of the spectral sensitivity S, which is a matrix indicating the spectral sensitivity of each image sensor at each wavelength, the spectral radiance E(1), which is a matrix indicating the spectral radiance at each wavelength, and the spectral reflectance f(2)(m, n), which is a matrix indicating the spectral reflectance of the subject OBJ at each wavelength.
  • In the present embodiment, the spectral reflectance f(2)(m, n) is expressed as a matrix of 401 rows × 1 column for each pixel.
  • the additive noise n i (m, n) is generally a sufficiently small value, and therefore, if ignored from the expression (9), the following matrix operation expression can be derived from the expression (9).
  • W (2) is the second estimation matrix.
  • The second estimation matrix W(2) is calculated by the Wiener estimation method, similarly to the calculation of the first estimation matrix W(1) described above. Specifically, the second estimation matrix W(2) is derived as shown in equation (13) by modifying equation (11) after defining the system matrix H as in equation (12).
  • A is an autocorrelation matrix calculated from spectral reflectances of colors that can be included in the subject OBJ, and serves as a reference for estimating the spectral reflectance of the subject OBJ.
  • The autocorrelation matrix A can be determined by referring to the Standard Object Colour Spectra (SOCS) database of spectral reflectances standardized by ISO.
  • the spectral reflectance of the subject OBJ itself may be measured in advance by another method to determine the autocorrelation matrix A.
  • This autocorrelation matrix A is generated by a process similar to the process of generating the autocorrelation matrix B in FIG.
  • As the group matrix used to generate the autocorrelation matrix A, for example, the spectral reflectances of the individual colors of a color chart composed of a plurality of color samples can be used.
  • In the present embodiment, the spectral radiance E(1) of 401 rows × 401 columns, obtained by sampling the visible light region (380 to 780 nanometers) with a width of 1 nanometer, is used, so the autocorrelation matrix A is also a 401 row × 401 column matrix.
  • The autocorrelation matrix A is stored in advance in the spectral reflectance data storage unit 23. A principal component analysis technique may also be used instead of the Wiener estimation technique described above.
  • As described above, the estimation matrix calculation unit 22 calculates the second estimation matrix W(2) according to equations (12) and (13), based on the spectral radiance E(1) of the illumination light, the spectral sensitivity S of the imaging device 400, and the autocorrelation matrix A obtained from the spectral reflectances of colors that can be included in the subject OBJ. Then, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) according to equation (11) to calculate the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2)RGB(m, n).
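  • A corresponding sketch for the color reproduction side, again assuming the standard Wiener estimation form for equations (12)/(13) with a system matrix H that combines the spectral sensitivity and the estimated illumination spectrum wavelength by wavelength (the specification's exact expressions are not reproduced here):

    import numpy as np

    def second_estimation_matrix(A, S, E1):
        """Sketch of equations (12)/(13): second estimation matrix W(2), 401 rows x 3 columns.

        A  : 401 x 401 autocorrelation matrix of candidate spectral reflectances
        S  : 401 x 3 spectral sensitivity of the imaging device
        E1 : 401-element spectral radiance of the illumination light
        """
        H = (S * E1[:, None]).T                        # assumed system matrix, 3 x 401
        return A @ H.T @ np.linalg.inv(H @ A @ H.T)    # W(2) = A H^t (H A H^t)^-1

    def spectral_reflectance(W2, g2_rgb):
        """Equation (11): f(2)(m, n) = W(2) . g(2)_RGB(m, n), a 401-element reflectance."""
        return W2 @ g2_rgb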
  • The spectral reflectance f(2)(m, n) calculated in this way represents the essential color of the subject OBJ. By using the spectral reflectance f(2)(m, n), color reproduction can be performed for the subject OBJ as it would be observed under any selected illumination environment.
  • That is, the tristimulus values X, Y, and Z of the XYZ color system when an object having a spectral reflectance f(m, n; λ) is observed under the condition of an arbitrary spectral radiance E(λ) are expressed by equation (14) below.
  • The spectral radiance E(λ) used for color reproduction can be arbitrarily determined; in this embodiment, however, the case where color reproduction is performed under the same illumination environment as when the subject OBJ was imaged is illustrated.
  • g(OUT)XYZ(m, n) = ht · E(1) · W(2) · g(2)RGB(m, n)    (15)
  • the image data g (OUT) XYZ (m, n) is defined as coordinate values of the XYZ color system.
  • the coordinate conversion unit 25 converts the image data g (OUT) XYZ (m, n) into image data g (OUT) RGB (m, n) defined in the RGB color system. Since the coordinate conversion process executed by coordinate conversion unit 25 is the same as the process in coordinate conversion unit 15 described above, detailed description will not be repeated.
  • In this way, the image data g(OUT)RGB(m, n), which is the color reproduction data of the subject OBJ, is generated from the second imaging data g(2)RGB(m, n).
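  • Equation (15) and the conversion back to the RGB color system can be sketched per pixel as follows, treating E(1) as a diagonal matrix over the 401 wavelength samples (implemented here as an elementwise product) and reusing the same 3 x 3 conversion matrix; when reproduction under a different illumination environment is desired, another spectral radiance can be passed in place of E(1).

    import numpy as np

    def reproduce_pixel(E_repro, W2, g2_rgb, h, A_xyz2rgb):
        """Sketch of equation (15): color reproduction of one pixel.

        E_repro : 401-element spectral radiance used for reproduction (here E(1))
        W2      : 401 x 3 second estimation matrix
        g2_rgb  : 3-element second imaging data of the pixel
        """
        f2 = W2 @ g2_rgb             # estimated spectral reflectance f(2)(m, n) of the pixel
        xyz = h.T @ (E_repro * f2)   # h^t . E(1) . f(2): image data g(OUT)_XYZ(m, n)
        return A_xyz2rgb @ xyz       # image data g(OUT)_RGB(m, n) in the RGB color system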
  • The coordinate conversion unit 25 may also include processing for giving a gamma characteristic. The processing for imparting the gamma characteristic is realized by raising the generated image data g(OUT)RGB(m, n) to the power of the gamma value γd.
  • the amount of calculation can be significantly reduced by using a lookup table (LUT).
  • the configuration in which the image data generation unit 24 performs color reproduction under the same illumination environment as when the subject OBJ is imaged is illustrated, but the illumination environment in which color reproduction is performed may be different. That is, the spectral radiance E used by the image data generation unit 24 to generate the image data g (OUT) XYZ (m, n) can be arbitrarily determined.
  • FIG. 4 is a flowchart showing an overall processing procedure in image processing apparatus 1 according to the first embodiment of the present invention.
  • First, the input unit 10 receives the first imaging data g(1)RGB(m, n) obtained by imaging at least part of the light incident on the subject OBJ through the diffusing member 402 (step S100). Subsequently, the input unit 10 generates the imaging data g(1)RGB representing the received first imaging data g(1)RGB(m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
  • Next, the estimation matrix calculation unit 12 calculates the first estimation matrix W(1) based on the autocorrelation matrix B of the spectral radiances of light source candidates that can be used to provide the illumination environment of the subject OBJ, the spectral transmittance f(1) of the diffusing member 402, and the spectral sensitivity S of the imaging device 400 (step S104).
  • Then, the spectral radiance calculation unit 11 calculates, using the first estimation matrix W(1) calculated in step S104, the spectral radiance E(1) of the illumination light incident on the subject OBJ from the imaging data g(1)RGB (step S106).
  • the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z in the XYZ color system to coordinate values R (1) , G (1) , and B (1) defined in the RGB color system. (Step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) (step S114).
  • the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120).
  • the input unit 20 linearizes the second imaging data as necessary.
  • Next, the estimation matrix calculation unit 22 calculates the second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) calculated in step S122 to calculate the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2)RGB(m, n) (step S124).
  • Further, the image data generation unit 24 generates the image data g(OUT)XYZ(m, n) in which the color reproduction of the subject OBJ is performed, using the color matching function h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m, n) of the subject OBJ calculated in step S124 (step S126). Then, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m, n) generated in step S126 into the image data g(OUT)RGB(m, n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)RGB(m, n).
  • the spectral radiance of the illumination light applied to the subject OBJ can be calculated using the imaging device for imaging the subject OBJ. Therefore, the spectral radiance can be easily acquired without using a dedicated measuring device for measuring the spectral radiance of the illumination light.
  • Furthermore, since the spectral reflectance of the subject OBJ is accurately estimated, the colors that would be imaged (observed) under the illumination environment at the time of imaging can be reproduced appropriately.
  • In addition, since the white balance of the imaging device can be adjusted appropriately based on the spectral radiance of the illumination light, more accurate color reproduction can be realized without being affected by variations in the characteristics of the imaging device.
  • the second embodiment exemplifies a configuration in which a plurality of autocorrelation matrices are stored for each type of a plurality of light sources (for each category), and the user can select a suitable one for the illumination environment for imaging the subject OBJ.
  • FIG. 5 is a functional configuration diagram of an image processing apparatus 1A according to the second embodiment of the present invention.
  • The image processing apparatus 1A includes an illumination spectrum estimation unit 100A in place of the illumination spectrum estimation unit 100 in the image processing apparatus 1 shown in FIG. 1.
  • color reproduction unit 200 is similar to color reproduction unit 200 of image processing apparatus 1 shown in FIG. 1, and therefore detailed description thereof will not be repeated.
  • The illumination spectrum estimation unit 100A is provided with a light source data storage unit 13A in place of the light source data storage unit 13 in the illumination spectrum estimation unit 100 shown in FIG. 1. Since the other parts are the same as those in the first embodiment, detailed description will not be repeated.
  • The light source data storage unit 13A stores in advance autocorrelation matrices B1, B2, ..., BM, which are calculation matrices predetermined for each of M types of light source candidates that can be used to provide the illumination environment. Then, in accordance with an external command from the user or the like, the light source data storage unit 13A outputs the selected one of the autocorrelation matrices B1, B2, ..., BM stored in advance to the estimation matrix calculation unit 12.
  • For example, the spectral radiance (spectrum) of a general fluorescent lamp has a waveform with peaks at the wavelengths corresponding to the emission line spectrum of the mercury or the like enclosed in it, whereas the spectral radiance (spectrum) of an incandescent lamp is a relatively smooth, continuous waveform owing to its emission principle. In this way, the spectral radiance (spectrum) differs for each type of light source candidate, so in order to estimate the spectral radiance E(1) (illumination spectrum) of the illumination light incident on the subject OBJ, it is necessary to select the autocorrelation matrix B serving as the reference appropriately.
  • In general, a user having a certain level of prior knowledge can determine what kind of light source is used in the illumination environment in which the subject OBJ is imaged with the imaging device 400. For example, the user can determine whether the subject OBJ is imaged indoors or outdoors and, if the location is indoors, whether a fluorescent lamp or an incandescent lamp is used as the light source. Therefore, if a plurality of autocorrelation matrices are prepared in advance for each type of light source under a classification that the user can determine, and the user can arbitrarily select one according to the imaging conditions of the subject OBJ, the estimation accuracy of the spectral radiance E(1) (illumination spectrum) can be increased.
  • Therefore, the light source data storage unit 13A stores in advance a plurality of autocorrelation matrices B1, B2, ..., BM for each type of light source, such as "fluorescent lamp", "incandescent lamp", "xenon lamp", "mercury lamp", and "sunlight", and outputs the corresponding one to the estimation matrix calculation unit 12 as the autocorrelation matrix B in response to a selection command SEL from the user or the like. That is, the autocorrelation matrix B1 is generated only from the statistical data of the spectral radiances of "fluorescent lamps", and the autocorrelation matrix B2 is generated only from the statistical data of the spectral radiances of "incandescent lamps".
  • the estimation matrix calculation unit 12 estimates the spectral radiance E (1) of the illumination light based on the autocorrelation matrix B received from the light source data storage unit 13A.
  • Autocorrelation matrices that classify the types of light sources in more detail may also be prepared. Alternatively, an autocorrelation matrix may be generated based on the spectral radiance obtained when, for example, a "fluorescent lamp" and an "incandescent lamp" are used in combination. That is, it is preferable to store in advance in the light source data storage unit 13A autocorrelation matrices generated based on the various spectral radiances that can be assumed as illumination environments when the subject OBJ is imaged.
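  • A minimal sketch of the light source data storage unit 13A, under the assumption that the stored matrices are keyed by category name and selected by the external command SEL; the category names and data layout are illustrative.

    import numpy as np

    def build_light_source_storage(spectra_by_type):
        """Sketch of the light source data storage unit 13A.

        `spectra_by_type` maps a category name ("fluorescent", "incandescent", "xenon",
        "mercury", "sunlight", ...) to a list of 401-element spectral radiance vectors of
        light sources in that category. One autocorrelation matrix Bi is generated per
        category, from the spectra of that category only.
        """
        storage = {}
        for name, spectra in spectra_by_type.items():
            est = np.column_stack(spectra)   # group matrix Est for this category
            storage[name] = est @ est.T      # autocorrelation matrix Bi (401 x 401)
        return storage

    def select_autocorrelation_matrix(storage, sel):
        """Return the autocorrelation matrix B chosen by the selection command SEL."""
        return storage[sel]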
  • FIG. 6 is a flowchart showing the overall processing procedure in the image processing apparatus 1A according to the second embodiment of the present invention. Of the steps in the flowchart shown in FIG. 6, steps having the same contents as those in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
  • First, the input unit 10 receives the first imaging data g(1)RGB(m, n) obtained by imaging at least part of the light incident on the subject OBJ through the diffusing member 402 (step S100). Subsequently, the input unit 10 generates the imaging data g(1)RGB representing the received first imaging data g(1)RGB(m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
  • Next, the light source data storage unit 13A outputs one of the autocorrelation matrices stored in advance to the estimation matrix calculation unit 12 as the autocorrelation matrix B according to the selection command SEL (step S103). Thereafter, the estimation matrix calculation unit 12 calculates the first estimation matrix W(1) based on the autocorrelation matrix B from the light source data storage unit 13A, the spectral transmittance f(1) of the diffusing member 402, and the spectral sensitivity S of the imaging device 400 (step S104).
  • Then, the spectral radiance calculation unit 11 calculates, using the first estimation matrix W(1) calculated in step S104, the spectral radiance E(1) of the illumination light incident on the subject OBJ from the imaging data g(1)RGB (step S106).
  • the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z in the XYZ color system to coordinate values R (1) , G (1) , and B (1) defined in the RGB color system. (Step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) (step S114).
  • the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120).
  • the input unit 20 linearizes the second imaging data as necessary.
  • The estimation matrix calculation unit 22 calculates a second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) calculated in step S122 to calculate the spectral reflectance f(2)(m,n) of the subject OBJ from the second imaging data g(2)RGB(m,n) (step S124).
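As with the first estimation matrix, the formula for W(2) is not reproduced here; a Wiener-type sketch under the same assumed layout (k sampled wavelengths, 3×k sensitivity matrix) would be:

```python
import numpy as np

def second_estimation_matrix(A, E1, S):
    """Sketch of step S122.  A is the (k, k) autocorrelation matrix of subject
    reflectances, E1 the (k,) spectral radiance from step S106, and S the
    (3, k) spectral sensitivity of imaging device 400."""
    H = S @ np.diag(E1)  # maps a reflectance spectrum to RGB under illumination E(1)
    return A @ H.T @ np.linalg.inv(H @ A @ H.T)

def spectral_reflectance(W2, g2_rgb):
    """Step S124 for one pixel: f(2)(m, n) = W(2) @ g(2)RGB(m, n)."""
    return W2 @ g2_rgb
```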
  • The image data generation unit 24 uses the color matching function h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m,n) of the subject OBJ calculated in step S124 to generate image data g(OUT)XYZ(m,n) in which the color of the subject OBJ is reproduced (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m,n) generated in step S126 into image data g(OUT)RGB(m,n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)RGB(m,n).
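The wavelength-domain form of step S126 is not spelled out in this passage; per pixel it plausibly amounts to weighting the reflected spectrum E(1)(λ)·f(2)(m,n)(λ) by the color matching functions, as in this sketch (the (k, 3) layout of h is an assumption):

```python
def reproduce_pixel_xyz(h, E1, f2_mn):
    """Sketch of step S126 for one pixel: tristimulus values of the subject
    under illumination E(1).  h is a (k, 3) table of color matching functions."""
    return h.T @ (E1 * f2_mn)  # X, Y, Z of g(OUT)XYZ(m, n)
```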
  • In the second embodiment described above, the configuration has been illustrated in which any one of a plurality of autocorrelation matrices is selected in response to a selection command SEL from a user or the like, and the first estimation matrix W(1) is then generated based on the selected autocorrelation matrix. Here, the first estimation matrix W(1) is generated using the spectral transmittance f(1) of the diffusing member 402 and the spectral sensitivity S of the imaging device 400, and these values remain valid as long as the imaging device 400 and the diffusing member 402 are not exchanged. The first estimation matrices may therefore be calculated in advance.
  • FIG. 7 is a functional configuration diagram of an image processing device 1B according to a modification of the second embodiment of the present invention.
  • The image processing apparatus 1B includes an illumination spectrum estimation unit 100B instead of the illumination spectrum estimation unit 100 in the image processing apparatus 1 shown in FIG. 1.
  • color reproduction unit 200 is the same as that of image processing apparatus 1 shown in FIG. 1, and therefore detailed description will not be repeated.
  • The illumination spectrum estimation unit 100B includes an estimation matrix storage unit 17 instead of the estimation matrix calculation unit 12 and the light source data storage unit 13 in the illumination spectrum estimation unit 100 shown in FIG. 1. Since the other parts are the same as those in the first embodiment, detailed description will not be repeated.
  • The estimation matrix storage unit 17 stores in advance first estimation matrices W(1)1, W(1)2, ..., W(1)M calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide an illumination environment. In response to a selection command SEL from a user or the like, the estimation matrix storage unit 17 outputs the corresponding matrix to the spectral radiance calculation unit 11 as the first estimation matrix W(1).
  • The first estimation matrices W(1)1, W(1)2, ..., W(1)M are calculated from the autocorrelation matrices B1, B2, ..., BM stored in the light source data storage unit 13A of the image processing apparatus 1A according to the second embodiment.
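Because the imaging device 400 and the diffusing member 402 are fixed, these matrices can be computed once and then simply looked up. A small sketch (reusing the first_estimation_matrix sketch above; the light-source labels are hypothetical keys, not taken from the patent):

```python
def build_stored_matrices(B_by_label, f1, S):
    """Precompute W(1)1 ... W(1)M, as held by the estimation matrix storage unit 17.
    B_by_label maps a label such as "fluorescent" (hypothetical) to its Bi."""
    return {label: first_estimation_matrix(B, f1, S) for label, B in B_by_label.items()}

def select_first_estimation_matrix(stored, sel):
    """Step S105: return the stored matrix corresponding to the selection command SEL."""
    return stored[sel]
```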
  • FIG. 8 is a flowchart showing an overall processing procedure in image processing apparatus 1B according to the modification of the second embodiment of the present invention. Of the steps in the flowchart shown in FIG. 8, steps having the same contents as those in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
  • The input unit 10 accepts the first imaging data g(1)RGB(m,n) obtained by imaging, through the diffusing member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates imaging data g(1)RGB representing the received first imaging data g(1)RGB(m,n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
  • The estimation matrix storage unit 17 selects one of the first estimation matrices W(1)1, W(1)2, ..., W(1)M stored in advance according to the selection command SEL, and outputs the selected matrix to the spectral radiance calculation unit 11 as the first estimation matrix W(1) (step S105).
  • The spectral radiance calculation unit 11 uses the first estimation matrix W(1) selected in step S105 to calculate the spectral radiance E(1) of the illumination light incident on the subject OBJ from the imaging data g(1)RGB (step S106).
  • The tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E(1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z of the XYZ color system into coordinate values R(1), G(1), and B(1) defined in the RGB color system (step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging device 400 based on the ratio of the coordinate values R(1), G(1), and B(1) (step S114).
  • the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120).
  • the input unit 20 linearizes the second imaging data as necessary.
  • The estimation matrix calculation unit 22 calculates a second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) calculated in step S122 to calculate the spectral reflectance f(2)(m,n) of the subject OBJ from the second imaging data g(2)RGB(m,n) (step S124).
  • The image data generation unit 24 uses the color matching function h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m,n) of the subject OBJ calculated in step S124 to generate image data g(OUT)XYZ(m,n) in which the color of the subject OBJ is reproduced (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m,n) generated in step S126 into image data g(OUT)RGB(m,n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)RGB(m,n).
  • [Embodiment 3] In the second embodiment described above, the configuration has been illustrated in which a plurality of autocorrelation matrices are stored in advance, one for each type of light source, and the spectral radiance (spectrum) of the illumination light is estimated using an arbitrarily selected autocorrelation matrix. In contrast, in the third embodiment described below, a configuration will be exemplified in which the estimation results of the spectral radiance of the illumination light obtained using each of the plurality of autocorrelation matrices are evaluated, and the most appropriate estimation result is output.
  • The image processing apparatus according to the present embodiment is the same as the image processing apparatus 1 according to the first embodiment shown in FIG. 1, except that an illumination spectrum estimation unit 100C is provided instead of the illumination spectrum estimation unit 100.
  • color reproduction unit 200 is the same as that of image processing apparatus 1 shown in FIG. 1, and therefore detailed description will not be repeated.
  • FIG. 9 is a functional configuration diagram of illumination spectrum estimation unit 100C of the image processing device according to the third embodiment of the present invention.
  • the color reproduction unit 200 included in the image processing apparatus according to the present embodiment is not shown.
  • The illumination spectrum estimation unit 100C includes an input unit 10, spectral radiance calculation units 11A, 11B, 11C, and 11D, a selection unit 18, an evaluation unit 19, a tristimulus value conversion unit 14, a coordinate conversion unit 15, and a white balance calculation unit 16.
  • Since the input unit 10, the tristimulus value conversion unit 14, the coordinate conversion unit 15, and the white balance calculation unit 16 have been described in the first embodiment (FIG. 1), detailed description will not be repeated.
  • The spectral radiance calculation units 11A, 11B, 11C, and 11D use first estimation matrices W(1)1, W(1)2, W(1)3, and W(1)4, calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide an illumination environment, to calculate spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light incident on the subject OBJ from the imaging data g(1)RGB, respectively.
  • The first estimation matrices W(1)1, W(1)2, W(1)3, and W(1)4 are substantially the same as the first estimation matrices W(1)1, W(1)2, W(1)3, and W(1)4 stored in the estimation matrix storage unit 17 shown in FIG. 7.
  • The first estimation matrices W(1)1, W(1)2, W(1)3, and W(1)4 are calculated, according to the same procedure as described above, based on autocorrelation matrices B1, B2, B3, and B4 that are predetermined for each type of the plurality of light source candidates that can be used to provide an illumination environment.
  • Although four spectral radiance calculation units and four first estimation matrices are shown here, the number is not restricted to four.
  • FIG. 9 illustrates a configuration in which the first estimation matrices W(1)1, W(1)2, W(1)3, and W(1)4 are calculated in advance; alternatively, like the illumination spectrum estimation unit 100A, these matrices may be calculated dynamically for each calculation process.
  • In the following description, it is assumed that the first estimation matrix W(1)1 is calculated from the autocorrelation matrix B1 created based on statistical data of fluorescent lamps, the first estimation matrix W(1)2 from the autocorrelation matrix B2 created based on statistical data of incandescent lamps, the first estimation matrix W(1)3 from the autocorrelation matrix B3 created based on statistical data of xenon lamps, and the first estimation matrix W(1)4 from the autocorrelation matrix B4 created based on statistical data including all of the fluorescent lamps, incandescent lamps, and xenon lamps.
  • The spectral radiance calculation units 11A, 11B, 11C, and 11D output the calculated spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light to the selection unit 18, respectively.
  • The selection unit 18 selects one of the input spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light according to the evaluation result of the evaluation unit 19 described later, and outputs it as the spectral radiance E(1) of the illumination light.
  • The evaluation unit 19 evaluates which of the spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light calculated by the spectral radiance calculation units 11A, 11B, 11C, and 11D is estimated most appropriately. More specifically, the evaluation unit 19 evaluates the spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light by comparing them with reference patterns defined in advance.
  • As the reference patterns, reference patterns E(1)1AVE, E(1)2AVE, and E(1)3AVE are used, each calculated from the spectral radiances (statistical values or actually measured values) of the light sources used when generating the first estimation matrices W(1)1, W(1)2, and W(1)3 (or the corresponding autocorrelation matrices B1, B2, and B3). More specifically, for example, the reference pattern E(1)1AVE corresponding to the first estimation matrix W(1)1 is calculated by averaging the elements of the light source group matrix Est (see FIG. 3) from which the autocorrelation matrix B1 used in calculating the first estimation matrix W(1)1 was generated. That is, a spectral radiance (spectrum) representative of each of the fluorescent lamp, the incandescent lamp, and the xenon lamp is calculated in advance as a reference pattern.
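How this averaging is laid out is not shown in this passage; a minimal sketch, assuming the light source group matrix Est holds one measured spectrum per column (k wavelengths × N lamps of one type), is:

```python
import numpy as np

def reference_pattern(E_st):
    """Sketch of building a reference pattern such as E(1)1AVE by averaging the
    columns of the light source group matrix Est; the (k, N) layout is assumed."""
    return np.asarray(E_st).mean(axis=1)
```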
  • Note that the reference pattern corresponding to the first estimation matrix W(1)4 need not be calculated. This is because the autocorrelation matrix B4 corresponding to the first estimation matrix W(1)4 is created based on statistical data including all of the fluorescent lamps, incandescent lamps, and xenon lamps; even if a reference pattern were created from B4, the characteristics of each light source would be blurred and the reference pattern would be less effective.
  • A method by which the evaluation unit 19 evaluates the spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light will now be described.
  • FIG. 10 is a diagram for explaining the comparison process, performed by the evaluation unit 19, between the spectral radiances E(1)1, E(1)2, E(1)3 of the illumination light and the reference patterns E(1)1AVE, E(1)2AVE, E(1)3AVE.
  • FIG. 11 is a diagram for explaining the similarity calculation process in FIG. 10.
  • The evaluation unit 19 compares the spectral radiances E(1)1, E(1)2, and E(1)3 of the illumination light calculated by the spectral radiance calculation units 11A, 11B, and 11C with the reference patterns E(1)1AVE, E(1)2AVE, and E(1)3AVE, respectively, and calculates the comparison result (a similarity, as an example).
  • It is assumed here that the spectral radiances E(1)1, E(1)2, E(1)3 and the reference patterns E(1)1AVE, E(1)2AVE, E(1)3AVE are all normalized to the range of 0 to 1.
  • the evaluation unit 19 evaluates how similar each spectral radiance is to the corresponding reference pattern. Typically, the evaluation unit 19 calculates the similarity based on the deviation between the waveforms on the wavelength region.
  • FIG. 11 is a diagram for explaining a comparison process between the spectral radiance E (1) 1 of the illumination light calculated by the spectral radiance calculation unit 11A and the reference pattern E (1) 1AVE .
  • FIG. 11(A) shows a state in which the spectral radiance E(1)1 of the illumination light and the reference pattern E(1)1AVE are plotted over the same wavelength region, and FIG. 11(B) shows the process of calculating the deviation between them.
  • The evaluation unit 19 sequentially calculates the deviation (normalized value) err_j between the spectral radiance E(1)1 of the illumination light and the reference pattern E(1)1AVE at each sampling wavelength λj (1 ≀ j ≀ k). Subsequently, the evaluation unit 19 calculates the evaluation result (similarity) by taking the overall average of the deviations err_j over all sampling wavelengths λj. That is, the similarity SM can be calculated by an arithmetic expression using the deviations err_j at the sampling wavelengths λj.
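The expression itself appears only as an equation image in the original. A form consistent with the overall average of the deviations and the 0-to-1 normalization, offered purely as an assumption, is SM = 1 − (1/k)·Σj err_j:

```python
import numpy as np

def similarity(E_est, E_ref):
    """Hedged sketch of the similarity SM between an estimated spectral radiance
    and its reference pattern (both normalized to 0..1 over k sampling wavelengths)."""
    err = np.abs(np.asarray(E_est) - np.asarray(E_ref))  # deviation err_j at each wavelength
    return 1.0 - err.mean()  # close to 1 when the two waveforms nearly coincide
```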
  • FIG. 10 shows the measured similarity when the image processing method according to the present embodiment is performed under the illumination environment of a fluorescent lamp.
  • In the example of FIG. 10, the similarity of the spectral radiance E(1)1 of the illumination light estimated based on the first estimation matrix W(1)1 is the highest, and therefore the spectral radiance E(1)1 of the illumination light is output as the spectral radiance E(1). This evaluation result agrees with the fact that the measurement was actually performed under the fluorescent lamp illumination environment.
  • On the other hand, when the illumination environment is provided by a combination of a fluorescent lamp, an incandescent lamp, and a xenon lamp, or by a light source other than these, it may be more appropriate to output, as the spectral radiance E(1), the spectral radiance E(1)4 of the illumination light estimated based on the first estimation matrix W(1)4, which reflects the characteristics of all of the fluorescent lamp, the incandescent lamp, and the xenon lamp. Therefore, when the evaluation results (similarities) for the spectral radiances E(1)1, E(1)2, and E(1)3 of the illumination light are all below the allowable value, the evaluation unit 19 outputs the spectral radiance E(1)4 as the spectral radiance E(1) of the illumination light.
  • Although a configuration in which the spectral radiance calculation units 11A, 11B, 11C, and 11D calculate the spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light in parallel is illustrated, the calculation of the spectral radiance of the illumination light and the calculation of the similarity may instead be executed sequentially for each of the first estimation matrices. In that case, the sequential processing can be terminated once a similarity equal to or higher than a predetermined threshold (for example, 95%) is obtained.
  • the similarity may be calculated using a correlation coefficient or the like.
  • FIG. 12 is a flowchart showing an overall processing procedure in the image processing apparatus according to the third embodiment of the present invention. Of the steps in the flowchart shown in FIG. 12, steps having the same contents as those in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
  • FIG. 13 is a flowchart showing the procedure of the evaluation subroutine shown in step S108 shown in FIG.
  • The input unit 10 accepts the first imaging data g(1)RGB(m,n) obtained by imaging, through the diffusing member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates imaging data g(1)RGB representing the received first imaging data g(1)RGB(m,n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
  • The spectral radiance calculation units 11A, 11B, 11C, and 11D use the first estimation matrices W(1)1, W(1)2, W(1)3, and W(1)4, respectively, to calculate the spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light incident on the subject OBJ from the imaging data g(1)RGB (step S107).
  • The evaluation unit 19 executes an evaluation subroutine to evaluate which of the spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light calculated in step S107 has the highest estimation accuracy (step S108). Further, according to the evaluation result in step S108, the selection unit 18 outputs one of the spectral radiances E(1)1, E(1)2, E(1)3, and E(1)4 of the illumination light as the spectral radiance E(1) of the illumination light (step S109).
  • The tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E(1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z of the XYZ color system into coordinate values R(1), G(1), and B(1) defined in the RGB color system (step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging device 400 based on the ratio of the coordinate values R(1), G(1), and B(1) (step S114).
  • the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120).
  • the input unit 20 linearizes the second imaging data as necessary.
  • The estimation matrix calculation unit 22 calculates a second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W(2) calculated in step S122 to calculate the spectral reflectance f(2)(m,n) of the subject OBJ from the second imaging data g(2)RGB(m,n) (step S124).
  • The image data generation unit 24 uses the color matching function h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m,n) of the subject OBJ calculated in step S124 to generate image data g(OUT)XYZ(m,n) in which the color of the subject OBJ is reproduced (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)XYZ(m,n) generated in step S126 into image data g(OUT)RGB(m,n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)RGB(m,n).
  • The evaluation unit 19 compares the spectral radiance E(1)1 of the illumination light with the predetermined reference pattern E(1)1AVE, thereby calculating the similarity SM1 between the two (step S200). Similarly, the evaluation unit 19 compares the spectral radiance E(1)2 of the illumination light with the predetermined reference pattern E(1)2AVE to calculate the similarity SM2 between them (step S202), and compares the spectral radiance E(1)3 of the illumination light with the predetermined reference pattern E(1)3AVE to calculate the similarity SM3 between them (step S204).
  • the evaluation unit 19 extracts the one having the highest value among the similarities SM 1 , SM 2 , SM 3 calculated in steps S200, S202, S204 (step S206). Furthermore, the evaluation unit 19 determines whether or not the similarity extracted in step S206 is greater than or equal to a predetermined allowable value (step S208).
  • If the extracted similarity is greater than or equal to the allowable value, the evaluation unit 19 evaluates that the spectral radiance corresponding to the similarity extracted in step S206 has the highest estimation accuracy (step S210).
  • Otherwise, the evaluation unit 19 evaluates that the spectral radiance E(1)4 of the illumination light, rather than the spectral radiances E(1)1, E(1)2, and E(1)3, has the highest estimation accuracy (step S212).
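Putting steps S200 through S212 together, a compact sketch of the subroutine (reusing the similarity sketch above; the numeric allowable value is a placeholder, not taken from the patent) might look like:

```python
import numpy as np

def evaluation_subroutine(E_candidates, E_refs, allowable=0.95):
    """E_candidates = [E(1)1, E(1)2, E(1)3, E(1)4]; E_refs = [E(1)1AVE, E(1)2AVE, E(1)3AVE]."""
    sims = [similarity(E, ref) for E, ref in zip(E_candidates[:3], E_refs)]  # steps S200-S204
    best = int(np.argmax(sims))                     # step S206
    if sims[best] >= allowable:                     # step S208
        return E_candidates[best]                   # step S210
    return E_candidates[3]                          # step S212: fall back to E(1)4
```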
  • In either case, the processing thereafter proceeds to step S109 in FIG. 12.
<Operational effects of the present embodiment>
  • According to the present embodiment, the same operational effects as those of the above-described first embodiment can be obtained, and in addition the spectral radiance of the illumination light can be obtained with high estimation accuracy even by a user who has no prior knowledge of the light source. Therefore, even when the subject OBJ is imaged under various conditions, the estimation accuracy of the spectral radiance of the illumination light can be maintained.
  • FIG. 14 is a schematic configuration diagram of a computer that realizes an image processing apparatus 1 # according to a modification of the embodiment of the present invention.
  • The computer includes a computer main body 150 equipped with an FD (Flexible Disk) driving device 166 and a CD-ROM (Compact Disk-Read Only Memory) driving device 168, a monitor 152, a keyboard 154, and a mouse 156.
  • the computer main body 150 further includes a CPU (Central Processing Unit) 160 that is an arithmetic device, a memory 162, a fixed disk 164 that is a storage device, and a communication interface 170 that are connected to each other via a bus.
  • Image processing apparatus 1 # can be realized by CPU 160 executing software using computer hardware such as memory 162.
  • such software is stored in a recording medium such as the FD 166a or the CD-ROM 168a, or distributed via a network or the like.
  • Such software is read from the recording medium by the FD driving device 166 or the CD-ROM driving device 168 or received by the communication interface 170 and stored in the fixed disk 164. Further, it is read from the fixed disk 164 to the memory 162 and executed by the CPU 160.
  • the monitor 152 is a display unit for displaying information output by the CPU 160, and includes, for example, an LCD (Liquid Crystal Display), a CRT (Cathode Ray Tube), and the like.
  • the mouse 156 receives a command from a user corresponding to an operation such as click or slide.
  • the keyboard 154 receives a command from the user corresponding to the input key.
  • the CPU 160 is an arithmetic processing unit that executes various arithmetic operations by sequentially executing programmed instructions.
  • the memory 162 stores various types of information according to the program execution of the CPU 160.
  • the communication interface 170 converts the information output from the CPU 160 into, for example, an electrical signal and sends it to another device, and receives the electrical signal from the other device and converts it into information that can be used by the CPU 160.
  • Fixed disk 164 is a non-volatile storage device that stores programs executed by CPU 160 and predetermined data. In addition, other output devices such as a printer may be connected to the computer as necessary.
  • The program according to the present embodiment may be a program module that is provided as part of a computer operating system (OS) and that calls necessary modules in a predetermined arrangement at predetermined timings to execute processing. In that case, the program itself does not include those modules, and the processing is executed in cooperation with the OS. A program that does not include such modules can also be included in the program according to the present invention.
  • the program according to the present embodiment may be provided by being incorporated in a part of another program. Even in this case, the program itself does not include the module included in the other program, and the process is executed in cooperation with the other program.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Image Communication Systems (AREA)
  • Color Television Image Signal Generators (AREA)
  • Spectrometry And Color Measurement (AREA)

Abstract

An estimation matrix calculation unit (12) calculates a first estimation matrix (W(1)) based on an autocorrelation matrix (B) of the spectral radiance of a light source candidate that can be used to provide an illumination environment for a subject, a spectral transmittance (f(1)) of a diffusing member, and a spectral sensitivity (S) of an imaging device. A spectral radiance calculation unit (11) calculates the spectral radiance (E(1)) of the illumination light incident on the subject (OBJ) from imaging data (g(1)RGB) by using the first estimation matrix (W(1)) calculated by the estimation matrix calculation unit (12).

Description

Image processing apparatus and image processing method
 本発明は、被写䜓を撮像する際の圓該被写䜓に照射される照明光の分光攟射茝床を算出可胜な画像凊理装眮および画像凊理方法に関するものである。 The present invention relates to an image processing apparatus and an image processing method capable of calculating the spectral radiance of illumination light applied to a subject when the subject is imaged.
In recent years, techniques for accurately reproducing the color of a subject imaged under various lighting environments in an output device such as a display device or a printing device have been proposed.
 代衚的な技術ずしお、被写䜓の分光反射率反射スペクトルに基づくカラヌマネゞメント技術が知られおいる。この技術は、被写䜓の色を波長領域で扱うこずで実珟され、被写䜓における照明環境にかかわらず正確な色再珟が可胜ずなる。このような被写䜓の分光反射率に基づいた撮像凊理に぀いおは、“䞉宅掋䞀線、「分光画像凊理入門」、財団法人東京倧孊出版䌚、幎月日”にその原理的な方法が開瀺されおいる。 As a representative technique, a color management technique based on the spectral reflectance (reflection spectrum) of a subject is known. This technique is realized by handling the color of the subject in the wavelength region, and enables accurate color reproduction regardless of the illumination environment of the subject. An imaging method based on the spectral reflectance of such a subject is disclosed in “Yoichi Miyake,“ Introduction to Spectral Image Processing ”, The University of Tokyo Press, February 24, 2006”. Has been.
In order to estimate the spectral reflectance of the subject, it is necessary to acquire in advance the spectral radiance (illumination spectrum) of the illumination environment at the time of imaging, which is the premise of the estimation. This is because the spectral radiance from the subject is determined by the spectral radiance of the illumination light and the spectral reflectance of the subject, so unless the spectral radiance of the illumination light is known, the spectral reflectance of the subject cannot be accurately calculated from the imaging data obtained by imaging the subject.
The spectral radiance of the illumination light is also used for white balance adjustment of an imaging device. White balance adjustment is an operation of determining coefficients for mutually adjusting the levels of the luminance values output from the plurality of imaging elements constituting the imaging device. If the white balance is lost, accurate color reproduction cannot be performed; for example, when a white subject is imaged by the imaging device, a color different from the original white (for example, a reddish color) is output. White balance adjustment is disclosed in Japanese Patent Application Laid-Open Nos. 2001-057680 and 2005-328386.
Japanese Patent Laid-Open No. 2001-056780; JP 2005-328386 A; Yoichi Miyake, "Introduction to Spectral Image Processing", The University of Tokyo Press, February 24, 2006
Conventionally, the spectral radiance of illumination light has been measured exclusively with a dedicated measuring device such as a spectroradiometer. A spectroradiometer disperses light incident through an optical system with a diffraction grating, receives the dispersed light with an image sensor (a CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) sensor, or the like), and obtains a luminance value for each wavelength. To realize the color management techniques mentioned above, a measuring device for measuring the spectral radiance is therefore required in addition to an imaging device such as a still camera or a video camera, which raises the cost.
In addition, in order to measure the illumination light irradiating the subject, it is normally necessary to place a standard white plate with a known spectral reflectance near the subject and to measure the reflected light from the plate surface with the spectroradiometer, so that effort is required for the measurement in addition to the imaging itself.
Accordingly, the present invention has been made to solve these problems, and an object of the present invention is to provide an image processing apparatus and an image processing method capable of easily calculating the spectral radiance of the illumination light applied to a subject, using the imaging device used for imaging the subject.
According to an aspect of the present invention, there is provided an image processing apparatus capable of performing image processing on imaging data captured by an imaging device. The image processing apparatus includes an input unit that accepts first imaging data obtained by imaging, with the imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment, and a first calculation unit that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on an autocorrelation matrix of the spectral radiance of a light source candidate that can be used to provide the illumination environment, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
Preferably, the spectral radiance of the light source candidate is a characteristic value acquired in advance for each type of light source.
Preferably, the diffusing member is disposed on the optical axis of the imaging device, and the incident intensity of the diffusing member is expressed by a predetermined function value of the angle with respect to the optical axis.
More preferably, the function value is a cosine function of the angle with respect to the optical axis.
Preferably, the imaging device is configured to output, as the imaging data, coordinate values defined in the RGB color system. The image processing apparatus further includes a second calculation unit that calculates, using the spectral radiance of the illumination light and a color matching function, coordinate values in the RGB color system corresponding to the spectral radiance of the illumination light, and a third calculation unit that calculates the white balance in the imaging device based on the ratio of the coordinate values calculated by the second calculation unit.
Preferably, the image processing apparatus further includes a fourth calculation unit that calculates the spectral reflectance of the subject from second imaging data, obtained by imaging the subject with the imaging device under the illumination environment, using a second estimation matrix calculated based on the spectral radiance of the illumination light, the spectral sensitivity of the imaging device, and an autocorrelation matrix of the spectral reflectances of colors that can be included in the subject.
More preferably, the image processing apparatus further includes a generation unit that generates, based on the spectral reflectance of the subject calculated by the fourth calculation unit, image data that would be acquired if the subject were imaged under a predetermined illumination environment.
According to another aspect of the present invention, there is provided an image processing apparatus capable of performing image processing on imaging data captured by an imaging device. The image processing apparatus includes an input unit that accepts first imaging data obtained by imaging, with the imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment, a selection unit that selects, in response to an external command, one of a plurality of calculation matrices predetermined for each type of a plurality of light source candidates that can be used to provide the illumination environment, and a first calculation unit that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the calculation matrix selected by the selection unit, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device. Each of the calculation matrices is an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate.
According to still another aspect of the present invention, there is provided an image processing apparatus capable of performing image processing on imaging data captured by an imaging device. The image processing apparatus includes an input unit that accepts first imaging data obtained by imaging, with the imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment, a selection unit that selects, in response to an external command, one of a plurality of first estimation matrices calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment, and a first calculation unit that calculates the spectral radiance of the illumination light incident on the subject from the first imaging data, using the first estimation matrix selected by the selection unit. Each first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
According to still another aspect of the present invention, there is provided an image processing apparatus capable of performing image processing on imaging data captured by an imaging device. The image processing apparatus includes an input unit that accepts first imaging data obtained by imaging, with the imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment, a first calculation unit that calculates candidates for the spectral radiance of the illumination light incident on the subject from the first imaging data, using a plurality of first estimation matrices respectively calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment, and an evaluation unit that evaluates each of the calculated spectral radiance candidates by comparison with a predetermined reference pattern and outputs one of them as the spectral radiance of the illumination light under the illumination environment. Each first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
An image processing method according to still another aspect of the present invention includes the steps of: acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; and calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on an autocorrelation matrix of the spectral radiance of a light source candidate that can be used to provide the illumination environment, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
An image processing method according to still another aspect of the present invention includes the steps of: acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; selecting one of a plurality of calculation matrices predetermined for each type of a plurality of light source candidates that can be used to provide the illumination environment; and calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the selected calculation matrix, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device. Each of the calculation matrices is an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate.
An image processing method according to still another aspect of the present invention includes the steps of: acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; selecting one of a plurality of first estimation matrices calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using the selected first estimation matrix. Each first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
An image processing method according to still another aspect of the present invention includes the steps of: acquiring first imaging data by imaging, with an imaging device, at least part of the light incident on a subject through a diffusing member under an illumination environment; calculating candidates for the spectral radiance of the illumination light incident on the subject from the first imaging data, using a plurality of first estimation matrices respectively calculated based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment; and evaluating each of the calculated spectral radiance candidates by comparison with a predetermined reference pattern and outputting one of them as the spectral radiance of the illumination light under the illumination environment. Each first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
According to the present invention, it is possible to easily calculate the spectral radiance of the illumination light applied to the subject, using the imaging device for imaging the subject.
FIG. 1 is a functional configuration diagram of an image processing apparatus according to a first embodiment of the present invention. FIG. 2 is a diagram for explaining a method of acquiring the imaging data to be processed in the image processing apparatus according to the first embodiment of the present invention. FIG. 3 is a diagram for explaining the process of generating the autocorrelation matrix of spectral radiance according to the first embodiment of the present invention. FIG. 4 is a flowchart showing the overall processing procedure in the image processing apparatus according to the first embodiment of the present invention. FIG. 5 is a functional configuration diagram of an image processing apparatus according to a second embodiment of the present invention. FIG. 6 is a flowchart showing the overall processing procedure in the image processing apparatus according to the second embodiment of the present invention. FIG. 7 is a functional configuration diagram of an image processing apparatus according to a modification of the second embodiment of the present invention. FIG. 8 is a flowchart showing the overall processing procedure in the image processing apparatus according to the modification of the second embodiment of the present invention. FIG. 9 is a functional configuration diagram of the illumination spectrum estimation unit of an image processing apparatus according to a third embodiment of the present invention. FIG. 10 is a diagram for explaining the comparison process between the spectral radiance of the illumination light and the reference patterns performed by the evaluation unit. FIG. 11 is a diagram for explaining the similarity calculation process in FIG. 10. FIG. 12 is a flowchart showing the overall processing procedure in the image processing apparatus according to the third embodiment of the present invention. FIG. 13 is a flowchart showing the procedure of the evaluation subroutine shown in step S108 of FIG. 12. FIG. 14 is a schematic configuration diagram of a computer that realizes an image processing apparatus according to a modification of the embodiments of the present invention.
Explanation of symbols
Ax1, Ax2 optical axis; OBJ subject; 1, 1A, 1B image processing device; 10, 20 input unit; 11, 11A, 11B, 11C, 11D spectral radiance calculation unit; 12 estimation matrix calculation unit; 13, 13A light source data storage unit; 14 tristimulus value conversion unit; 15 coordinate conversion unit; 16 white balance calculation unit; 17 estimation matrix storage unit; 18 selection unit; 19 evaluation unit; 21 spectral reflectance calculation unit; 22 estimation matrix calculation unit; 23 spectral reflectance data storage unit; 24 image data generation unit; 25 coordinate conversion unit; 100, 100A, 100B, 100C illumination spectrum estimation unit; 150 computer main body; 152 monitor; 154 keyboard; 156 mouse; 162 memory; 164 fixed disk; 166 FD drive; 168 CD-ROM drive; 170 communication interface; 200 color reproduction unit; 300 light source; 400 imaging device; 402 diffusing member.
Embodiments of the present invention will be described in detail below with reference to the drawings. The same or corresponding parts in the drawings are denoted by the same reference numerals, and description thereof will not be repeated.
[Embodiment 1]
<Overall configuration>
FIG. 1 is a functional configuration diagram of an image processing apparatus 1 according to the first embodiment of the present invention.
 図を参照しお、画像凊理装眮は、埌述する撮像装眮によっお撮像された第撮像デヌタ および第撮像デヌタ に察しお、本実斜の圢態に係る画像凊理方法を実行可胜である。 Referring to FIG. 1, the image processing apparatus 1 includes first imaging data g (1) RGB (m, n) and second imaging data g (2) RGB (m, n) captured by an imaging apparatus described later. On the other hand, the image processing method according to the present embodiment can be executed.
More specifically, the image processing apparatus 1 includes an illumination spectrum estimation unit 100 and a color reproduction unit 200. The illumination spectrum estimation unit 100 calculates the spectral radiance E(1) (illumination spectrum) of the illumination light incident on the subject, using the first imaging data g(1)RGB(m,n). Subsequently, the color reproduction unit 200 calculates the spectral reflectance of the subject from the second imaging data g(2)RGB(m,n), using the calculated spectral radiance E(1). Furthermore, the color reproduction unit 200 outputs image data g(OUT)RGB(m,n) in which the color of the subject is reproduced based on the calculated spectral reflectance of the subject. The image data g(OUT)RGB(m,n) output from the color reproduction unit 200 is typically output to an output device (not shown) such as a display device (display) or a printing device (printer). Alternatively, it may be stored in a storage device (not shown).
The image processing apparatus 1 is typically realized by hardware, but part or all of it may be realized by software, as will be described later.
<Acquisition of imaging data>
FIG. 2 is a diagram for describing the method of acquiring the imaging data to be processed in the image processing apparatus 1 according to the first embodiment of the present invention. FIG. 2 shows a case where the subject OBJ is imaged under a predetermined illumination environment. FIG. 2(A) shows the procedure for acquiring the first imaging data g(1)RGB(m,n), and FIG. 2(B) shows the procedure for acquiring the second imaging data g(2)RGB(m,n).
First, the imaging device 400 is used to acquire (capture) the imaging data. The imaging device 400 is, for example, a digital still camera or a digital video camera, and has imaging elements (typically CCD or CMOS sensors) with spectral sensitivity characteristics in specific wavelength bands. Each imaging element includes a plurality of pixels arranged in a matrix and outputs, as imaging data, a luminance corresponding to the intensity of the light incident on each pixel. At this time, the luminance output from each imaging element has a value corresponding to its spectral sensitivity. A specific wavelength band that the imaging device can capture is referred to as a band; in the present embodiment, the case of using an imaging device 400 with three bands mainly having the spectral sensitivity characteristics of R (red), G (green), and B (blue) will be described. As the device structure, either a structure in which a plurality of types of imaging elements are formed on the same substrate or a structure in which the corresponding types of imaging elements are formed on separate substrates can be adopted. As a method of determining the spectral sensitivity characteristics of each band, the spectral sensitivities of the elements themselves may be made different, or elements having the same spectral sensitivity may be used with R, G, and B filters provided on the input light side of each element.
As described above, the imaging data output by the imaging device 400 is three-dimensional color information consisting of R, G, and B luminance values (typically 12 bits each: 0 to 4095 gradations). The imaging data output by the imaging device 400 is thus defined in the RGB color system. Hereinafter, (m,n) in the imaging data g(1)RGB(m,n) and g(2)RGB(m,n) denotes the coordinates of the corresponding pixel of the imaging elements of the imaging device 400. That is, the imaging data g(1)RGB(m,n), g(2)RGB(m,n) = [(luminance value detected by the R imaging element at coordinates (m,n)), (luminance value detected by the G imaging element at coordinates (m,n)), (luminance value detected by the B imaging element at coordinates (m,n))].
 図を参照しお、被写䜓には、䜕らかの光源が発する照明光が照射されおいるずする。このような光源によっお実珟される照明環境䞋においお、たず第撮像デヌタ を取埗する堎合には、撮像装眮を甚いお、光源から被写䜓に入射する光の少なくずも䞀郚を拡散郚材を介しお埗られる光すなわち、拡散郚材を透過した埌の光を撮像する。 Referring to FIG. 2A, it is assumed that illumination light emitted from some light source 300 is irradiated on the subject OBJ. In the illumination environment realized by such a light source 300, when first imaging data g (1) RGB (m, n) is first acquired, the imaging device 400 is used to enter the subject OBJ from the light source 300. The light obtained through the diffusing member 402 (that is, the light after passing through the diffusing member 402) is imaged at least part of the light to be transmitted.
More specifically, when the first imaging data g(1)RGB(m,n) is acquired, the imaging device 400 is arranged so that its optical axis Ax1 lies on one of the paths along which the illumination light enters the subject OBJ. Furthermore, the diffusing member 402 is disposed on this optical axis Ax1 between the imaging device 400 and the light source 300 (preferably in the immediate vicinity of the imaging device 400). The paths along which illumination light enters the subject OBJ include paths along which light from the light source 300 enters the subject OBJ directly and paths along which light from the light source 300 is reflected by a wall material or the like and enters the subject OBJ indirectly.
 拡散郚材は、撮像装眮で撮像される光を空間的に拡散、すなわち空間的に平均化するための郚材であり、代衚的に既知の分光透過率をも぀乳癜色の拡散板が甚いられる。代替的に、積分球などを甚いおもよい。このような拡散郚材を甚いるこずで、撮像装眮に入射する照明光の匷床分垃を䞀様化でき、これによっお、埌述する照明光の分光攟射茝床の掚定粟床を高めるこずができる。 The diffusion member 402 is a member for spatially diffusing the light imaged by the imaging device 400, that is, for spatially averaging, and a milky white diffusion plate having a known spectral transmittance is typically used. Alternatively, an integrating sphere or the like may be used. By using such a diffusing member 402, the intensity distribution of the illumination light incident on the imaging device 400 can be made uniform, thereby increasing the estimation accuracy of the spectral radiance of the illumination light described later.
 さらに、拡散郚材ずしお乳癜色の拡散板を甚いる堎合には、所定の入射角特性を有する拡散板䞀般的に、コサむンコレクタ、コサむンディフュヌザ、コサむンレセプタなどず称される。を甚いるこずが奜たしい。このような拡散板では、拡散郚材を通過した埌の光の入射匷床が、撮像装眮の光軞に察する角床立䜓角に぀いおの䜙匊関数コサむンで瀺される。このような䜙匊関数の入射角特性をも぀拡散板を甚いるこずで、䜕らの特別の挔算を実行するこずなく、単䜍面積圓たりに入射する照明光の゚ネルギヌ量分光攟射照床を反映した撮像デヌタを取埗できる。たた、入射角特性をもたない拡散板を甚いた堎合には、倖乱光を抑制するために、撮像装眮の芖野角を盞察的に小さくする必芁があるが、このような入射角特性を有する拡散板を甚いた堎合には、撮像装眮の芖野角を考慮するこずなく、光源からの照明光を撮像するこずもできる。 Further, when a milky white diffusion plate is used as the diffusion member 402, it is preferable to use a diffusion plate having a predetermined incident angle characteristic (generally referred to as a cosine collector, a cosine diffuser, a cosine receptor, or the like). . In such a diffusing plate, the incident intensity of light after passing through the diffusing member 402 is indicated by a cosine function (cosine) with respect to an angle (solid angle) with respect to the optical axis Ax1 of the imaging device 400. By using a diffuser with such an incident angle characteristic of a cosine function, imaging data that reflects the amount of illumination light energy (spectral irradiance) incident per unit area without performing any special calculations Can be obtained. In addition, when a diffuser plate having no incident angle characteristic is used, it is necessary to relatively reduce the viewing angle of the imaging device 400 in order to suppress disturbance light. In the case of using the diffusing plate, it is possible to image the illumination light from the light source 300 without considering the viewing angle of the imaging device 400.
 䞊述の手順に埓っお取埗された第撮像デヌタ は、照明環境䞋においお被写䜓に入射する照明光を反映した色情報を含むこずになる。 The first imaging data g (1) RGB (m, n) acquired according to the above procedure includes color information reflecting illumination light incident on the subject OBJ under the illumination environment.
 次に、図を参照しお、第撮像デヌタ は、図ず同じ撮像装眮を甚いお、被写䜓を撮像するこずで取埗される。このずき、撮像装眮の光軞䞊には、図の堎合ず異なり、拡散郚材が配眮されるこずはない。なお、第撮像デヌタ の取埗に䜿甚される撮像装眮ず、第撮像デヌタ の取埗に䜿甚される撮像装眮ずは、必ずしも同䞀のものを甚いる必芁はなく、少なくずも撮像玠子の分光感床が実質的に既知であれば、異なる撮像装眮を甚いおもよい。 Next, referring to FIG. 2B, the second imaging data g (2) RGB (m, n) is obtained by imaging the subject OBJ using the same imaging device 400 as in FIG. To be acquired. At this time, unlike the case of FIG. 2A, the diffusing member 402 is not disposed on the optical axis Ax2 of the imaging device 400. Note that the imaging device 400 used for acquiring the first imaging data g (1) RGB (m, n) and the imaging device 400 used for acquiring the second imaging data g (2) RGB (m, n). Are not necessarily the same, and different imaging devices 400 may be used as long as at least the spectral sensitivity of the imaging device is substantially known.
 たた、図においお第撮像デヌタ を撮像したずきの撮像装眮の光軞ず、図においお第撮像デヌタ を撮像するずきの撮像装眮の光軞ずは、互いに䞀臎させるこずが奜たしい。図においお取埗される第撮像デヌタ は、䞻ずしお、被写䜓からの反射光に䟝存しお定たる。この反射光は、被写䜓で反射されお光軞䞊を逆方向に䌝搬する光であり、この反射光を生じる照明光は、䞻ずしお、撮像装眮の光軞䞊を被写䜓偎に向けお䌝搬する。したがっお、この反射光を生じる照明光を第撮像デヌタ ずしお撮像するこずで、より適切な照明光の分光攟射茝床を算出するこずができる。 2A, the optical axis Ax1 of the imaging device 400 when imaging the first imaging data g (1) RGB (m, n), and the second imaging data g (2) in FIG. 2B. It is preferable to match the optical axis Ax2 of the imaging device 400 when imaging RGB (m, n). The second imaging data g (2) RGB (m, n) acquired in FIG. 2B is determined mainly depending on the reflected light from the subject OBJ. This reflected light is reflected by the subject OBJ and propagates in the opposite direction on the optical axis Ax2, and the illumination light that generates this reflected light is mainly directed toward the subject OBJ on the optical axis Ax2 of the imaging device 400. Propagate. Therefore, by capturing the illumination light that generates the reflected light as the first imaging data g (1) RGB (m, n), a more appropriate spectral radiance of the illumination light can be calculated.
<Calculation processing of the spectral radiance of the illumination light>
With reference to FIG. 1 again, the calculation of the spectral radiance E(1) (illumination spectrum) of the illumination light incident on the subject OBJ, executed by the illumination spectrum estimation unit 100, will now be described. The spectral radiance E(1) of the illumination light is in principle a continuous function of the wavelength λ, but in the present embodiment discrete values obtained by sampling the visible region (380 to 780 nanometers) at a prescribed wavelength interval (1 nanometer) are used as the spectral radiance E(1). For convenience of the matrix operations described below, the spectral radiance E(1) according to the present embodiment is a 401-row × 401-column matrix containing a total of 401 luminance values at the wavelengths λ = 380, 381, ..., 780. In this matrix representing the spectral radiance E(1), the luminance value at each wavelength is set in the corresponding diagonal element, and the off-diagonal elements are set to zero.
 照明スペクトル掚定郚は、入力郚ず、分光攟射茝床算出郚ず、掚定行列算出郚ず、光源デヌタ栌玍郚ずを含む。 The illumination spectrum estimation unit 100 includes an input unit 10, a spectral radiance calculation unit 11, an estimation matrix calculation unit 12, and a light source data storage unit 13.
 入力郚は、図に瀺すように、撮像装眮を甚いお、照明環境䞋においお被写䜓に入射する光の少なくずも䞀郚を拡散郚材を介しお撮像するこずで埗られた第撮像デヌタ を受入れる。さらに、入力郚は、第撮像デヌタ に基づいお、この第撮像デヌタ を代衚する撮像デヌタ を出力する。この撮像デヌタ は、の぀の茝床倀代衚倀からなる線圢化された色デヌタである。䞀䟋ずしお、入力郚は、第撮像デヌタ に含たれる茝床倀を平均化するためのロゞックを含み、第撮像デヌタ の各画玠における茝床倀をの別に平均し、この平均化した倀を撮像デヌタ ずしお出力する。 As shown in FIG. 2A, the input unit 10 is obtained by imaging at least a part of light incident on the subject OBJ through the diffusing member 402 using an imaging device 400 in an illumination environment. First imaging data g (1) RGB (m, n) is received. Further, the input unit 10 based on the first image data g (1) RGB (m, n) , the first imaging data g (1) RGB (m, n) image pickup data g (1) representing the RGB Is output. The imaging data g (1) RGB is linearized color data composed of three luminance values (representative values) of R, G, and B. As an example, the input unit 10, the first imaging data g (1) RGB (m, n) includes logic for averaging the luminance value included in the first imaging data g (1) RGB (m, n ) Is averaged separately for each of R, G, and B, and the averaged value (R, G, B) is output as imaging data g (1) RGB .
 なお、第撮像デヌタ に逆ガンマ特性非線圢性が䞎えられおいる堎合には、入力郚がこの逆ガンマ特性を打ち消すための凊理を行なうこずで、第撮像デヌタ を線圢化しおもよい。䞀般的に、衚瀺装眮では、入力信号レベルず実際に衚瀺される茝床レベルずの間は非線圢な関係ガンマ特性を有しおいる。このような衚瀺装眮における非線圢性を打ち消しお、人間の芖芚に適応した画像が衚瀺されるように、撮像装眮からは、圓該衚瀺装眮のガンマ特性ず逆の非線圢性逆ガンマ特性をも぀ような撮像デヌタが出力されるこずが倚い。このように、撮像デヌタに逆ガンマ特性が䞎えられおいる堎合には、以埌の凊理を正確に実行できないので、たずえば入力郚がこのような逆ガンマ特性を打ち消しお、線圢化された第撮像デヌタ を生成する。 If the first imaging data g (1) RGB (m, n) has an inverse gamma characteristic (non-linearity), the input unit 10 performs processing for canceling the inverse gamma characteristic. The first imaging data g (1) RGB (m, n) may be linearized. In general, a display device has a non-linear relationship (gamma characteristic) between an input signal level and an actually displayed luminance level. The imaging device 400 has a non-linearity (inverse gamma characteristic) opposite to the gamma characteristic of the display device so that an image adapted to human vision is displayed by canceling such non-linearity in the display device. Such imaging data is often output. As described above, when the inverse gamma characteristic is given to the imaging data, the subsequent processing cannot be executed accurately. For example, the input unit 10 cancels the inverse gamma characteristic and linearizes the first data. Imaging data g (1) RGB (m, n) is generated.
 䞀般的にガンマ特性および逆ガンマ特性はべき乗の関数ずしお衚わすこずができる。たずえば、撮像装眮における逆ガンマ倀をγずするず、以䞋のような挔算匏に埓っお、第撮像デヌタ を線圢化するこずができる。 In general, the gamma characteristic and the inverse gamma characteristic can be expressed as a power function. For example, when the inverse gamma value in the imaging apparatus 400 is γc, the first imaging data g (1) RGB (m, n) can be linearized according to the following arithmetic expression.
  ’  γ
 たた、このような線圢化凊理は、䞊述の撮像デヌタ を算出するための平均化凊理の実行前に行なう必芁がある。䞀方で、撮像装眮を構成する撮像玠子の画玠サむズが盞察的に倧きければ、䞊述の線圢化凊理には膚倧な挔算量が必芁ずなる。そのため、入力郚の挔算凊理胜力が十分に高い堎合には、䞊述のべき乗挔算を盎接的に実行しおもよいが、挔算凊理胜力に制限がある堎合には、ルックアップテヌブル を甚いるこずが有効である。このルックアップテヌブルは、入力される撮像デヌタが取り埗るすべおの茝床倀の各々に察応付けお、䞊述の倉換匏の結果を予め栌玍したデヌタテヌブルであり、この入力ず出力ずの察応関係を参照するだけで倉換埌の倀を取埗できるので、挔算量を倧幅に䜎枛できる。
g ′ (1) RGB (m, n) = g (1) RGB (m, n) 1 / γc
Further, such a linearization process needs to be performed before the execution of the averaging process for calculating the above-described imaging data g (1) RGB . On the other hand, if the pixel size of the image sensor that constitutes the image capturing apparatus 400 is relatively large, the above-described linearization process requires a huge amount of calculation. For this reason, when the arithmetic processing capability of the input unit 10 is sufficiently high, the above power calculation may be directly executed. However, when the arithmetic processing capability is limited, a lookup table (LUT: Look) is used. It is effective to use (Up Table). This lookup table is a data table in which the result of the above-described conversion formula is stored in advance in association with each of all the luminance values that can be taken by the input imaging data. Refer to the correspondence between this input and output. Since the converted value can be acquired simply by doing this, the amount of calculation can be greatly reduced.
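As an illustration of this step only (not part of the original disclosure), the following minimal Python/numpy sketch linearizes 12-bit imaging data with a lookup table and then averages each channel to obtain the representative data g(1)_RGB; the value of γc and the normalization of the codes to the range [0, 1] are assumptions made for the example.

import numpy as np

GAMMA_C = 2.2          # assumed inverse gamma value of the imaging device (illustrative only)
MAX_CODE = 4095        # 12-bit imaging data: gradations 0 to 4095

# Lookup table: the conversion g' = g^(1/gamma_c) precomputed for every possible code value,
# with the codes normalized to [0, 1] before the power is applied (an assumption of this sketch).
LUT = (np.arange(MAX_CODE + 1, dtype=np.float64) / MAX_CODE) ** (1.0 / GAMMA_C)

def linearize(raw):
    """Apply the lookup table to an (H, W, 3) array of integer codes."""
    return LUT[raw]

def representative_rgb(raw):
    """Linearize first, then average each channel over all pixels to obtain g(1)_RGB."""
    return linearize(raw).reshape(-1, 3).mean(axis=0)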
 分光攟射茝床算出郚は、埌述する掚定行列算出郚で算出された第掚定行列を甚いお、撮像デヌタ から被写䜓に入射する照明光の分光攟射茝床を算出する。より具䜓的には、分光攟射茝床算出郚は、第掚定行列ず撮像デヌタ ずの行列積によっお、照明光の分光攟射茝床を算出する。䞊述したように、本実斜の圢態では、所定の波長幅代衚的に、ナノメヌタ幅でサンプリングした行×列の分光攟射茝床を甚いるので、第掚定行列は、波長成分数×撮像装眮のバンド数、すなわち行×列の行列ずなる。 The spectral radiance calculation unit 11 uses the first estimation matrix W (1) calculated by the estimation matrix calculation unit 12 to be described later, and the spectral radiance of illumination light incident on the subject OBJ from the imaging data g (1) RGB. E (1) is calculated. More specifically, the spectral radiance calculation unit 11 calculates the spectral radiance E (1) of the illumination light based on the matrix product of the first estimation matrix W (1) and the imaging data g (1) RGB . As described above, in the present embodiment, the spectral radiance E (1) of 401 rows × 401 columns sampled with a predetermined wavelength width (typically 1 nanometer width ) is used, so the first estimation matrix W ( 1) is the number of wavelength components × the number of bands of the imaging device 400, that is, a matrix of 401 rows × 3 columns.
 掚定行列算出郚は、被写䜓における照明環境を提䟛するために甚いられ埗る光源候補の分光攟射茝床の自己盞関行列、拡散郚材の分光透過率、および撮像装眮の分光感床に基づいお、第掚定行列を算出する。以䞋、分光感床は行×列の行列、分光透過率は行×列の行列であるものずする。 The estimation matrix calculation unit 12 includes the autocorrelation matrix B of the spectral radiance of the light source candidates that can be used to provide the illumination environment in the subject OBJ, the spectral transmittance f (1) of the diffusing member 402, and the spectral of the imaging device 400. Based on the sensitivity S, a first estimation matrix W (1) is calculated. Hereinafter, it is assumed that the spectral sensitivity S is a matrix of 401 rows × 3 columns, and the spectral transmittance f (1) is a matrix of 401 rows × 3 columns.
 以䞋、第撮像デヌタ から分光攟射茝床を算出できる原理に぀いお説明する。 Hereinafter, the principle by which the spectral radiance E (1) can be calculated from the first imaging data g (1) RGB (m, n) will be described.
 撮像装眮に入射する拡散郚材を通過した埌の光スペクトルは、拡散郚材あるいは、被写䜓に照射される照明光の分光攟射茝床λず、拡散郚材の分光透過率λずの積に盞圓する。なお、拡散郚材の分光透過率λは、拡散郚材の党䜓に亘っお䞀定であるずする。そしお、撮像装眮から出力される第画像デヌタを代衚する撮像デヌタ の各成分倀 は、各撮像玠子の分光感床λをさらに乗じた䞊で、波長領域にわたっお光゚ネルギヌを積分したものに盞圓する。このような関係は、匏に瀺す関係匏ずしお衚わすこずができる。 The light (spectrum) that has passed through the diffusing member 402 incident on the imaging device 400 includes the spectral radiance E (1) (λ) of the illumination light applied to the diffusing member 402 (or the subject OBJ), and the diffusing member. This corresponds to the product of the spectral transmittance f (1) (λ) of 402. The spectral transmittance f (1) (λ) of the diffusing member 402 is assumed to be constant over the entire diffusing member 402. The imaging data g (1) RGB component values g (1) i (i = R, G, B) representing the first image data output from the imaging device 400 are the spectral sensitivities S of the imaging elements. This is equivalent to a product obtained by further multiplying i (λ) (i = R, G, B) and integrating the light energy over the wavelength region. Such a relationship can be expressed as a relational expression shown in Expression (1).
g(1)_i(m, n) = ∫ S_i(λ) · E(1)(λ) · f(1)(λ) dλ + n_i(m, n)   (i = R, G, B) ... (1)
 ここで、は、各撮像玠子に珟れる癜色ノむズなどによっお生じる加法性ノむズであり、撮像装眮の撮像玠子やレンズの特性、および照明環境などに䟝存する倀である。 Here, n i (m, n) is additive noise generated by white noise or the like appearing in each image sensor, and is a value depending on the characteristics of the image sensor and the lens of the imaging apparatus 400, the illumination environment, and the like.
 䞊述したように、本実斜の圢態では、所定の波長幅代衚的に、ナノメヌタ幅でサンプリングした行列挔算匏を甚いる。すなわち、匏の右蟺第項の積分匏を、各撮像玠子の各波長における感床を瀺す行列である分光感床ず、各波長における攟射茝床を瀺す行列である分光攟射茝床ず、各波長における透過率を瀺す行列である分光透過率ずの行列挔算により凊理する。なお、分光感床および分光透過率に぀いおは既知である。 As described above, in the present embodiment, a matrix arithmetic expression sampled with a predetermined wavelength width (typically 1 nanometer width) is used. That is, the integral expression of the first term on the right side of the equation (1) is expressed by the spectral sensitivity S that is a matrix indicating the sensitivity at each wavelength of each image sensor and the spectral radiance E 1 that is a matrix indicating the radiance at each wavelength. ) And spectral transmittance f (1) which is a matrix indicating the transmittance at each wavelength. The spectral sensitivity S and the spectral transmittance f (1) are already known.
 ここで、加法性ノむズは、䞀般的に十分に小さな倀であるので、匏から無芖するず、匏から次の行列挔算匏を導くこずができる。 Here, the additive noise n i (m, n) is generally a sufficiently small value, and therefore, if ignored from the expression (1), the following matrix operation expression can be derived from the expression (1).
  t・・ ・・・
 この匏に基づいお、分光攟射茝床を算出するこずを考える。具䜓的には、以䞋に瀺す匏に埓っお照明光の分光攟射茝床を算出する。
g (1) = St · E (1) · f (1) ... (2)
Consider calculating the spectral radiance E (1) based on the equation (2). Specifically, the spectral radiance E (1) of the illumination light is calculated according to the following equation (3).
  ・ ・・・
 匏においお、は第掚定行列である。第掚定行列は、以䞋に説明するりィナヌ掚定の手法によっお算出される。具䜓的には、第掚定行列は、システム行列を以䞋に瀺す匏ず定めた䞊で、匏を倉圢するこずで匏のように導出される。
f (1) = W (1) · g (1) (3)
In the equation (3), W (1) is a first estimation matrix. The first estimation matrix W (1) is calculated by the winner estimation method described below. Specifically, the first estimation matrix W (1) is derived as shown in equation (5) by modifying the equation (2) after defining the system matrix I as the following equation (4). The
  t×t ・・・
  ・・・・ ・・・
   䜆し、「×」は、行列芁玠同士の積を意味し、「」は、転眮行列を意味し、「」は、逆行列を意味する。
I = S t × f (1) t (4)
W (1) = B · I t · (I · B · I t ) −1 (5)
However, “×” means a product of matrix elements, “ t ” means a transposed matrix, and “ −1 ” means an inverse matrix.
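As a numerical illustration of equations (3) to (5) (not part of the original disclosure), a minimal sketch is shown below, assuming numpy arrays S (401 × 3 spectral sensitivity), f1 (401 × 3 spectral transmittance of the diffusing member, the same transmittance column repeated for each band), B (401 × 401 autocorrelation matrix), and g1_rgb (the three representative values of g(1)_RGB); the function names are illustrative.

import numpy as np

def first_estimation_matrix(S, f1, B):
    """Equations (4) and (5): Wiener estimation of the first estimation matrix W(1)."""
    I = (S * f1).T                                   # eq. (4): element-wise product, transposed (3 x 401)
    return B @ I.T @ np.linalg.inv(I @ B @ I.T)      # eq. (5): 401 x 3

def estimate_spectral_radiance(W1, g1_rgb):
    """Equation (3): the 401 sampled values of E(1); they form the diagonal of the matrix E(1)."""
    return W1 @ np.asarray(g1_rgb)                   # 401-element vector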
 匏においお、は、照明環境を提䟛するために甚いられ埗る光源候補の分光攟射茝床の自己盞関行列以䞋、「挔算行列」ずも称す。である。本実斜の圢態においおは、耇数の光源の候補に぀いおの分光攟射茝床を予め取埗しおおき、統蚈䞊の芖点から、各光源の分光攟射茝床ずの盞関性を利甚しお照明光の分光攟射茝床を掚定する。すなわち、光源の皮類毎に取埗された統蚈デヌタを予め甚意しおおき、この統蚈デヌタの特城に埓っお、照明光の分光攟射茝床を算出する。 In the equation (5), B is an autocorrelation matrix (hereinafter also referred to as “calculation matrix”) of spectral radiances of light source candidates that can be used to provide an illumination environment. In the present embodiment, spectral radiance for a plurality of light source candidates is acquired in advance, and from a statistical viewpoint, the spectral radiance of illumination light is used by utilizing the correlation with the spectral radiance of each light source. E (1) is estimated. That is, statistical data acquired for each type of light source is prepared in advance, and the spectral radiance E (1) of illumination light is calculated according to the characteristics of the statistical data.
 この分光攟射茝床の自己盞関行列は、照明光の分光攟射茝床を掚定するための基準ずなるので、照明環境を提䟛するために甚いられおいる可胜性の高い光源の皮類たずえば、蛍光灯、癜熱灯、キセノン灯、氎銀灯などの発光原理別に応じお、適切な統蚈デヌタを甚いるこずが奜たしい。 Since the autocorrelation matrix B of the spectral radiance serves as a reference for estimating the spectral radiance E (1) of the illumination light, the type of the light source that is likely to be used to provide the illumination environment ( For example, it is preferable to use appropriate statistical data according to the light emission principle of fluorescent lamps, incandescent lamps, xenon lamps, mercury lamps, and the like.
 このような光源の分光攟射茝床は、各光源に぀いお予め実隓的に取埗するこずもできるし、囜際照明委員䌚や   あるいは  によっお暙準化されおいる統蚈デヌタを甚いおもよい。 The spectral radiance of such a light source can be obtained experimentally in advance for each light source, or standardized by the International Commission on Illumination (CIE), ISO (International Organization for Standardization), or JIS (Japan Industrial Standards). Statistical data may be used.
 図は、この発明の実斜の圢態に埓う分光攟射茝床の自己盞関行列の生成凊理を説明するための図である。図を参照しお、たず、少なくずも皮類以䞊の光源候補光源光源の分光攟射茝床の倀を各芁玠ずする光源の矀行列を䜜成する。すなわち、光源≊≊の各サンプリング波長λ≊≊における成分倀攟射茝床をλずするず、各光源の成分倀λを行方向に配眮した光源の矀行列を䜜成する。 FIG. 3 is a diagram for describing processing for generating an autocorrelation matrix B of spectral radiance according to the first embodiment of the present invention. Referring to FIG. 3, first, a light source group matrix Est is generated with spectral radiance values of at least one type of light source candidates (light source 1 to light source N) as elements. That is, assuming that the component value (radiance) at each sampling wavelength λ j (1 ≩ j ≩ k) of the light source i (1 ≩ i ≩ N) is e i (λ j ), the component value e i (λ Create a light source group matrix Est with j ) arranged in the row direction.
 さらに、以䞋の挔算匏に埓っお、この矀行列に基づいお自己盞関行列を算出する。 Further, an autocorrelation matrix B is calculated based on the group matrix Est according to the following arithmetic expression.
  ・  ・・・
 なお、可芖光領域ナノメヌトルをナノメヌタ幅でサンプリングしお埗られる分光攟射茝床を算出するためには、同じサンプリング間隔芁玠数をも぀矀行列を甚いる必芁がある。埓っお、぀の光源の分光攟射茝床を瀺す行×列の行列を個分だけ結合した矀行列は、行×列の行列ずなり、この矀行列の自己盞関行列は、行×列の行列ずなる。
B = E st · E st t (6)
In order to calculate the spectral radiance E (1) obtained by sampling the visible light region (380 to 780 nm) with a width of 1 nm, a group matrix E st having the same sampling interval (number of elements ) is calculated. It is necessary to use it. Accordingly, a group matrix Est in which n matrixes of 401 rows × 1 column indicating the spectral radiance of one light source are combined is a 401 row × n column matrix, and the autocorrelation matrix of the group matrix Est is , 401 rows × 401 columns matrix.
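A minimal sketch of equation (6) (illustrative only), assuming light_source_spectra is a 401 × N numpy array whose columns are the candidate light-source spectra sampled at 1-nanometer intervals over 380 to 780 nanometers:

import numpy as np

def autocorrelation_matrix(light_source_spectra):
    """Equation (6): B = E_st · E_st^t, a 401 x 401 matrix."""
    E_st = np.asarray(light_source_spectra, dtype=np.float64)   # 401 x N group matrix
    return E_st @ E_st.T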
As the spectral radiance of a light source, the spectral radiance emitted by a single light source such as a fluorescent lamp or an incandescent lamp may be used, or the spectral radiance produced by a combination of plural types of light sources may be used. Outdoors, spectral radiances such as that of sunlight may also be combined. That is, in the present embodiment, to estimate the spectral radiance E(1) of the illumination light, it is preferable to use an autocorrelation matrix B obtained from the spectral radiances of the light sources that are likely to be used to provide the illumination environment.
For details of Wiener estimation, refer to the above-mentioned Yoichi Miyake (ed.), "Introduction to Spectral Image Processing," University of Tokyo Press, February 24, 2006.
Referring again to FIG. 1, the light source data storage unit 13 stores in advance the autocorrelation matrix B, the calculation matrix computed by the procedure described above.
According to equation (4) above, the estimation matrix calculation unit 12 calculates the system matrix I from the spectral transmittance f(1) of the diffusing member 402 and the spectral sensitivity S of the imaging device 400, both stored in advance, and then, according to equation (5) above, calculates the first estimation matrix W(1) from this system matrix I and the autocorrelation matrix B read from the light source data storage unit 13. Subsequently, the spectral radiance calculation unit 11 calculates the spectral radiance E(1) of the illumination light according to equation (3) above, based on the first estimation matrix W(1) from the estimation matrix calculation unit 12 and the imaging data g(1)_RGB from the input unit 10.
The spectral radiance E(1) of the illumination light calculated in this way is used in the white balance calculation processing and the color reproduction processing described later.
<White balance calculation processing>
The illumination spectrum estimation unit 100 further includes a tristimulus value conversion unit 14, a coordinate conversion unit 15, and a white balance calculation unit 16. These units calculate the white balance of the imaging device 400 based on the calculated spectral radiance E(1) of the illumination light. Based on this white balance value, white balance adjustment can be performed to adjust the levels of the R, G, and B luminance values output from the imaging elements of the imaging device 400 relative to one another.
The tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E(1) defined in the wavelength domain. The tristimulus values X, Y, and Z are the characteristic values that would be obtained if a human observer viewed the spectral radiance E(1) of the illumination environment in which the subject OBJ is imaged. More specifically, the tristimulus values X, Y, and Z of the XYZ color system for the spectral radiance E(1) of the illumination light are given by equation (7) below.
X(1) = ∫ h_R(λ) · E(1)(λ) dλ,  Y(1) = ∫ h_G(λ) · E(1)(λ) dλ,  Z(1) = ∫ h_B(λ) · E(1)(λ) dλ ... (7)
匏においお、λは等色関数であり、人間の芖芚感床特性に盞圓する関数である。この等色関数λは囜際照明委員䌚によっお芏定されおいる。 In the equation (7), h i (λ) (i = R, G, B) is a color matching function, which is a function corresponding to human visual sensitivity characteristics. This color matching function h i (λ) is defined by the International Commission on Illumination (CIE).
 䞊述しおいるように、分光攟射茝床を行×列の行列であるので、䞉刺激倀倉換郚は、匏に盞圓する挔算を、以䞋に瀺す行列挔算によっお実珟する。 As described above, since the spectral radiance E (1) is a matrix of 401 rows × 401 columns, the tristimulus value conversion unit 14 performs an operation corresponding to Equation (7) by a matrix operation shown below. Realize.
  䞉刺激倀・ ・・・
 ここで、行列は、等色関数λの各サンプリング波長における倀を芁玠ずする行×列の行列である。
Tristimulus values [X (1) , Y (1) , Z (1) ] = h t · E (1) (8)
Here, the matrix h is a matrix of 401 rows × 3 columns whose elements are values at the respective sampling wavelengths of the color matching function h i (λ).
 続いお、座暙倉換郚は、この䞉刺激倀を衚色系においお定矩される座暙倀に倉換する。より具䜓的には、座暙倉換郚は、以䞋に瀺す挔算匏に埓っお、衚色系においお定矩される座暙倀を算出する。 Subsequently, the coordinate conversion unit 15 converts the tristimulus values X (1) , Y (1) , Z (1) into coordinate values R (1) , G (1) , B ( defined in the RGB color system. Convert to 1) . More specifically, the coordinate conversion unit 15 calculates coordinate values R (1) , G (1) , and B (1) defined in the RGB color system according to the arithmetic expression shown below.
  
  
  
 ここで、は、被写䜓の枬色倀衚色系ず、実際に撮像装眮に蚘録される信号倀衚色系ずの察応関係を衚す行×列の倉換行列である。このような行列のこずを衚瀺装眮における⇔倉換行列ず呌ぶ。を行×列の行列ずしおずらえるず、埌述の匏にお、
  t・・
ずした堎合の逆行列に盞圓するこずになる。
R (1) = a 11 X (1) + a 12 Y (1) + a 13 Z (1)
G (1) = a 21 X (1) + a 22 Y (1) + a 23 Z (1)
B (1) = a 31 X (1) + a 32 Y (1) + a 33 Z (1)
Here, a 11 to a 33 are 3 rows × 3 columns representing the correspondence between the colorimetric values of the subject (XYZ color system) and the signal values (RGB color system) actually recorded in the imaging apparatus. Is the transformation matrix. Such a matrix is called an RGB⇔XYZ conversion matrix in the display device. If a 11 to a 33 are viewed as a 3 × 3 matrix, the following equation (15)
h t · E (1) · W (2) = M
This corresponds to the inverse matrix M− 1 .
 さらに、ホワむトバランス算出郚は、座暙倀の比に基づいお、撮像装眮におけるホワむトバランスを算出する。䞀般的に、ホワむトバランス調敎の完了ずは、が成立するこずであるため、この比率が厩れおいる堎合には、ホワむトバランス調敎が十分ではないずいえる。このような堎合には、撮像装眮を構成する各色の撮像玠子の出力ゲむンを独立に調敎するこずで、ホワむトバランスが調敎される。すなわち、の撮像玠子に乗じるべき調敎ゲむンは、ずなる。 Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) . In general, completion of white balance adjustment means that R (1) : G (1) : B (1) = 1: 1: 1 is established. It can be said that the white balance adjustment is not sufficient. In such a case, the white balance is adjusted by independently adjusting the output gains of the image pickup elements of the respective colors constituting the image pickup apparatus 400. That is, the adjustment gain to be multiplied by the R, G, and B image sensors is 1 / R (1) : 1 / G (1) : 1 / B (1) .
 したがっお、ホワむトバランス算出郚は、座暙倀の比、あるいはその逆比であるをホワむトバランスずしお出力する。このホワむトバランス算出郚から出力されるホワむトバランスは、ナヌザによる手動のゲむン調敎に甚いられる。あるいは、撮像装眮のゲむン調敎郚図瀺しないに䞎えられお、圓該ゲむン調敎郚が自動的に撮像装眮のゲむンを調敎するようにしおもよい。 Therefore, the white balance calculation unit 16 calculates the ratio of the coordinate values R (1) , G (1) , B (1) or the inverse ratio 1 / R (1) : 1 / G (1) : 1 / B (1) is output as white balance. The white balance output from the white balance calculation unit 16 is used for manual gain adjustment by the user. Alternatively, a gain adjustment unit (not shown) of the imaging device 400 may be provided so that the gain adjustment unit automatically adjusts the gain of the imaging device 400.
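Combining equation (8) with the subsequent conversion, the white balance calculation can be sketched as follows (illustrative only); e1 is assumed to be the 401-element vector of sampled spectral radiance values (the diagonal of E(1)), h the 401 × 3 matrix of color matching functions, and xyz_to_rgb the 3 × 3 matrix of coefficients a11 to a33, which is device dependent and assumed known.

import numpy as np

def white_balance_gains(e1, h, xyz_to_rgb):
    """Return the adjustment gains 1/R(1), 1/G(1), 1/B(1) for the R, G, B imaging elements."""
    xyz = h.T @ e1            # eq. (8): tristimulus values X(1), Y(1), Z(1)
    rgb = xyz_to_rgb @ xyz    # coordinate values R(1), G(1), B(1)
    return 1.0 / rgb          # gains, defined up to a common scale factor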
<Color reproduction processing>
Next, the processing that uses the spectral radiance E(1) calculated above to reproduce the colors of the subject OBJ from the second imaging data g(2)_RGB(m, n) and generate the image data g(OUT)_RGB(m, n) will be described.
The color reproduction unit 200 includes an input unit 20, a spectral reflectance calculation unit 21, an estimation matrix calculation unit 22, a spectral reflectance data storage unit 23, an image data generation unit 24, and a coordinate conversion unit 25.
As shown in FIG. 2B, the input unit 20 receives the second imaging data g(2)_RGB(m, n) obtained by imaging the subject OBJ with the imaging device 400. The input unit 20 then outputs the second imaging data g(2)_RGB(m, n) to the spectral reflectance calculation unit 21 for processing.
If the second imaging data g(2)_RGB(m, n) have been given an inverse gamma characteristic (nonlinearity), the input unit 20, like the input unit 10 described above, may perform processing to cancel this inverse gamma characteristic. That is, if the inverse gamma value of the imaging device 400 is γc, the second imaging data g(2)_RGB(m, n) can be linearized according to the following expression.
g'(2)_RGB(m, n) = g(2)_RGB(m, n)^(1/γc)
As described above, this linearization may also be executed using a lookup table.
 分光反射率算出郚は、埌述する掚定行列算出郚で算出された第掚定行列を甚いお、第撮像デヌタから被写䜓の分光反射率を算出する。さらに、分光反射率算出郚は、任意の照明環境䞋における被写䜓の色再珟デヌタである画像デヌタ を出力する。この色再珟デヌタずは、被写䜓の分光反射率に基づいお、任意の照明環境䞋においお圓該被写䜓がどのように芳枬されるかを挔算凊理によっお再珟したものである。 The spectral reflectance calculator 21 calculates the spectral reflectance of the subject OBJ from the second imaging data g (2) using the second estimation matrix W (2) calculated by the estimation matrix calculator 22 described later. Further, the spectral reflectance calculator 21 outputs image data g (OUT) RGB (m, n) that is color reproduction data of the subject OBJ under an arbitrary illumination environment. This color reproduction data is a reproduction of how the subject OBJ is observed under an arbitrary illumination environment based on the spectral reflectance of the subject OBJ.
 掚定行列算出郚は、被写䜓に含たれ埗る色の分光反射率から算出される自己盞関行列ず、照明スペクトル掚定郚によっお算出された照明光の分光攟射茝床ず、撮像装眮の分光感床ずに基づいお、第掚定行列を算出する。 The estimation matrix calculation unit 22 includes an autocorrelation matrix A calculated from spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E (1) of the illumination light calculated by the illumination spectrum estimation unit 100, and imaging. Based on the spectral sensitivity S of the apparatus 400, a second estimation matrix W (2) is calculated.
 以䞋、第撮像デヌタ から色再珟された画像デヌタ を生成する原理に぀いお説明する。 Hereinafter, the principle of generating image data g (OUT) RGB (m, n) that is color-reproduced from the second imaging data g (2) RGB (m, n) will be described.
 撮像装眮の座暙の画玠に入射する被写䜓からの光スペクトルは、被写䜓に照射される照明光の分光攟射茝床λず、被写䜓の圓該画玠に察応する䜍眮の分光反射率λずの積に盞圓する。そしお、撮像装眮から出力される第撮像デヌタ の各成分倀 は、各撮像玠子の分光感床λをさらに乗じた䞊で、波長領域にわたっお光゚ネルギヌを積分したものに盞圓する。このような関係は、匏に瀺す関係匏ずしお衚わすこずができる。 The light (spectrum) from the subject OBJ that enters the pixel at the coordinates (m, n) of the imaging device 400 is the spectral radiance E (1) (λ) of the illumination light irradiated on the subject OBJ and the subject OBJ. This corresponds to the product of the spectral reflectance f (2) (m, n; λ) at the position corresponding to the pixel. Then, each component value g (2) i (m, n) (i = R, G, B) of the second imaging data g (2) RGB (m, n) output from the imaging device 400 is obtained for each imaging. This corresponds to a product obtained by further multiplying the spectral sensitivity S i (λ) (i = R, G, B) of the element and integrating the light energy over the wavelength region. Such a relationship can be expressed as a relational expression shown in Expression (9).
g(2)_i(m, n) = ∫ S_i(λ) · E(1)(λ) · f(2)(m, n; λ) dλ + n_i(m, n)   (i = R, G, B) ... (9)
 ここで、は、各撮像玠子に珟れる癜色ノむズなどによっお生じる加法性ノむズであり、撮像装眮の撮像玠子やレンズの特性、および照明環境などに䟝存する倀である。 Here, n i (m, n) is additive noise generated by white noise or the like appearing in each image sensor, and is a value depending on the characteristics of the image sensor and the lens of the imaging apparatus 400, the illumination environment, and the like.
 䞊述したように、本実斜の圢態では、所定の波長幅代衚的に、ナノメヌタ幅でサンプリングした行列挔算匏を甚いる。すなわち、匏の右蟺第項の積分匏を、各撮像玠子の各波長における分光感床を瀺す行列である分光感床ず、各波長における分光攟射茝床を瀺す行列である分光攟射茝床ず、各波長における被写䜓の分光反射率を瀺す行列である分光反射率ずの行列挔算により実珟する。代衚的に、可芖光領域ナノメヌトルをナノメヌタ幅でサンプリングした堎合には、分光反射率は、各芁玠あたり行×列の行列ずなる。 As described above, in the present embodiment, a matrix arithmetic expression sampled with a predetermined wavelength width (typically 1 nanometer width) is used. That is, the integral expression of the first term on the right side of the equation (9) is expressed as follows: spectral sensitivity S that is a matrix indicating the spectral sensitivity at each wavelength of each image sensor, and spectral radiance E that is a matrix indicating the spectral radiance at each wavelength. This is realized by matrix calculation of (1) and spectral reflectance f (2) (m, n) which is a matrix indicating the spectral reflectance of the subject OBJ at each wavelength. Typically, when the visible light region (380 to 780 nanometers) is sampled with a width of 1 nanometer, the spectral reflectance f (2) (m, n) is expressed as a matrix of 401 rows × 1 column for each element. Become.
 ここで、加法性ノむズは、䞀般的に十分に小さな倀であるので、匏から無芖するず、匏から次の行列挔算匏を導くこずができる。 Here, the additive noise n i (m, n) is generally a sufficiently small value, and therefore, if ignored from the expression (9), the following matrix operation expression can be derived from the expression (9).
   t・・ ・・・
 この匏に基づいお、分光反射率を算出するこずを考える。具䜓的には、以䞋に瀺す匏に埓っお被写䜓の分光反射率を算出する。
g (2) RGB (m, n) = St · E (1) · f (2) (m, n) (10)
Consider calculating the spectral reflectance f (2) (m, n) based on the equation (10). Specifically, the spectral reflectance f (2) (m, n) of the subject OBJ is calculated according to the following equation (11).
  ・  ・・・
 匏においお、は第掚定行列である。第掚定行列は、䞊述した第掚定行列の算出ず同様に、りィナヌ掚定の手法によっお算出される。具䜓的には、第掚定行列は、システム行列を以䞋に瀺す匏ず定めた䞊で、匏を倉圢するこずで匏のように導出される。
f (2) (m, n) = W (2) · g (2) RGB (m, n) (11)
In equation (11), W (2) is the second estimation matrix. The second estimation matrix W (2) is calculated by the winner estimation method, similarly to the calculation of the first estimation matrix W (1) described above. Specifically, the second estimation matrix W (2) is derived as the following equation (13) by modifying the equation (11) after defining the system matrix H as the following equation (12). The
  ・ ・・・
  ・・・・ ・・・
   䜆し、「」は、転眮行列を意味し、「」は、逆行列を意味する。
H = S t · E (1) (12)
W (2) = A · H t · (H · A · H t ) −1 (13)
However, “ t ” means a transposed matrix, and “ −1 ” means an inverse matrix.
 匏においお、は、被写䜓に含たれ埗る色の分光反射率から算出される自己盞関行列であり、被写䜓の分光反射率を掚定するための基準ずなる。この自己盞関行列は、䞀䟋ずしお、においお暙準化されおいる分光反射率のデヌタベヌスである   を参照するこずで決定できる。あるいは、被写䜓の材質などが予め分っおいる堎合には、被写䜓自身の分光反射率を別の方法によっお予め枬定しおおき、自己盞関行列を決定しおもよい。 In equation (13), A is an autocorrelation matrix calculated from spectral reflectances of colors that can be included in the subject OBJ, and serves as a reference for estimating the spectral reflectance of the subject OBJ. As an example, the autocorrelation matrix A can be determined by referring to a standard object color sample (SOCS) which is a database of spectral reflectance standardized by ISO. Alternatively, when the material of the subject OBJ is known in advance, the spectral reflectance of the subject OBJ itself may be measured in advance by another method to determine the autocorrelation matrix A.
 この自己盞関行列は、図における自己盞関行列の生成凊理ず同様の凊理によっお生成される。この自己盞関行列の生成に甚いられる矀行列ずしおは、たずえば、耇数のカラヌ芋本からなるカラヌチャヌトの各色の分光反射率を甚いるこずができる。本実斜の圢態においおは、可芖光領域ナノメヌトルをナノメヌタ幅でサンプリングしお埗られる行×列の分光攟射茝床が甚いられるので、自己盞関行列も行×列の行列ずなる。 This autocorrelation matrix A is generated by a process similar to the process of generating the autocorrelation matrix B in FIG. As the group matrix used to generate the autocorrelation matrix A, for example, the spectral reflectance of each color of a color chart composed of a plurality of color samples can be used. In the present embodiment, a spectral radiance E (1) of 401 rows × 401 columns obtained by sampling the visible light region (380 to 780 nanometers) with a width of 1 nanometer is used. The matrix is 401 rows × 401 columns.
 この自己盞関行列は、分光反射率デヌタ栌玍郚に予め栌玍される。
 さらに、䞊述のようなりィナヌ掚定の手法に代えお、䞻成分分析の手法を甚いおもよい。
The autocorrelation matrix A is stored in advance in the spectral reflectance data storage unit 23.
Further, a principal component analysis technique may be used instead of the above-described winner estimation technique.
 以䞊のように、掚定行列算出郚は、匏および匏に埓っお、照明光の分光攟射茝床ず、撮像装眮の分光感床ず、被写䜓に含たれ埗る色の分光攟射茝床から埗られた自己盞関行列ずに基づいお、第掚定行列を算出する。そしお、分光反射率算出郚は、匏に埓っお、この第掚定行列を甚いお、第撮像デヌタ から被写䜓の分光反射率を算出する。 As described above, the estimation matrix calculation unit 22 can be included in the spectral radiance E (1) of the illumination light, the spectral sensitivity S of the imaging device 400, and the subject OBJ according to the equations (12) and (13). Based on the autocorrelation matrix A obtained from the spectral radiance of the color, a second estimation matrix W (2) is calculated. Then, the spectral reflectance calculation unit 21 uses the second estimation matrix W (2) according to the equation (11), and uses the second imaging data g (2) RGB (m, n) as the spectral reflectance of the subject OBJ. f (2) Calculate (m, n).
The spectral reflectance f(2)(m, n) calculated in this way is the essence of the color of the subject OBJ; by using the spectral reflectance f(2)(m, n), the colors of the subject OBJ can be reproduced regardless of the illumination environment under which it is observed.
That is, the tristimulus values X, Y, and Z of the XYZ color system obtained when a subject with spectral reflectance f(m, n; λ) is observed under an arbitrary spectral radiance E(λ) are given by equation (14) below.
X = ∫ h_R(λ) · E(λ) · f(m, n; λ) dλ,  Y = ∫ h_G(λ) · E(λ) · f(m, n; λ) dλ,  Z = ∫ h_B(λ) · E(λ) · f(m, n; λ) dλ ... (14)
匏においお、λは等色関数であり、人間の芖芚感床特性に盞圓する関数である。 In the equation (14), h i (λ) (i = R, G, B) is a color matching function, which is a function corresponding to human visual sensitivity characteristics.
 この匏においお、色再珟に䜿甚する分光攟射茝床λを任意に決定できるが、本実斜の圢態においおは、被写䜓の撮像時ず同じ照明環境䞋における色再珟を行なう堎合に぀いお䟋瀺する。 In this equation (14), the spectral radiance E (λ) used for color reproduction can be arbitrarily determined. However, in this embodiment, color reproduction is performed under the same illumination environment as when the subject OBJ is imaged. Illustrate.
 すなわち、画像デヌタ生成郚は、等色関数ず、被写䜓に入射する照明光の分光攟射茝床ず、被写䜓の分光反射率・ ずを甚いお、分光攟射茝床である照明環境䞋においお被写䜓の色再珟を行なった画像デヌタ を生成する。すなわち、画像デヌタ生成郚は、匏に瀺す挔算匏を実行する。 That is, the image data generation unit 24 uses the color matching function h, the spectral radiance E (1) of the illumination light incident on the subject OBJ, and the spectral reflectance f (2) (m, n) (= W (2) · g (2) RGB (m, n)) and image data g (OUT) XYZ (m ) in which the color reproduction of the subject OBJ is performed in the illumination environment having the spectral radiance E (1). , N). That is, the image data generation unit 24 executes the arithmetic expression shown in Expression (15).
   t・・・  ・・・
 ここで、画像デヌタ は、衚色系の座暙倀ずしお定矩される。
g (OUT) XYZ (m, n) = ht · E (1) · W (2) · g (2) RGB (m, n) (15)
Here, the image data g (OUT) XYZ (m, n) is defined as coordinate values of the XYZ color system.
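A minimal sketch of equation (15) under the same illustrative assumptions as above, with reflectance being the (H, W, 401) result of the previous step and xyz_to_rgb the 3 × 3 XYZ-to-RGB conversion matrix used by the coordinate conversion unit:

import numpy as np

def reproduce_image(h, E1, reflectance, xyz_to_rgb):
    """Equation (15): g(OUT)_XYZ = h^t · E(1) · f(2), followed by conversion to the RGB color system."""
    T = h.T @ E1                      # 3 x 401: maps a reflectance spectrum to XYZ under E(1)
    g_out_xyz = reflectance @ T.T     # (H, W, 3) image data g(OUT)_XYZ
    return g_out_xyz @ xyz_to_rgb.T   # (H, W, 3) image data g(OUT)_RGB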
 続いお、座暙倉換郚は、この画像デヌタ を衚色系においお定矩される画像デヌタ に倉換する。この座暙倉換郚により実行される座暙倉換凊理は、䞊述した座暙倉換郚における凊理ず同様であるので、詳现な説明は繰返さない。 Subsequently, the coordinate conversion unit 25 converts the image data g (OUT) XYZ (m, n) into image data g (OUT) RGB (m, n) defined in the RGB color system. Since the coordinate conversion process executed by coordinate conversion unit 25 is the same as the process in coordinate conversion unit 15 described above, detailed description will not be repeated.
 以䞊のような凊理によっお、第撮像デヌタ から被写䜓の色再珟デヌタである画像デヌタ が生成される。 Through the above processing, image data g (OUT) RGB (m, n), which is color reproduction data of the subject OBJ, is generated from the second imaging data g (2) RGB (m, n).
 なお、画像デヌタ がガンマ特性をも぀衚瀺装眮などぞ出力される堎合には、圓該出力先のガンマ特性を打ち消すための凊理を行なうこずが奜たしい。この堎合には、座暙倉換郚がガンマ特性を付䞎するような凊理を含んでいおもよい。このガンマ特性を付䞎する凊理は、衚瀺装眮のガンマ倀γずするず、生成される画像デヌタ に぀いお、このガンマ倀γに぀いおのべき乗を挔算するこずで実珟される。なお、䞊述した入力郚およびず同様に、ルックアップテヌブルを甚いるこずで、挔算量を倧幅に䜎枛するこずもできる。 When the image data g (OUT) RGB (m, n) is output to a display device having a gamma characteristic, it is preferable to perform a process for canceling the gamma characteristic of the output destination. In this case, the coordinate conversion unit 25 may include a process for giving a gamma characteristic. When the gamma value γd of the display device is used, the process for imparting the gamma characteristic is realized by calculating the power of the gamma value γd for the generated image data g (OUT) RGB (m, n). . Note that, similarly to the input units 10 and 20 described above, the amount of calculation can be significantly reduced by using a lookup table (LUT).
 䞊述の説明では、画像デヌタ生成郚が被写䜓の撮像時ず同じ照明環境䞋においお色再珟を行なう構成に぀いお䟋瀺したが、色再珟を行なう照明環境を異なるものずしおもよい。すなわち、画像デヌタ生成郚が画像デヌタ の生成に甚いる分光攟射茝床を任意に決定できる。 In the above description, the configuration in which the image data generation unit 24 performs color reproduction under the same illumination environment as when the subject OBJ is imaged is illustrated, but the illumination environment in which color reproduction is performed may be different. That is, the spectral radiance E used by the image data generation unit 24 to generate the image data g (OUT) XYZ (m, n) can be arbitrarily determined.
<Processing procedure>
The processing procedure in the image processing apparatus 1 according to the present embodiment is summarized as follows.
FIG. 4 is a flowchart showing the overall processing procedure in the image processing apparatus 1 according to the first embodiment of the present invention.
Referring to FIGS. 1 and 4, first, the input unit 10 receives the first imaging data g(1)_RGB(m, n) obtained by capturing, through the diffusing member 402, at least part of the light incident on the subject OBJ (step S100). Subsequently, the input unit 10 generates the imaging data g(1)_RGB representing the received first imaging data g(1)_RGB(m, n) (step S102). The input unit 10 linearizes the first imaging data as necessary.
Next, the estimation matrix calculation unit 12 calculates the first estimation matrix W(1) based on the autocorrelation matrix B of the spectral radiances of the light source candidates that can be used to provide the illumination environment of the subject OBJ, the spectral transmittance f(1) of the diffusing member 402, and the spectral sensitivity S of the imaging device 400 (step S104). Subsequently, the spectral radiance calculation unit 11 calculates the spectral radiance E(1) of the illumination light incident on the subject OBJ from the imaging data g(1)_RGB, using the first estimation matrix W(1) calculated in step S104 (step S106).
Next, the tristimulus value conversion unit 14 calculates the tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E(1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z of the XYZ color system into the coordinate values R(1), G(1), B(1) defined in the RGB color system (step S112). Furthermore, the white balance calculation unit 16 calculates the white balance of the imaging device 400 based on the ratio of the coordinate values R(1), G(1), B(1) (step S114).
Meanwhile, the input unit 20 receives the second imaging data g(2)_RGB(m, n) obtained by imaging the subject OBJ with the imaging device 400 under the illumination environment (step S120). The input unit 20 linearizes the second imaging data as necessary.
Next, the estimation matrix calculation unit 22 calculates the second estimation matrix W(2) based on the autocorrelation matrix A calculated from the spectral reflectances of the colors that the subject OBJ can contain, the spectral radiance E(1) of the illumination light calculated by the illumination spectrum estimation unit 100, and the spectral sensitivity S of the imaging device 400 (step S122). Subsequently, the spectral reflectance calculation unit 21 calculates the spectral reflectance f(2)(m, n) of the subject OBJ from the second imaging data g(2), using the second estimation matrix W(2) calculated in step S122 (step S124). Furthermore, the image data generation unit 24 generates the image data g(OUT)_XYZ(m, n) in which the colors of the subject OBJ are reproduced, using the color matching functions h, the spectral radiance E(1) of the illumination light incident on the subject OBJ, and the spectral reflectance f(2)(m, n) of the subject OBJ calculated in step S124 (step S126). Further, the coordinate conversion unit 25 converts the image data g(OUT)_XYZ(m, n) generated in step S126 into the image data g(OUT)_RGB(m, n) defined in the RGB color system (step S128), and outputs the converted image data g(OUT)_RGB(m, n).
<Operational effects of the present embodiment>
According to the first embodiment of the present invention, the spectral radiance of the illumination light irradiating the subject OBJ can be calculated using the imaging device used to image the subject OBJ. The spectral radiance can therefore be obtained easily, without using a dedicated measuring instrument for measuring the spectral radiance of the illumination light.
Furthermore, based on the spectral radiance of the illumination light calculated in this way, the spectral reflectance of the subject OBJ can be estimated accurately, and the colors that would be captured (observed) under the illumination environment in question can then be reproduced appropriately.
In addition, since the white balance of the imaging device can be adjusted appropriately based on the spectral radiance of the illumination light, more accurate color reproduction can be realized without being affected by variations in the characteristics of the imaging elements.
Although the above embodiment illustrates a configuration in which the three processes of "calculation of the spectral radiance of the illumination light," "white balance calculation," and "color reproduction" are realized in a single image processing apparatus, the problem addressed by the present invention can be solved by any apparatus capable of executing at least the "calculation of the spectral radiance of the illumination light."
[Embodiment 2]
The first embodiment described above illustrates a configuration in which a single autocorrelation matrix B is used to estimate the spectral radiance of the illumination light irradiating the subject OBJ. It is known, however, that the spectral radiance (spectrum) of illumination light varies greatly with the type of light source, because the emission spectrum (line spectrum) has various characteristics specific to the emission principle of the light source. It is therefore preferable to selectively use an autocorrelation matrix B generated in advance for each type of light source, according to the illumination environment under which the subject OBJ is imaged.
Therefore, the second embodiment illustrates a configuration in which a plurality of autocorrelation matrices, one for each type (category) of light source, are stored, and the user can select the one appropriate to the illumination environment under which the subject OBJ is imaged.
<Overall configuration>
FIG. 5 is a functional configuration diagram of an image processing apparatus 1A according to the second embodiment of the present invention.
Referring to FIG. 5, the image processing apparatus 1A is obtained by providing an illumination spectrum estimation unit 100A in place of the illumination spectrum estimation unit 100 of the image processing apparatus 1 shown in FIG. 1. The color reproduction unit 200 is the same as the color reproduction unit 200 of the image processing apparatus 1 shown in FIG. 1, so its detailed description will not be repeated.
The illumination spectrum estimation unit 100A is obtained by providing a light source data storage unit 13A in place of the light source data storage unit 13 of the illumination spectrum estimation unit 100 shown in FIG. 1. The other units are the same as in the first embodiment, so their detailed description will not be repeated.
 光源デヌタ栌玍郚は、照明環境を提䟛するために甚いられ埗る皮類の光源候補毎に予め定められた挔算行列である、自己盞関行列・・・を予め栌玍する。そしお、光源デヌタ栌玍郚は、ナヌザなどからの倖郚指什に応じお、予め栌玍する自己盞関行列・・・のうち、遞択されたものを掚定行列算出郚ぞ出力する。 The light source data storage unit 13A stores in advance an autocorrelation matrix B 1 , B 2 ,..., B M, which is a predetermined calculation matrix for each of M types of light source candidates that can be used to provide an illumination environment. Store. Then, the light source data storage unit 13A selects the selected one of the autocorrelation matrices B 1 , B 2 ,..., B M stored in advance in accordance with an external command from the user or the like. Output to.
 以䞋、この光源デヌタ栌玍郚に栌玍される耇数の自己盞関行列に぀いお説明する。 Hereinafter, a plurality of autocorrelation matrices stored in the light source data storage unit 13A will be described.
 たずえば、䞀般的な蛍光灯の分光照射茝床スペクトルは、その䞭に封入されおいる氎銀などの茝線スペクトルに盞圓する波長にピヌクを有するような波圢を有する。䞀方、癜熱灯の分光照射茝床スペクトルには、その発光原理䞊、ピヌクが存圚しない。このように、光源候補の分光照射茝床スペクトルは、その皮類毎に異なったものずなる。そのため、被写䜓に入射する照明光の分光攟射茝床照明スペクトルを掚定するためには、基準ずなる自己盞関行列を適切に遞択する必芁がある。 For example, the spectral irradiation luminance (spectrum) of a general fluorescent lamp has a waveform having a peak at a wavelength corresponding to an emission line spectrum of mercury or the like enclosed therein. On the other hand, there is no peak in the spectral illumination luminance (spectrum) of the incandescent lamp due to its emission principle. In this way, the spectral irradiation brightness (spectrum) of the light source candidate is different for each type. Therefore, in order to estimate the spectral radiance E (1) (illumination spectrum) of the illumination light incident on the subject OBJ, it is necessary to appropriately select the autocorrelation matrix B serving as a reference.
 䞀方、ある皋床の予備知識を有するナヌザは、撮像装眮を甚いお被写䜓を撮像する際に、どのような光源による照明環境䞋であるかを刀断するこずができる。たずえば、被写䜓の撮像堎所は屋内であるか、あるいは屋倖であるかや、撮像堎所が屋内であれば、光源ずしお蛍光灯が甚いられおいるか、あるいは癜熱灯が甚いられおいるかずいった刀断は可胜である。そのため、このようなナヌザが刀断できる皋床の分類の䞋に、光源の皮類別に耇数の自己盞関行列を予め甚意しおおき、ナヌザが被写䜓の撮像状況に応じお任意に遞択できれば、照明光の分光攟射茝床照明スペクトルの掚定粟床を高めるこずができる。 On the other hand, a user having a certain level of prior knowledge can determine what light source is used in the illumination environment when the subject OBJ is imaged using the imaging device 400. For example, whether the subject OBJ is imaged indoors or outdoors, or if the imaged location is indoors, it is possible to determine whether a fluorescent light or an incandescent light is used as the light source. It is. Therefore, if a plurality of autocorrelation matrices are prepared in advance for each type of light source under such a classification that can be determined by the user and the user can arbitrarily select according to the imaging state of the subject OBJ, The estimation accuracy of the spectral radiance E (1) (illumination spectrum) can be increased.
 そこで、本実斜の圢態に埓う光源デヌタ栌玍郚は、䞀䟋ずしお、「蛍光灯」、「癜熱灯」、「キセノン灯」、「氎銀灯」、「倪陜光」ずった皮類毎に耇数の自己盞関行列・・・を予め栌玍しおおり、ナヌザなどによる遞択指什に応じお、これらのうち察応するものを自己盞関行列ずしお掚定行列算出郚ぞ出力する。すなわち、自己盞関行列は、「蛍光灯」の分光攟射茝床の統蚈デヌタのみから生成し、自己盞関行列は、「癜熱灯」の分光攟射茝床の統蚈デヌタのみから生成するずいった具合である。 Therefore, as an example, the light source data storage unit 13A according to the present embodiment includes a plurality of autocorrelation matrices for each type of “fluorescent lamp”, “incandescent lamp”, “xenon lamp”, “mercury lamp”, and “sunlight”. B 1 , B 2 ,..., B M are stored in advance, and in response to a selection command SEL by a user or the like, the corresponding one is output as an autocorrelation matrix B to the estimation matrix calculation unit 12. That is, the autocorrelation matrix B 1 is generated only from the statistical data of the spectral radiance of “fluorescent lamp”, and the autocorrelation matrix B 2 is generated only from the statistical data of the spectral radiance of “incandescent lamp”. is there.
 掚定行列算出郚は、この光源デヌタ栌玍郚から受けた自己盞関行列に基づいお、照明光の分光攟射茝床を掚定する。 The estimation matrix calculation unit 12 estimates the spectral radiance E (1) of the illumination light based on the autocorrelation matrix B received from the light source data storage unit 13A.
To increase the estimation accuracy of the spectral radiance E(1) of the illumination light, it is preferable to store in advance autocorrelation matrices that classify the light source types more finely. Furthermore, an autocorrelation matrix may be generated not only for a single light source but also, for example, based on the spectral radiance that results when a "fluorescent lamp" and an "incandescent lamp" are combined. In other words, it is preferable to store in advance in the light source data storage unit 13A autocorrelation matrices generated based on the various spectral radiances that may be assumed as illumination environments when the subject OBJ is imaged.
The other processes are basically the same as those in the first embodiment described above, and therefore detailed description will not be repeated.
<Processing procedure>
FIG. 6 is a flowchart showing the overall processing procedure in image processing apparatus 1A according to the second embodiment of the present invention. Of the steps in the flowchart shown in FIG. 6, steps having the same content as steps in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
 図および図を参照しお、たず、入力郚が、被写䜓に入射する光の少なくずも䞀郚を拡散郚材を介しお撮像した第撮像デヌタ を受入れるステップ。続いお、入力郚は、受入れた第撮像デヌタ を代衚する撮像デヌタ を生成するステップ。なお、入力郚は、必芁に応じお、第撮像デヌタを線圢化する。 With reference to FIGS. 5 and 6, first, the input unit 10 captures at least part of the light incident on the subject OBJ through the diffusion member 402. First imaging data g (1) RGB (m, n) Is accepted (step S100). Subsequently, the input unit 10 generates imaging data g (1) RGB representing the received first imaging data g (1) RGB (m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
 次に、光源デヌタ栌玍郚が、予め栌玍する自己盞関行列・・・のうち、遞択指什に応じお぀の自己盞関行列を自己盞関行列ずしお掚定行列算出郚ぞ出力するステップ。その埌、掚定行列算出郚が、光源デヌタ栌玍郚からの自己盞関行列、拡散郚材の分光透過率、撮像装眮の分光感床に基づいお、第掚定行列を算出するステップ。続いお、分光攟射茝床算出郚が、ステップで算出された第掚定行列を甚いお、撮像デヌタ から被写䜓に入射する照明光の分光攟射茝床を算出するステップ。 Next, among the autocorrelation matrices B 1 , B 2 ,..., B M stored in advance by the light source data storage unit 13A, one autocorrelation matrix is estimated as an autocorrelation matrix B according to the selection command SEL. It outputs to the calculation part 12 (step S103). Thereafter, the estimation matrix calculator 12 determines the first estimation matrix W ( based on the autocorrelation matrix B from the light source data storage 13 </ b> A, the spectral transmittance f (1) of the diffusing member 402, and the spectral sensitivity S of the imaging device 400. 1) is calculated (step S104). Subsequently, the spectral radiance calculation unit 11 uses the first estimation matrix W (1) calculated in step S104, and the spectral radiance E ( 1) of the illumination light incident on the subject OBJ from the imaging data g (1) RGB. 1) is calculated (step S106).
 次に、䞉刺激倀倉換郚が、分光攟射茝床から衚色系の䞉刺激倀を算出するステップ。続いお、座暙倉換郚が、衚色系の䞉刺激倀を、衚色系においお定矩される座暙倀に倉換するステップ。さらに、ホワむトバランス算出郚が、座暙倀の比に基づいお、撮像装眮におけるホワむトバランスを算出するステップ。 Next, the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z in the XYZ color system to coordinate values R (1) , G (1) , and B (1) defined in the RGB color system. (Step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) (step S114).
 䞀方、入力郚が、照明環境䞋においお撮像装眮により被写䜓を撮像するこずで埗られる第撮像デヌタ を受入れるステップ。なお、入力郚は、必芁に応じお、第撮像デヌタを線圢化する。 On the other hand, the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120). The input unit 20 linearizes the second imaging data as necessary.
 次に、掚定行列算出郚が、被写䜓に含たれ埗る色の分光反射率から算出される自己盞関行列、照明スペクトル掚定郚によっお算出された照明光の分光攟射茝床、撮像装眮の分光感床に基づいお、第掚定行列を算出するステップ。続いお、分光反射率算出郚が、ステップで算出された第掚定行列を甚いお、第撮像デヌタから被写䜓の分光反射率を算出するステップ。さらに、画像デヌタ生成郚が、等色関数、被写䜓に入射する照明光の分光攟射茝床、ステップで算出さされた被写䜓の分光反射率を甚いお、被写䜓の色再珟を行なった画像デヌタ を生成するステップ。さらに、座暙倉換郚が、ステップにおいお生成された画像デヌタ を衚色系においお定矩される画像デヌタ に倉換しステップ、この倉換埌の画像デヌタ を出力する。 Next, the estimation matrix calculation unit 22 calculates the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E (1) of the illumination light calculated by the illumination spectrum estimation unit 100, Based on the spectral sensitivity S of the imaging device 400, a second estimation matrix W (2) is calculated (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W (2) calculated in step S122 to calculate the spectral reflectance f (2) (m ) of the subject OBJ from the second imaging data g (2). , N) is calculated (step S124). Further, the image data generating unit 24 uses the color matching function h, the spectral radiance E (1) of the illumination light incident on the subject OBJ, and the spectral reflectance f (2) (m, ) of the subject OBJ calculated in step S124. n) is used to generate image data g (OUT) XYZ (m, n) in which color reproduction of the subject OBJ is performed (step S126). Further, the coordinate conversion unit 25 converts the image data g (OUT) XYZ (m, n) generated in step S126 into image data g (OUT) RGB (m, n) defined in the RGB color system. (Step S128), the converted image data g (OUT) RGB (m, n) is output.
<Operational effects of the present embodiment>
According to the second embodiment of the present invention, the same operational effects as those of the first embodiment described above can be obtained, and in addition, by selecting an appropriate autocorrelation matrix according to the imaging situation of the subject OBJ and the like, the spectral radiance of the illumination light can be estimated more accurately.
 実斜の圢態の倉圢䟋
 䞊述の実斜の圢態においおは、ナヌザなどによる遞択指什に応じお、耇数の自己盞関行列のうちいずれか぀が遞択され、さらに、この遞択された自己盞関行列に基づいお、第掚定行列が生成される構成に぀いお䟋瀺した。この第掚定行列の生成には、拡散郚材の分光透過率および撮像装眮の分光感床が甚いられるが、これらの倀は、撮像装眮および拡散郚材が亀換されない限り䞍倉である。そこで、実斜の圢態の倉圢䟋ずしお、耇数の自己盞関行列・・・からそれぞれ算出される耇数の第掚定行列  ・・・ を予め算出しおおく構成に぀いお䟋瀺する。
[Modification of Embodiment 2]
In the above-described second embodiment, any one of a plurality of autocorrelation matrices is selected in response to a selection command SEL by a user or the like, and the first estimation matrix is further based on the selected autocorrelation matrix. The configuration in which W (1) is generated has been illustrated. The first estimation matrix W (1) is generated using the spectral transmittance f (1) of the diffusing member 402 and the spectral sensitivity S of the imaging device 400. These values are used for the imaging device 400 and the diffusing member 402. As long as is not exchanged. Therefore, as a modification of the second embodiment, a plurality of auto-correlation matrix B 1, B 2, · · ·, a plurality of are calculated from B M first estimation matrix W (1) 1, W ( 1) 2 ,..., W (1) An example of a configuration in which m is calculated in advance will be described.
<Overall configuration>
FIG. 7 is a functional configuration diagram of an image processing apparatus 1B according to the modification of the second embodiment of the present invention.
 図を参照しお、画像凊理装眮は、図に瀺す画像凊理装眮においお、照明スペクトル掚定郚に代えお、照明スペクトル掚定郚を蚭けたものである。䞀方、色再珟郚に぀いおは、図に瀺す画像凊理装眮ず同様であるので、詳现な説明は繰返さない。 Referring to FIG. 7, the image processing apparatus 1 </ b> B includes an illumination spectrum estimation unit 100 </ b> B instead of the illumination spectrum estimation unit 100 in the image processing apparatus 1 shown in FIG. 1. On the other hand, color reproduction unit 200 is the same as that of image processing apparatus 1 shown in FIG. 1, and therefore detailed description will not be repeated.
 照明スペクトル掚定郚は、図に瀺す照明スペクトル掚定郚においお、掚定行列算出郚および光源デヌタ栌玍郚に代えお、掚定行列栌玍郚を蚭けたものである。その他の郚䜍に぀いおは、実斜の圢態ず同様であるので、詳现な説明は繰返さない。 The illumination spectrum estimation unit 100B includes an estimation matrix storage unit 17 instead of the estimation matrix calculation unit 12 and the light source data storage unit 13 in the illumination spectrum estimation unit 100 shown in FIG. Since other parts are the same as those in the first embodiment, detailed description will not be repeated.
 掚定行列栌玍郚は、照明環境を提䟛するために甚いられ埗る耇数の光源候補の分光攟射茝床に基づいお予め算出される第掚定行列  ・・・ を予め栌玍する。そしお、掚定行列栌玍郚は、ナヌザなどからの倖郚指什に応じお、予め栌玍する第掚定行列  ・・・ のうち、遞択されたものを分光攟射茝床算出郚ぞ出力する。 The estimation matrix storage unit 17 includes first estimation matrices W (1) 1 , W (1) 2 ,... That are calculated in advance based on spectral radiances of a plurality of light source candidates that can be used to provide an illumination environment. ., W (1) M is stored in advance. And the estimation matrix storage part 17 respond | corresponds to the external instruction | command from a user etc. among 1st estimation matrix W (1) 1 , W (1) 2 , ..., W (1) M stored beforehand. The selected one is output to the spectral radiance calculation unit 11.
 この第掚定行列  ・・・ は、実斜の圢態に埓う画像凊理装眮の光源デヌタ栌玍郚に栌玍される・・・のからそれぞれ算出される。第掚定行列  ・・・ の算出に際しお、既知の拡散郚材の分光透過率および撮像装眮の分光感床が甚いられる。 The first estimation matrix W (1) 1, W ( 1) 2, ···, W (1) M is, B 1 is stored in the source data storage section 13A of the image processing apparatus 1A according to the second embodiment, Calculated from B 2 ,..., B M. First estimation matrix W (1) 1, W ( 1) 2, ···, W (1) when calculating the M, the spectral sensitivity of the known spectral transmittance f (1) of the diffusion member 402 and the image pickup apparatus 400 S Is used.
 なお、実斜の圢態の倉圢䟋では、撮像装眮および拡散郚材の少なくずも䞀方が亀換された堎合には、光源デヌタ栌玍郚に栌玍される・・・を再床蚈算する必芁がある。 In the modification of the second embodiment, when at least one of the imaging device 400 and the diffusing member 402 is replaced, B 1 , B 2 ,..., B M stored in the light source data storage unit 13A. Need to be calculated again.
The other processes are basically the same as those in the first or second embodiment described above, and therefore detailed description will not be repeated.
<Processing procedure>
FIG. 8 is a flowchart showing the overall processing procedure in image processing apparatus 1B according to the modification of the second embodiment of the present invention. Of the steps in the flowchart shown in FIG. 8, steps having the same content as steps in the flowchart shown in FIG. 4 are denoted by the same reference numerals.
 図および図を参照しお、たず、入力郚が、被写䜓に入射する光の少なくずも䞀郚を拡散郚材を介しお撮像した第撮像デヌタ を受入れるステップ。続いお、入力郚は、受入れた第撮像デヌタ を代衚する撮像デヌタ を生成するステップ。なお、入力郚は、必芁に応じお、第撮像デヌタを線圢化する。 With reference to FIGS. 7 and 8, first, the input unit 10 captures first imaging data g (1) RGB (m, n) obtained by imaging at least a part of the light incident on the subject OBJ through the diffusing member 402. Is accepted (step S100). Subsequently, the input unit 10 generates imaging data g (1) RGB representing the received first imaging data g (1) RGB (m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
 次に、掚定行列栌玍郚が、予め栌玍する第掚定行列  ・・・ のうち、遞択指什に応じお぀の第掚定行列を第掚定行列ずしお分光攟射茝床算出郚ぞ出力するステップ。続いお、分光攟射茝床算出郚が、ステップで遞択された第掚定行列を甚いお、撮像デヌタ から被写䜓に入射する照明光の分光攟射茝床を算出するステップ。 Next, the estimation matrix storage unit 17 selects one of the first estimation matrices W (1) 1 , W (1) 2 ,..., W (1) M stored in advance according to the selection command SEL. One estimation matrix is output to the spectral radiance calculation unit 11 as a first estimation matrix W (1) (step S105). Subsequently, the spectral radiance calculation unit 11 uses the first estimation matrix W (1) selected in step S105, and the spectral radiance E ( 1) of the illumination light incident on the subject OBJ from the imaging data g (1) RGB. 1) is calculated (step S106).
 次に、䞉刺激倀倉換郚が、分光攟射茝床から衚色系の䞉刺激倀を算出するステップ。続いお、座暙倉換郚が、衚色系の䞉刺激倀を、衚色系においお定矩される座暙倀に倉換するステップ。さらに、ホワむトバランス算出郚が、座暙倀の比に基づいお、撮像装眮におけるホワむトバランスを算出するステップ。 Next, the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z in the XYZ color system to coordinate values R (1) , G (1) , and B (1) defined in the RGB color system. (Step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) (step S114).
 䞀方、入力郚が、照明環境䞋においお撮像装眮により被写䜓を撮像するこずで埗られる第撮像デヌタ を受入れるステップ。なお、入力郚は、必芁に応じお、第撮像デヌタを線圢化する。 On the other hand, the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120). The input unit 20 linearizes the second imaging data as necessary.
 次に、掚定行列算出郚が、被写䜓に含たれ埗る色の分光反射率から算出される自己盞関行列、照明スペクトル掚定郚によっお算出された照明光の分光攟射茝床、撮像装眮の分光感床に基づいお、第掚定行列を算出するステップ。続いお、分光反射率算出郚が、ステップで算出された第掚定行列を甚いお、第撮像デヌタから被写䜓の分光反射率を算出するステップ。さらに、画像デヌタ生成郚が、等色関数、被写䜓に入射する照明光の分光攟射茝床、ステップで算出さされた被写䜓の分光反射率を甚いお、被写䜓の色再珟を行なった画像デヌタ を生成するステップ。さらに、座暙倉換郚が、ステップにおいお生成された画像デヌタ を衚色系においお定矩される画像デヌタ に倉換しステップ、この倉換埌の画像デヌタ を出力する。 Next, the estimation matrix calculation unit 22 calculates the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E (1) of the illumination light calculated by the illumination spectrum estimation unit 100, Based on the spectral sensitivity S of the imaging device 400, a second estimation matrix W (2) is calculated (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W (2) calculated in step S122 to calculate the spectral reflectance f (2) (m ) of the subject OBJ from the second imaging data g (2). , N) is calculated (step S124). Further, the image data generating unit 24 uses the color matching function h, the spectral radiance E (1) of the illumination light incident on the subject OBJ, and the spectral reflectance f (2) (m, ) of the subject OBJ calculated in step S124. n) is used to generate image data g (OUT) XYZ (m, n) in which color reproduction of the subject OBJ is performed (step S126). Further, the coordinate conversion unit 25 converts the image data g (OUT) XYZ (m, n) generated in step S126 into image data g (OUT) RGB (m, n) defined in the RGB color system. (Step S128), the converted image data g (OUT) RGB (m, n) is output.
<Operational effects of the present embodiment>
According to the modification of the second embodiment of the present invention, the same operational effects as those of the first and second embodiments described above can be obtained, and in addition, the calculation of the plurality of first estimation matrices W(1)_1, W(1)_2, ..., W(1)_M is simplified, so that the arithmetic processing can be further sped up.
[Embodiment 3]
In the second embodiment described above, a configuration is illustrated in which a plurality of autocorrelation matrices are stored in advance, one for each type of light source data, and the spectral radiance (spectrum) of the illumination light is estimated using an arbitrarily selected autocorrelation matrix. In contrast, in the third embodiment described below, a configuration is illustrated in which the estimation results of the spectral radiance of the illumination light obtained using each of a plurality of autocorrelation matrices are evaluated, and the most appropriate estimation result is output.
<Overall configuration>
The image processing apparatus according to the third embodiment of the present invention is obtained by providing, in the image processing apparatus 1 according to the first embodiment shown in FIG. 1, an illumination spectrum estimation unit 100C in place of the illumination spectrum estimation unit 100. The color reproduction unit 200 is the same as that of the image processing apparatus 1 shown in FIG. 1, and therefore detailed description will not be repeated.
 図は、この発明の実斜の圢態に埓う画像凊理装眮の照明スペクトル掚定郚の機胜構成図である。なお、本実斜の圢態に埓う画像凊理装眮に含たれる色再珟郚に぀いおは図瀺しない。 FIG. 9 is a functional configuration diagram of illumination spectrum estimation unit 100C of the image processing device according to the third embodiment of the present invention. The color reproduction unit 200 included in the image processing apparatus according to the present embodiment is not shown.
 図を参照しお、照明スペクトル掚定郚は、入力郚ず、分光攟射茝床算出郚ず、遞択郚ず、評䟡郚ず、䞉刺激倀倉換郚ず、座暙倉換郚ず、ホワむトバランス算出郚ずをさらに含む。これらのうち、入力郚ず、䞉刺激倀倉換郚ず、座暙倉換郚ず、ホワむトバランス算出郚ずに぀いおは、実斜の圢態図においお説明したので、詳现な説明は繰返さない。 Referring to FIG. 9, the illumination spectrum estimation unit 100C includes an input unit 10, spectral radiance calculation units 11A, 11B, 11C, and 11D, a selection unit 18, an evaluation unit 19, and a tristimulus value conversion unit 14. The coordinate conversion unit 15 and the white balance calculation unit 16 are further included. Among these, since the input unit 10, the tristimulus value conversion unit 14, the coordinate conversion unit 15, and the white balance calculation unit 16 have been described in the first embodiment (FIG. 1), detailed description will be repeated. Absent.
 分光攟射茝床算出郚は、照明環境を提䟛するために甚いられ埗る耇数の光源候補の分光攟射茝床に基づいお算出される第掚定行列    を甚いお、撮像デヌタから被写䜓に入射する照明光の分光攟射茝床    をそれぞれ算出する。この第掚定行列    は、図に瀺す掚定行列栌玍郚が栌玍する第掚定行列    ず実質的に同様である。すなわち、第掚定行列    は、照明環境を提䟛するために甚いられ埗る耇数の光源候補の皮類毎に予め定められた自己盞関行列に基づいお、䞊述した手順ず同様の手順に埓っお算出される。なお、図には皮類の第掚定行列を䟋瀺するが、第掚定行列は耇数である限りにおいお、その数は制限されない。たた、図には、第掚定行列    を予め算出しおおく構成を䟋瀺するが、図に瀺す照明スペクトル掚定郚のように、挔算凊理毎にこれらを動的に算出しおもよい。 Spectral radiance calculation units 11A, 11B, 11C, and 11D calculate first estimation matrices W (1) 1 , W ( based on spectral radiances of a plurality of light source candidates that can be used to provide an illumination environment. 1) Spectral radiance E (1) 1 , E (1) 2 , E of illumination light incident on subject OBJ from imaging data g (1) using 2 , W (1) 3 , W (1) 4 (1) 3 and E (1) Calculate 4 respectively. The first estimation matrix W (1) 1 , W (1) 2 , W (1) 3 , W (1) 4 is the first estimation matrix W (1) stored in the estimation matrix storage unit 17 shown in FIG. 1 , W (1) 2 , W (1) 3 , and W (1) 4 are substantially the same. That is, the first estimation matrix W (1) 1 , W (1) 2 , W (1) 3 , W (1) 4 is previously stored for each type of a plurality of light source candidates that can be used to provide an illumination environment. Based on the determined autocorrelation matrices B 1 , B 2 , B 3 , B 4 , the calculation is performed according to the same procedure as described above. In addition, although four types of 1st estimation matrix W (1) is illustrated in FIG. 9, as long as there are multiple 1st estimation matrices W (1) , the number is not restrict | limited. FIG. 9 illustrates a configuration in which the first estimation matrix W (1) 1 , W (1) 2 , W (1) 3 , W (1) 4 is calculated in advance. Like the illumination spectrum estimation unit 100A, these may be dynamically calculated for each calculation process.
In the present embodiment, as one example, the first estimation matrix W(1)_1 is calculated from an autocorrelation matrix B_1 created based on statistical data of fluorescent lamps, the first estimation matrix W(1)_2 is calculated from an autocorrelation matrix B_2 created based on statistical data of incandescent lamps, and the first estimation matrix W(1)_3 is calculated from an autocorrelation matrix B_3 created based on statistical data of xenon lamps. Furthermore, the first estimation matrix W(1)_4 is calculated from an autocorrelation matrix B_4 created based on statistical data including all of the fluorescent lamps, incandescent lamps, and xenon lamps.
 分光攟射茝床算出郚は、それぞれ算出した照明光の分光攟射茝床    を、遞択郚ぞそれぞれ出力する。 Spectral radiance calculation units 11A, 11B, 11C, and 11D select the calculated spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , and E (1) 4 of the illumination light, respectively. 18 respectively.
 遞択郚は、埌述する評䟡郚による評䟡結果に埓っお、入力された照明光の分光攟射茝床    のうち぀を、照明光の分光攟射茝床ずしお出力する。 The selection unit 18 selects one of the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , and E (1) 4 of the input illumination light according to the evaluation result by the evaluation unit 19 described later. Are output as the spectral radiance E (1) of the illumination light.
 評䟡郚は、分光攟射茝床算出郚がそれぞれ算出した照明光の分光攟射茝床    のうち、最も適切に掚定がなされたものを評䟡する。より具䜓的には、評䟡郚は、予め芏定した基準パタヌンず比范するこずで、照明光の分光攟射茝床    を評䟡する。 The evaluation unit 19 includes spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , E (1) 4 of the illumination light calculated by the spectral radiance calculation units 11A, 11B, 11C, and 11D, respectively. Of these, the one that is most appropriately estimated is evaluated. More specifically, the evaluation unit 19 compares the spectral radiance E (1) 1 , E (1) 2 , E (1) 3 , E (1) of the illumination light by comparing with a reference pattern defined in advance. 4 is evaluated.
In the present embodiment, as one example, reference patterns E(1)_1AVE, E(1)_2AVE, and E(1)_3AVE are used, which are respectively calculated from the spectral radiances (statistical values or measured values) of the light sources used when calculating the first estimation matrices W(1)_1, W(1)_2, and W(1)_3 (or the corresponding autocorrelation matrices B_1, B_2, and B_3). More specifically, for example, the reference pattern E(1)_1AVE corresponding to the first estimation matrix W(1)_1 is calculated by averaging the elements of the group matrix E_st of light sources (see FIG. 3) from which the autocorrelation matrix B_1 used to calculate the first estimation matrix W(1)_1 was generated. That is, as shown in FIG. 3, when the group matrix E_st of the light source candidates consists of component values e_i(λ_j) (1 ≀ i ≀ N, 1 ≀ j ≀ k), and the component value of the corresponding reference pattern E(1)_1AVE at each sampling wavelength λ_j (1 ≀ j ≀ k) is denoted e_AVE(λ_j), the following relationship holds.
Component value: e_AVE(λ_j) = {e_1(λ_j) + e_2(λ_j) + ... + e_N(λ_j)} / N
In the present embodiment, according to this calculation procedure, spectral radiances (spectra) representative of each of the fluorescent lamp, the incandescent lamp, and the xenon lamp are calculated in advance as the reference patterns. Note that a reference pattern corresponding to the first estimation matrix W(1)_4 does not necessarily have to be calculated. This is because the autocorrelation matrix B_4 corresponding to the first estimation matrix W(1)_4 is created based on statistical data including all of the fluorescent lamps, incandescent lamps, and xenon lamps; even if a reference pattern were created from this autocorrelation matrix B_4, the characteristics of each light source would be blurred and its effectiveness as a reference pattern would be diluted.
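As a minimal sketch of this averaging (the helper name is hypothetical, the formula is the one given above):

    import numpy as np

    def reference_pattern(E_st):
        # E_st: (N, k) group matrix of one light-source type; row i holds e_i(λ_1), ..., e_i(λ_k)
        # e_AVE(λ_j) = {e_1(λ_j) + ... + e_N(λ_j)} / N for every sampling wavelength λ_j
        return E_st.mean(axis=0)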
 次に、図および図を参照しお、評䟡郚による照明光の分光攟射茝床    の評䟡方法に぀いお説明する。 Next, with reference to FIG. 10 and FIG. 11, the evaluation unit 19 evaluates the spectral radiance E (1) 1 , E (1) 2 , E (1) 3 , E (1) 4 of the illumination light. explain.
 図は、評䟡郚による照明光の分光攟射茝床   ず基準パタヌン   ずの比范凊理を説明するための図である。図は、図における類䌌床の算出凊理を説明するための図である。 FIG. 10 shows the spectral radiance E (1) 1 , E (1) 2 , E (1) 3 and the reference pattern E (1) 1AVE , E (1) 2AVE , E (1) of the illumination light by the evaluation unit 19. It is a figure for demonstrating the comparison process with 3AVE . FIG. 11 is a diagram for explaining the similarity calculation process in FIG. 10.
 図を参照しお、評䟡郚は、分光攟射茝床算出郚がそれぞれ算出した照明光の分光攟射茝床   ず基準パタヌン   ずをそれぞれ比范し、比范結果䞀䟋ずしお、類䌌床を算出する。なお、照明光の分光攟射茝床   および基準パタヌン   は、いずれもの範囲に芏栌化されおいるものずする。 Referring to FIG. 10, the evaluation unit 19 includes spectral radiances E (1) 1 , E (1) 2 , E (1) 3 of illumination light calculated by the spectral radiance calculation units 11A, 11B, and 11C, respectively. The reference patterns E (1) 1AVE , E (1) 2AVE , E (1) 3AVE are respectively compared, and the comparison result (similarity as an example) is calculated. The spectral radiance E (1) 1 , E (1) 2 , E (1) 3 and the reference pattern E (1) 1AVE , E (1) 2AVE , E (1) 3AVE are all 0. It is assumed that it is standardized in the range of ~ 1.
 図に瀺すように、蛍光灯の統蚈デヌタに基づいお䜜成された基準パタヌン では、特定の波長にピヌクが存圚しおいる。䞀方、図に瀺すように、癜熱灯の統蚈デヌタに基づいお䜜成された基準パタヌン では、ピヌクは存圚せず、波長が長くなるほどその茝床が高くなっおいるこずがわかる。たた、図に瀺すように、キセノン灯の統蚈デヌタに基づいお䜜成された基準パタヌン では、若干のピヌクが存圚するずずもに、可芖光領域のほが党䜓に亘っお高い茝床を有しおいるこずがわかる。 As shown in FIG. 10A, in the reference pattern E (1) 1AVE created based on the statistical data of fluorescent lamps, there is a peak at a specific wavelength. On the other hand, as shown in FIG. 10B, in the reference pattern E (1) 2AVE created based on the statistical data of the incandescent lamp, there is no peak, and the luminance increases as the wavelength increases. I understand. In addition, as shown in FIG. 10C , in the reference pattern E (1) 3AVE created based on the statistical data of the xenon lamp, there is a slight peak and it is high over almost the entire visible light region. It turns out that it has a brightness | luminance.
 評䟡郚は、各分光攟射茝床が察応する基準パタヌンにどの皋床類䌌しおいるかに぀いお評䟡する。代衚的に、評䟡郚は、波長領域䞊の波圢同士の偏差に基づいお、類䌌床を算出する。 The evaluation unit 19 evaluates how similar each spectral radiance is to the corresponding reference pattern. Typically, the evaluation unit 19 calculates the similarity based on the deviation between the waveforms on the wavelength region.
 図は、分光攟射茝床算出郚が算出した照明光の分光攟射茝床 ず基準パタヌン ずの比范凊理を説明するための図を瀺す。図は、照明光の分光攟射茝床 ず基準パタヌン ずを同䞀の波長領域䞊にプロットした状態を瀺し、図は、各波長における偏差を算出する凊理を瀺す。 FIG. 11 is a diagram for explaining a comparison process between the spectral radiance E (1) 1 of the illumination light calculated by the spectral radiance calculation unit 11A and the reference pattern E (1) 1AVE . FIG. 11A shows a state where the spectral radiance E (1) 1 of the illumination light and the reference pattern E (1) 1AVE are plotted on the same wavelength region, and FIG. The process which calculates a deviation is shown.
 評䟡郚は、各サンプリング波長λ≊≊においお、照明光の分光攟射茝床 ず基準パタヌン ずの間の偏差芏栌化倀≊≊を順次算出する。続いお、評䟡郚は、サンプリング波長λの偏差のすべおに぀いおの総和平均を算出するこずで、評䟡結果類䌌床を算出する。すなわち、類䌌床は、サンプリング波長λの偏差を甚いお、以䞋のような挔算匏で算出できる。 The evaluation unit 19 calculates the deviation (standardized value) err j between the spectral radiance E (1) 1 of the illumination light and the reference pattern E (1) 1AVE at each sampling wavelength λ j (1 ≩ j ≩ k). (1 ≩ j ≩ k) is calculated sequentially. Subsequently, the evaluation unit 19 calculates an evaluation result (similarity) by calculating a total average of all the deviations err j of the sampling wavelength λ j . That is, the similarity SM can be calculated by the following arithmetic expression using the deviation err j of the sampling wavelength λ j .
Similarity SM = {1 − (err_1 + err_2 + ... + err_k) / k} × 100 [%]
FIG. 10 also shows the similarities actually measured when the image processing method according to the present embodiment was carried out under a fluorescent lamp illumination environment. As a result of this similarity calculation process, the similarity of the spectral radiance E(1)_1 of the illumination light estimated based on the first estimation matrix W(1)_1 is the highest, and this spectral radiance E(1)_1 of the illumination light is output as the spectral radiance E(1). This evaluation result agrees with the fact that the measurement was actually performed under a fluorescent lamp illumination environment.
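A minimal sketch of this similarity, assuming err_j is the absolute difference of the two 0-to-1 normalized waveforms (the deviation is only described as a normalized value, so that choice is an assumption):

    import numpy as np

    def similarity(E_candidate, E_reference):
        # Both spectra are assumed normalized to the 0-1 range and sampled at the same k wavelengths.
        err = np.abs(E_candidate - E_reference)    # deviation err_j at each sampling wavelength λ_j
        return (1.0 - err.mean()) * 100.0          # SM = {1 - (err_1 + ... + err_k) / k} x 100 [%]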
 たた、被写䜓の照明環境が、蛍光灯、癜熱灯、キセノン灯を組み合わせるこずで提䟛されおいる堎合や、これら以倖の光源によっお提䟛されおいる堎合には、照明光の分光攟射茝床   のいずれに぀いおも、その類䌌床が十分高くない堎合も想定される。このような堎合には、蛍光灯、癜熱灯、キセノン灯の特城をすべお反映した第掚定行列 に基づいお掚定された照明光の分光攟射茝床 を照明光の分光攟射茝床ずしお出力する堎合が適切な堎合もある。そこで、評䟡郚は、照明光の分光攟射茝床   に぀いおの評䟡結果類䌌床がいずれも蚱容倀を䞋回っおいる堎合などには、分光攟射茝床 を照明光の分光攟射茝床ずしお出力する。 Further, when the illumination environment of the subject OBJ is provided by combining a fluorescent lamp, an incandescent lamp, and a xenon lamp, or provided by a light source other than these, the spectral radiance E (1) of the illumination light. ) 1 , E (1) 2 , E (1) For any of 3 , it is assumed that the degree of similarity is not sufficiently high. In such a case, the spectral radiance E (1) 4 of the illumination light estimated based on the first estimation matrix W (1) 4 reflecting all the characteristics of the fluorescent lamp, the incandescent lamp, and the xenon lamp is used as the illumination light. It may be appropriate to output as the spectral radiance E (1) . Therefore, the evaluation unit 19 is used when the evaluation results (similarities) for the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 of the illumination light are all below the allowable value. outputs the spectral radiance E (1) 4 as the spectral radiance E of the illumination light (1).
In the above description, the spectral radiance calculation units 11A, 11B, 11C, and 11D calculate the spectral radiances E(1)_1, E(1)_2, E(1)_3, and E(1)_4 of the illumination light simultaneously in parallel and the similarities are calculated afterwards; however, the calculation of the spectral radiance of the illumination light and the calculation of the similarity may instead be executed sequentially for each of the first estimation matrices. In this case, as soon as the similarity of the spectral radiance estimated with one of the first estimation matrices is found to exceed a predetermined threshold (for example, 95%), that spectral radiance can be output without performing the subsequent processing, so that the processing can be simplified.
 たた、第掚定行列 に基づいお照明光の分光攟射茝床 を掚定する必然性はない。すなわち、照明光の分光攟射茝床   に぀いおの類䌌床がいずれも蚱容倀を䞋回っおいる堎合には、掚定䞍可胜を出力しおもよい。 Further, there is no necessity to estimate the spectral radiance E (1) 4 of the illumination light based on the first estimation matrix W (1) 4 . In other words, if the similarities of the spectral radiances E (1) 1 , E (1) 2 , and E (1) 3 of the illumination light are all below the allowable value, the estimation impossible is output. Good.
Also, instead of calculating the similarity based on the deviation as described above, the similarity may be calculated using a correlation coefficient or the like.
The other processes are basically the same as those in the first embodiment described above, and therefore detailed description will not be repeated.
<Processing procedure>
The processing procedure in the image processing apparatus according to the present embodiment is summarized as follows.
 図は、この発明の実斜の圢態に埓う画像凊理装眮における党䜓凊理手順を瀺すフロヌチャヌトである。なお、図に瀺すフロヌチャヌト䞭の各ステップのうち、図に瀺すフロヌチャヌト䞭のステップず同䞀内容のステップに぀いおは、同じ笊号を付しおいる。たた、図は、図に瀺すステップに瀺す評䟡サブルヌチンの凊理手順を瀺すフロヌチャヌトである。 FIG. 12 is a flowchart showing an overall processing procedure in the image processing apparatus according to the third embodiment of the present invention. Of the steps in the flowchart shown in FIG. 12, steps having the same contents as those in the flowchart shown in FIG. 4 are denoted by the same reference numerals. FIG. 13 is a flowchart showing the procedure of the evaluation subroutine shown in step S108 shown in FIG.
 図および図を参照しお、たず、入力郚が、被写䜓に入射する光の少なくずも䞀郚を拡散郚材を介しお撮像した第撮像デヌタ を受入れるステップ。続いお、入力郚は、受入れた第撮像デヌタ を代衚する撮像デヌタ を生成するステップ。なお、入力郚は、必芁に応じお、第撮像デヌタを線圢化する。 With reference to FIGS. 9 and 12, first, the input unit 10 captures at least part of the light incident on the object OBJ through the diffusion member 402. First imaging data g (1) RGB (m, n) Is accepted (step S100). Subsequently, the input unit 10 generates imaging data g (1) RGB representing the received first imaging data g (1) RGB (m, n) (step S102). Note that the input unit 10 linearizes the first imaging data as necessary.
 次に、分光攟射茝床算出郚が、それぞれ第掚定行列    を甚いお、撮像デヌタ から被写䜓に入射する照明光の分光攟射茝床    をそれぞれ算出するステップ。続いお、評䟡郚が、評䟡サブルヌチンを実行し、ステップで算出された照明光の分光攟射茝床    のうち最も掚定粟床の高いものを評䟡するステップ。さらに、遞択郚が、ステップにおける評䟡結果に埓っお、照明光の分光攟射茝床    のうち぀を、照明光の分光攟射茝床ずしお出力するステップ。 Next, the spectral radiance calculation units 11A, 11B, 11C, and 11D capture images using the first estimation matrices W (1) 1 , W (1) 2 , W (1) 3 , and W (1) 4 , respectively. Data g (1) Spectral radiance E (1) 1 , E (1) 2 , E (1) 3 , E (1) 4 of the illumination light incident on the subject OBJ from RGB is calculated (step S107). Subsequently, the evaluation unit 19 executes an evaluation subroutine, and the spectral radiances E (1) 1 , E (1) 2 , E (1) 3 , and E (1) 4 of the illumination light calculated in step S107. Among them, the one with the highest estimation accuracy is evaluated (step S108). Further, the selection unit 18, according to the evaluation result in step S108, one of the spectral radiance E of the illumination light (1) 1, E (1 ) 2, E (1) 3, E (1) 4, lighting Output as spectral radiance E (1) of light (step S109).
 次に、䞉刺激倀倉換郚が、分光攟射茝床から衚色系の䞉刺激倀を算出するステップ。続いお、座暙倉換郚が、衚色系の䞉刺激倀を、衚色系においお定矩される座暙倀に倉換するステップ。さらに、ホワむトバランス算出郚が、座暙倀の比に基づいお、撮像装眮におけるホワむトバランスを算出するステップ。 Next, the tristimulus value conversion unit 14 calculates tristimulus values X, Y, and Z of the XYZ color system from the spectral radiance E (1) (step S110). Subsequently, the coordinate conversion unit 15 converts the tristimulus values X, Y, and Z in the XYZ color system to coordinate values R (1) , G (1) , and B (1) defined in the RGB color system. (Step S112). Further, the white balance calculation unit 16 calculates the white balance in the imaging apparatus 400 based on the ratio of the coordinate values R (1) , G (1) , B (1) (step S114).
 䞀方、入力郚が、照明環境䞋においお撮像装眮により被写䜓を撮像するこずで埗られる第撮像デヌタ を受入れるステップ。なお、入力郚は、必芁に応じお、第撮像デヌタを線圢化する。 On the other hand, the input unit 20 receives the second imaging data g (2) RGB (m, n) obtained by imaging the subject OBJ by the imaging device 400 in an illumination environment (step S120). The input unit 20 linearizes the second imaging data as necessary.
 次に、掚定行列算出郚が、被写䜓に含たれ埗る色の分光反射率から算出される自己盞関行列、照明スペクトル掚定郚によっお算出された照明光の分光攟射茝床、撮像装眮の分光感床に基づいお、第掚定行列を算出するステップ。続いお、分光反射率算出郚が、ステップで算出された第掚定行列を甚いお、第撮像デヌタから被写䜓の分光反射率を算出するステップ。さらに、画像デヌタ生成郚が、等色関数、被写䜓に入射する照明光の分光攟射茝床、ステップで算出さされた被写䜓の分光反射率を甚いお、被写䜓の色再珟を行なった画像デヌタ を生成するステップ。さらに、座暙倉換郚が、ステップにおいお生成された画像デヌタ を衚色系においお定矩される画像デヌタ に倉換しステップ、この倉換埌の画像デヌタ を出力する。 Next, the estimation matrix calculation unit 22 calculates the autocorrelation matrix A calculated from the spectral reflectances of colors that can be included in the subject OBJ, the spectral radiance E (1) of the illumination light calculated by the illumination spectrum estimation unit 100, Based on the spectral sensitivity S of the imaging device 400, a second estimation matrix W (2) is calculated (step S122). Subsequently, the spectral reflectance calculation unit 21 uses the second estimation matrix W (2) calculated in step S122 to calculate the spectral reflectance f (2) (m ) of the subject OBJ from the second imaging data g (2). , N) is calculated (step S124). Further, the image data generating unit 24 uses the color matching function h, the spectral radiance E (1) of the illumination light incident on the subject OBJ, and the spectral reflectance f (2) (m, ) of the subject OBJ calculated in step S124. n) is used to generate image data g (OUT) XYZ (m, n) in which color reproduction of the subject OBJ is performed (step S126). Further, the coordinate conversion unit 25 converts the image data g (OUT) XYZ (m, n) generated in step S126 into image data g (OUT) RGB (m, n) defined in the RGB color system. (Step S128), the converted image data g (OUT) RGB (m, n) is output.
 図を参照しお、評䟡郚は、照明光の分光攟射茝床 ず予め定められた基準パタヌン ずを比范するこずで、䞡者の間の類䌌床を算出するステップ。同様に、評䟡郚は、照明光の分光攟射茝床 ず予め定められた基準パタヌン ずを比范するこずで、䞡者の間の類䌌床を算出するステップ。同様に、評䟡郚は、照明光の分光攟射茝床 ず予め定められた基準パタヌン ずを比范するこずで、䞡者の間の類䌌床を算出するステップ。 Referring to FIG. 13, the evaluation unit 19 compares the spectral radiance E (1) 1 of the illumination light with a predetermined reference pattern E (1) 1AVE , thereby determining the similarity SM 1 between the two. Is calculated (step S200). Similarly, the evaluation unit 19 compares the spectral radiance E (1) 2 of the illumination light with a predetermined reference pattern E (1) 2AVE to calculate the similarity SM 2 between them ( Step S202). Similarly, the evaluation unit 19 compares the spectral radiance E (1) 3 of the illumination light with a predetermined reference pattern E (1) 3AVE to calculate a similarity SM 3 between them ( Step S204).
 続いお、評䟡郚は、ステップにおいお算出された類䌌床のうち、その倀が最も高いものを抜出するステップ。さらに、評䟡郚は、ステップで抜出した類䌌床が所定の蚱容倀以䞊であるか吊かを刀断するステップ。 Subsequently, the evaluation unit 19 extracts the one having the highest value among the similarities SM 1 , SM 2 , SM 3 calculated in steps S200, S202, S204 (step S206). Furthermore, the evaluation unit 19 determines whether or not the similarity extracted in step S206 is greater than or equal to a predetermined allowable value (step S208).
If the similarity is equal to or greater than the predetermined allowable value (YES in step S208), the evaluation unit 19 evaluates the spectral radiance corresponding to the similarity extracted in step S206 as having the highest estimation accuracy (step S210).
On the other hand, if the similarity is not equal to or greater than the predetermined allowable value (NO in step S208), the evaluation unit 19 evaluates the spectral radiance other than the spectral radiances E(1)_1, E(1)_2, and E(1)_3 of the illumination light, namely the spectral radiance E(1)_4, as having the highest estimation accuracy (step S212).
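A hedged sketch of this evaluation subroutine, reusing the hypothetical similarity helper from the sketch above; the allowable value used here is a placeholder, and the function name is introduced for illustration only.

    import numpy as np

    def evaluation_subroutine(candidates, references, allowable=90.0):
        # candidates: [E(1)_1, E(1)_2, E(1)_3, E(1)_4]; references: [E(1)_1AVE, E(1)_2AVE, E(1)_3AVE]
        # allowable: the predetermined allowable value in percent (90.0 is an assumption, not from the patent)
        sims = [similarity(c, r) for c, r in zip(candidates[:3], references)]   # steps S200-S204
        best = int(np.argmax(sims))                                             # step S206
        if sims[best] >= allowable:                                             # step S208
            return candidates[best]     # step S210: highest-accuracy estimate among E(1)_1 to E(1)_3
        return candidates[3]            # step S212: fall back to E(1)_4 (all-source statistics)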
 その埌、凊理は図のステップに進む。
 本実斜の圢態による䜜甚効果
 この発明の実斜の圢態によれば、䞊述の実斜の圢態ず同様の䜜甚効果を埗るこずができるずずもに、䜕らの予備知識を持たないナヌザであっおも、照明光の分光攟射茝床を高い掚定粟床で取埗するこずができる。そのため、被写䜓が様々な条件䞋で行なわれたずしおも、照明光の分光攟射茝床の掚定粟床を維持できる。
Thereafter, the processing proceeds to step S109 in FIG.
<Operational effects of the present embodiment>
According to the third embodiment of the present invention, the same operational effects as those of the above-described first embodiment can be obtained, and the spectral radiance of illumination light is high even for a user who does not have any prior knowledge. It can be obtained with estimation accuracy. Therefore, even if the subject OBJ is performed under various conditions, the estimation accuracy of the spectral radiance of the illumination light can be maintained.
 実斜の圢態の倉圢䟋
 䞊述の実斜の圢態においおは、䞻ずしおハヌドりェアで構成された画像凊理装眮を甚いる構成に぀いお䟋瀺したが、その党郚たたは䞀郚を゜フトりェアで実珟しおもよい。すなわち、コンピュヌタを甚いお、画像凊理装眮における凊理を実珟しおもよい。
[Modifications of Embodiments 1 to 3]
In the above-described first to third embodiments, the configuration using the image processing apparatus mainly configured by hardware is exemplified, but all or a part thereof may be realized by software. That is, the processing in the image processing apparatus may be realized using a computer.
 図は、この発明の実斜の圢態の倉圢䟋に埓う画像凊理装眮を実珟するコンピュヌタの抂略構成図である。 FIG. 14 is a schematic configuration diagram of a computer that realizes an image processing apparatus 1 # according to a modification of the embodiment of the present invention.
 図を参照しお、コンピュヌタは、 駆動装眮および   )駆動装眮を搭茉したコンピュヌタ本䜓ず、モニタず、キヌボヌドず、マりスずを含む。 Referring to FIG. 14, the computer includes a computer main body 150 equipped with an FD (Flexible Disk) driving device 166 and a CD-ROM (Compact Disk-Read Only Memory) driving device 168, a monitor 152, a keyboard 154, a mouse. 156.
 コンピュヌタ本䜓は、盞互にバスで接続された、挔算装眮である  ず、メモリず、蚘憶装眮である固定ディスクず、通信むンタヌフェヌスずをさらに含む。 The computer main body 150 further includes a CPU (Central Processing Unit) 160 that is an arithmetic device, a memory 162, a fixed disk 164 that is a storage device, and a communication interface 170 that are connected to each other via a bus.
 駆動装眮にはが装着され、駆動装眮にはが装着される。本実斜の圢態の倉圢䟋に埓う画像凊理装眮は、がメモリなどのコンピュヌタハヌドりェアを甚いお、゜フトりェアを実行するこずで実珟できる。䞀般的に、このような゜フトりェアは、やなどの蚘録媒䜓に栌玍されお、たたはネットワヌクなどを介しお流通する。そしお、このような゜フトりェアは、駆動装眮や駆動装眮などにより蚘録媒䜓から読取られお、たたは通信むンタヌフェヌスにお受信されお、固定ディスクに栌玍される。さらに、固定ディスクからメモリに読出されお、により実行される。 FD 166a is mounted on the FD drive device 166, and CD-ROM 168a is mounted on the CD-ROM drive device 168. Image processing apparatus 1 # according to the modification of the present embodiment can be realized by CPU 160 executing software using computer hardware such as memory 162. In general, such software is stored in a recording medium such as the FD 166a or the CD-ROM 168a, or distributed via a network or the like. Such software is read from the recording medium by the FD driving device 166 or the CD-ROM driving device 168 or received by the communication interface 170 and stored in the fixed disk 164. Further, it is read from the fixed disk 164 to the memory 162 and executed by the CPU 160.
 モニタは、が出力する情報を衚瀺するための衚瀺郚であっお、䞀䟋ずしお  や  などから構成される。マりスは、クリックやスラむドなどの動䜜に応じたナヌザから指什を受付ける。キヌボヌドは、入力されるキヌに応じたナヌザから指什を受付ける。は、プログラムされた呜什を順次実行するこずで、各皮の挔算を実斜する挔算凊理郚である。メモリは、のプログラム実行に応じお、各皮の情報を蚘憶する。通信むンタヌフェヌスは、が出力した情報を、䟋えば電気信号に倉換しお他の装眮ぞ送出するずずもに、他の装眮から電気信号を受信しおが利甚できる情報に倉換する。固定ディスクは、が実行するプログラムや予め定められたデヌタなどを蚘憶する䞍揮発性の蚘憶装眮である。たた、コンピュヌタには、必芁に応じお、プリンタなどの他の出力装眮が接続されおもよい。 The monitor 152 is a display unit for displaying information output by the CPU 160, and includes, for example, an LCD (Liquid Crystal Display), a CRT (Cathode Ray Tube), and the like. The mouse 156 receives a command from a user corresponding to an operation such as click or slide. The keyboard 154 receives a command from the user corresponding to the input key. The CPU 160 is an arithmetic processing unit that executes various arithmetic operations by sequentially executing programmed instructions. The memory 162 stores various types of information according to the program execution of the CPU 160. The communication interface 170 converts the information output from the CPU 160 into, for example, an electrical signal and sends it to another device, and receives the electrical signal from the other device and converts it into information that can be used by the CPU 160. Fixed disk 164 is a non-volatile storage device that stores programs executed by CPU 160 and predetermined data. In addition, other output devices such as a printer may be connected to the computer as necessary.
Furthermore, the program according to the present embodiments may be one that calls necessary modules, among program modules provided as part of the operating system (OS) of the computer, in a predetermined arrangement and at predetermined timing to execute the processing. In that case, the program itself does not include those modules, and the processing is executed in cooperation with the OS. A program that does not include such modules can also be included in the program according to the present invention.
The program according to the present embodiments may also be provided incorporated into part of another program. In that case as well, the program itself does not include the modules included in that other program, and the processing is executed in cooperation with the other program.
The embodiments disclosed herein should be considered illustrative in all respects and not restrictive. The scope of the present invention is defined not by the description above but by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.

Claims (14)

1. An image processing apparatus (1) capable of performing image processing on imaging data captured by an imaging device (400), comprising:
    an input unit (10) that receives first imaging data obtained by imaging, with the imaging device, at least part of light incident on a subject (OBJ) under an illumination environment through a diffusing member (402); and
    a first calculation unit (11, 12, 13) that calculates a spectral radiance of illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on an autocorrelation matrix of spectral radiances of a light source candidate that may be used to provide the illumination environment, a spectral transmittance of the diffusing member, and a spectral sensitivity of the imaging device.
2. The image processing apparatus according to claim 1, wherein the spectral radiance of the light source candidate is a characteristic value acquired in advance for each type of light source (300).
3. The image processing apparatus according to claim 1, wherein
    the diffusing member is disposed on an optical axis (Ax1) of the imaging device, and
    an incident intensity of the diffusing member is expressed by a predetermined function value of an angle with respect to the optical axis.
4. The image processing apparatus according to claim 3, wherein the function value is a cosine function of the angle with respect to the optical axis.
5. The image processing apparatus according to claim 1, wherein
    the imaging device is configured to output, as the imaging data, coordinate values defined in an RGB color system, and
    the image processing apparatus further comprises:
    a second calculation unit (14, 15) that calculates, using the spectral radiance of the illumination light and color matching functions, coordinate values in the RGB color system corresponding to the spectral radiance of the illumination light; and
    a third calculation unit (16) that calculates a white balance for the imaging device based on a ratio of the coordinate values calculated by the second calculation unit.
6. The image processing apparatus according to claim 1, further comprising a fourth calculation unit (20, 21, 22) that calculates a spectral reflectance of the subject from second imaging data obtained by imaging the subject with the imaging device under the illumination environment, using a second estimation matrix calculated based on the spectral radiance of the illumination light, the spectral sensitivity of the imaging device, and an autocorrelation matrix of spectral reflectances of colors that may be contained in the subject.
7. The image processing apparatus according to claim 6, further comprising a generation unit (24, 25) that generates, based on the spectral reflectance of the subject calculated by the fourth calculation unit, image data that would be acquired if the subject were imaged under a predetermined illumination environment.
8. An image processing apparatus (1A) capable of performing image processing on imaging data captured by an imaging device (400), comprising:
    an input unit (10) that receives first imaging data obtained by imaging, with the imaging device, at least part of light incident on a subject (OBJ) under an illumination environment through a diffusing member (402);
    a selection unit (13A) that selects, in accordance with an external command, one of a plurality of calculation matrices predetermined for respective types of light source candidates that may be used to provide the illumination environment; and
    a first calculation unit (11, 12) that calculates a spectral radiance of illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the calculation matrix selected by the selection unit, a spectral transmittance of the diffusing member, and a spectral sensitivity of the imaging device,
    wherein each of the calculation matrices is an autocorrelation matrix of a matrix indicating a spectral radiance of the corresponding light source candidate.
9. An image processing apparatus (1B) capable of performing image processing on imaging data captured by an imaging device (400), comprising:
    an input unit (10) that receives first imaging data obtained by imaging, with the imaging device, at least part of light incident on a subject (OBJ) under an illumination environment through a diffusing member (402);
    a selection unit (17) that selects, in accordance with an external command, one of a plurality of first estimation matrices calculated in advance based on spectral radiances of a plurality of light source candidates that may be used to provide the illumination environment; and
    a first calculation unit (11) that calculates a spectral radiance of illumination light incident on the subject from the first imaging data, using the first estimation matrix selected by the selection unit,
    wherein each of the first estimation matrices is calculated based on an autocorrelation matrix of a matrix indicating a spectral radiance of the corresponding light source candidate, a spectral transmittance of the diffusing member, and a spectral sensitivity of the imaging device.
  10. An image processing device capable of performing image processing on imaging data captured by an imaging device (400), comprising:
    An input unit (10) that receives first imaging data obtained by using the imaging device to image, through a diffusing member (402), at least part of the light incident on a subject (OBJ) under an illumination environment;
    A first calculation unit (11A, 11B, 11C, 11D) that calculates candidates for the spectral radiance of the illumination light incident on the subject from the first imaging data, using a plurality of first estimation matrices each calculated based on the spectral radiance of one of a plurality of light source candidates that can be used to provide the illumination environment; and
    An evaluation unit (18, 19) that evaluates each calculated spectral radiance candidate by comparison with a predetermined reference pattern and outputs one of the candidates as the spectral radiance of the illumination light under the illumination environment,
    wherein each first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
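Claim 10 drops the external command: every precomputed matrix is applied to the same first imaging data, and an evaluation step decides which candidate spectrum to keep. The sketch below uses a normalised RMS error against each candidate's reference pattern as the comparison criterion; that criterion is an assumption, as the claim only requires comparison with a predetermined reference pattern.

    import numpy as np

    def evaluate_candidates(candidates, reference_patterns):
        """candidates / reference_patterns: dicts mapping light-source name -> spectrum (N,).

        Returns the best-matching name and its candidate spectrum.
        """
        def rms_error(name):
            c = candidates[name]
            r = reference_patterns[name]
            c = c / max(np.linalg.norm(c), 1e-12)   # compare spectral shapes, not absolute levels
            r = r / max(np.linalg.norm(r), 1e-12)
            return float(np.sqrt(np.mean((c - r) ** 2)))

        best = min(candidates, key=rms_error)
        return best, candidates[best]

    # Candidates would come from applying every precomputed matrix to the same data:
    # candidates = {name: estimate_illumination(W, pixel_values)
    #               for name, W in estimation_matrices.items()}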
  11. An image processing method comprising the steps of:
    Acquiring first imaging data by using an imaging device (400) to image, through a diffusing member (402), at least part of the light incident on a subject (OBJ) under an illumination environment (S100, S102); and
    Calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on an autocorrelation matrix of the spectral radiance of a light source candidate that can be used to provide the illumination environment, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device (S104, S106).
  12. An image processing method comprising the steps of:
    Acquiring first imaging data by using an imaging device (400) to image, through a diffusing member (402), at least part of the light incident on a subject (OBJ) under an illumination environment (S100, S102);
    Selecting one of a plurality of calculation matrices predetermined for each type of light source candidate that can be used to provide the illumination environment (S103); and
    Calculating the spectral radiance of the illumination light incident on the subject from the first imaging data, using a first estimation matrix calculated based on the selected calculation matrix, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device (S104, S106),
    wherein each of the calculation matrices is an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate.
  13. An image processing method comprising the steps of:
    Acquiring first imaging data by using an imaging device (400) to image, through a diffusing member (402), at least part of the light incident on a subject (OBJ) under an illumination environment (S100, S102);
    Selecting one of a plurality of first estimation matrices calculated in advance based on the spectral radiances of a plurality of light source candidates that can be used to provide the illumination environment (S105); and
    Calculating the spectral radiance of the illumination light incident on the subject from the first imaging data using the selected first estimation matrix (S106),
    wherein each first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
  14. An image processing method comprising the steps of:
    Acquiring first imaging data by using an imaging device (400) to image, through a diffusing member (402), at least part of the light incident on a subject (OBJ) under an illumination environment (S100, S102);
    Calculating candidates for the spectral radiance of the illumination light incident on the subject from the first imaging data, using a plurality of first estimation matrices each calculated based on the spectral radiance of one of a plurality of light source candidates that can be used to provide the illumination environment (S107); and
    Evaluating each calculated spectral radiance candidate by comparison with a predetermined reference pattern and outputting one of the candidates as the spectral radiance of the illumination light under the illumination environment (S108, S109),
    wherein each first estimation matrix is calculated based on an autocorrelation matrix of a matrix indicating the spectral radiance of the corresponding light source candidate, the spectral transmittance of the diffusing member, and the spectral sensitivity of the imaging device.
PCT/JP2009/050453 2008-01-31 2009-01-15 Image processing device and image processing method WO2009096232A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2008021394A JP2009182845A (en) 2008-01-31 2008-01-31 Apparatus and method for processing image
JP2008021395A JP5120936B2 (en) 2008-01-31 2008-01-31 Image processing apparatus and image processing method
JP2008-021394 2008-01-31
JP2008-021395 2008-01-31

Publications (1)

Publication Number Publication Date
WO2009096232A1 true WO2009096232A1 (en) 2009-08-06

Family

ID=40912585

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/050453 WO2009096232A1 (en) 2008-01-31 2009-01-15 Image processing device and image processing method

Country Status (1)

Country Link
WO (1) WO2009096232A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002344979A (en) * 2001-05-21 2002-11-29 Minolta Co Ltd Digital image pickup device, illuminance component acquisition method, program and recording medium
JP2005202673A (en) * 2004-01-15 2005-07-28 Kddi Corp Image recognition device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PRATT W.K. ET AL.: "Spectral estimation techniques for the spectral calibration of a color image scanner", APPLIED OPTICS, vol. 15, no. 1, January 1976 (1976-01-01), pages 73 - 75 *
TOMINAGA M.: "A Technique for Multi-band Imaging and Its Application to Vision", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN (CVIM 13), vol. 47, no. SIG5, 15 March 2006 (2006-03-15), pages 20 - 34 *
UCHIYAMA T. ET AL.: "Capture of natural illumination environments and spectral-based image synthesis", IEICE TECHNICAL REPORT, vol. 105, no. 535, 12 January 2006 (2006-01-12), pages 7 - 12 *

Similar Documents

Publication Publication Date Title
KR100278642B1 (en) Color image processing apparatus and method
US7436997B2 (en) Light source estimating device, light source estimating method, and imaging device and image processing method
US10168215B2 (en) Color measurement apparatus and color information processing apparatus
US10514335B2 (en) Systems and methods for optical spectrometer calibration
US9076068B2 (en) Method and apparatus for evaluating color in an image
JP4967440B2 (en) Imaging apparatus and light source estimation apparatus thereof
US7616314B2 (en) Methods and apparatuses for determining a color calibration for different spectral light inputs in an imaging apparatus measurement
US20090294640A1 (en) System for capturing graphical images using hyperspectral illumination
US7457000B2 (en) Image input system, conversion matrix calculating method, and computer software product
JP2020012668A (en) Evaluation device, measurement device, evaluation method and evaluation program
JP6969164B2 (en) Evaluation device, evaluation program and evaluation method
JP2021113744A (en) Imaging system
JP5841091B2 (en) Image color distribution inspection apparatus and image color distribution inspection method
JP6113319B2 (en) Image color distribution inspection apparatus and image color distribution inspection method
JP5120936B2 (en) Image processing apparatus and image processing method
JP2010139324A (en) Color irregularity measuring method and color irregularity measuring device
JP2009182845A (en) Apparatus and method for processing image
US11825211B2 (en) Method of color inspection by using monochrome imaging with multiple wavelengths of light
WO2009096232A1 (en) Image processing device and image processing method
JP2003337067A (en) Spectrophotometry system and color reproduction system
JP2009188807A (en) Imaging method and imaging system
JP4588076B2 (en) Imaging method and imaging system
JP3577977B2 (en) Illumination light spectral characteristic estimation device
JP2021103835A (en) Method of quantifying color of object, signal processing device, and imaging system
JP2001186540A (en) Colorimetry conversion coefficient calculation method, colorimetry image conversion method, and colorimetry conversion coefficient calculation device and colorimetry image converter, and computer-readable information recording medium for recording colorimetry conversion coefficient calculation program or colorimetry image pickup program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09706560

Country of ref document: EP

Kind code of ref document: A1

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09706560

Country of ref document: EP

Kind code of ref document: A1