
WO2022224917A1 - Three-dimensional image pickup device - Google Patents

Three-dimensional image pickup device

Info

Publication number
WO2022224917A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
light
imaging
light receiving
imaging device
Prior art date
Application number
PCT/JP2022/017973
Other languages
French (fr)
Japanese (ja)
Inventor
のりこ 安間
達夫 長崎
広朗 長崎
聡美 森久保
Original Assignee
のりこ 安間
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2021070725A external-priority patent/JP6918395B1/en
Priority claimed from JP2021211632A external-priority patent/JP7058901B1/en
Application filed by のりこ 安間
Publication of WO2022224917A1 publication Critical patent/WO2022224917A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B9/00Measuring instruments characterised by the use of optical techniques
    • G01B9/02Interferometers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • G01J3/30Measuring the intensity of spectral lines directly on the spectrum itself
    • G01J3/36Investigating two or more bands of a spectrum by separate detectors
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • G01J3/45Interferometric spectrometry
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/27Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands using photo-electric detection ; circuits for computing concentration

Definitions

  • The present invention relates to a three-dimensional imaging device.
  • More specifically, it relates to a three-dimensional imaging apparatus in which the amplitude and phase of reflected light are detected by optical interferometry and three-dimensional resolution is performed by electrical processing using the detection results, and which can, for each three-dimensional pixel, perform focusing, recover resolution degraded by disturbance of the light wavefront, and carry out spectral analysis.
  • Techniques for non-contact three-dimensional shape measurement include the focus movement method, the confocal method, the optical interference method, and the fringe projection method.
  • As a spectral image detection technique, a hyperspectral camera using a line-spectroscopy method is known.
  • Patent documents: JP-A-2006-153654 and JP-A-2011-110290.
  • The present invention has been made in view of such circumstances.
  • A first aspect of the three-dimensional imaging device of the present invention comprises: a light source that provides illumination light whose optical frequency, or whose amplitude-modulation frequency, is swept, to illuminate a subject; an optical interferometer that combines the reflected light from the subject with reference light to generate interference fringes; a two-dimensional detection mechanism that detects the interference fringes as interference fringe signals at two-dimensional detection positions, using two-dimensionally arranged light receiving elements, one-dimensional scanning of one-dimensionally arranged light receiving elements, or two-dimensional scanning of a single light receiving element; optical path difference calculating means for calculating, for each two-dimensional detection position and for every reflection point to be resolved, the optical path difference between the optical path length from the light source via each three-dimensionally distributed reflection point of the subject to the two-dimensional detection position and the optical path length of the reference light from the light source to that detection position; a detection unit that obtains a three-dimensional data string by detecting the frequency of the interference fringe signal, thereby resolving the light receiving direction at each two-dimensional detection position; and a two-dimensional filtering unit that resolves the planes intersecting the light receiving direction. The three-dimensionally distributed reflection points of the subject are thereby resolved three-dimensionally.
  • In another aspect, the detection unit obtains the three-dimensional data string by Fourier transforming the interference fringe signal, and the three-dimensional data string is a complex signal of amplitude and phase.
  • In another aspect, the two-dimensional filtering unit selects the data strings corresponding to the imaging aperture from the three-dimensional data string; from the selected data strings it extracts, using the optical path difference information, the data matching the optical path length from each two-dimensional detection position to the reflection point, multiplies them by filter coefficients calculated from the optical path differences, and adds the results, thereby performing the imaging that resolves the reflection point. By convolving the filter coefficients in the same way for all the reflection points to be resolved, the planes intersecting the light receiving direction are resolved.
  • In another aspect, the three-dimensional imaging device comprises: a storage unit for storing the three-dimensional data string; an address generation unit that uses the optical path differences to generate addresses for reading out from the storage unit the data matching the optical path length from each detection position of the two-dimensional detection mechanism to the reflection point to be resolved; and a filter coefficient generation unit that reads the data using the addresses and generates the filter coefficients for data interpolation in the light receiving direction, initial phase matching, and weighting of the imaging aperture. The two-dimensional filtering unit convolves these filter coefficients with the complex signal data.
  • In another aspect, the imaging aperture used for the two-dimensional filtering is divided into a plurality of blocks; for each block, the same processing as the two-dimensional filtering is performed to resolve the reflection points in the vicinity of the reflection point to be resolved, and the complex signal data of the neighboring reflection points obtained in each block are cross-correlated between blocks to detect deviations of the optical path length, which are used to correct disturbance of the optical wavefront.
  • In another aspect, distortion and fluctuation of the frequency sweep of the light source are detected, and correction means corrects, by a phase-matching filter, the dispersion of the frequency components of the interference fringes caused by the distortion and fluctuation.
  • In another aspect, identification means is provided that calculates spectral components and uses them to identify the subject from the reflectance spectrum of a subject whose cluster is unknown.
  • In another aspect, the identification means uses AI that executes deep learning.
  • In another aspect, a low-coherence light source and a spectroscope are used instead of the swept light source, and three-dimensional resolution is performed by the detection unit and the two-dimensional filtering unit.
  • Another aspect is a three-dimensional imaging apparatus according to any one of the first to eighth aspects, comprising a memory that stores, as RAW data, the interference fringe signal detected by the two-dimensional detection mechanism together with the information necessary for the three-dimensional resolution and the spectral analysis.
  • In another aspect, the added information includes the degree of coherence of the light source, the band characteristics (including distortion) of the frequency sweep, the coordinates of the detection positions of the two-dimensional detection mechanism and the directivity of the light receiving elements, the three-dimensional coordinates of the emission positions of the illumination light and the reference light with respect to the detection positions, and information on the subject.
  • An eleventh aspect of the three-dimensional imaging device of the present invention comprises: a splitting unit that splits light emitted from a light source to generate illumination light and reference light; a synthesizing unit that makes the reflected light from the subject interfere with the reference light to generate interference light; an imaging optical system that forms an image of the reflected light; a slit provided on the imaging plane of the imaging optical system; and a spectroscopic unit that disperses the interference light in a cross direction crossing the longitudinal direction of the slit opening.
  • A twelfth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the eleventh aspect, further comprising a scanning mechanism that moves at least one of the subject and the imaging device so as to scan the imaging range in the cross direction.
  • A thirteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the eleventh aspect, wherein the imaging optical system includes a cylindrical optical element whose focal position is arranged obliquely with respect to the optical axis.
  • A fourteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to any one of the eleventh to thirteenth aspects, comprising a light source that generates broadband light or broadband wavelength-swept light.
  • A fifteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to any one of the eleventh to fourteenth aspects, further comprising a signal processing unit that extracts a predetermined wavelength band component from the interference light, performs a Fourier transform, and generates an image of the predetermined wavelength band component.
  • A sixteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the fifteenth aspect, wherein the signal processing unit extracts the wavelength band components corresponding to the three primary colors from the interference light, performs Fourier transforms to generate three primary-color image signals, and generates RGB image signals based on the three primary-color image signals.
  • A seventeenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to any one of the eleventh to sixteenth aspects, wherein a plurality of spectral components are obtained in descending order of Fisher ratio from the reflectance spectra of subjects whose clusters are known, and the spectral components are used to discriminate from the reflectance spectrum of a subject whose cluster is unknown.
  • An eighteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the thirteenth aspect, comprising a plurality of the imaging devices each having a pinhole instead of the slit, wherein the images captured by adjacent imaging devices are divided into blocks, the amount of deviation is detected by taking the correlation between the blocks, and the images are stitched together.
  • A nineteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the eighteenth aspect, wherein the plurality of imaging devices are arranged in a line, and imaging devices adjacent to each other in the line are driven at different timings.
  • According to the present invention, it is possible to provide a three-dimensional imaging apparatus capable of simultaneously achieving three-dimensional resolution and spectral image detection of a subject with a simple structure.
  • FIG. 1 is a configuration diagram showing the configuration of a three-dimensional imaging device according to an embodiment.
  • FIG. 2 is a diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device.
  • FIG. 3 is another diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device.
  • FIG. 4 is a diagram showing the configuration from the reflection point to the two-dimensional filtering.
  • FIG. 5 is a diagram explaining the processing operation of the two-dimensional filtering.
  • FIGS. 6(a) to 6(g) are diagrams explaining the arrangement interval and directivity of the light receiving elements.
  • FIG. 7 is a diagram illustrating a configuration for recovering, by two-dimensional filtering, resolution degraded by disturbance of the optical wavefront.
  • FIG. 8 is a diagram showing a configuration for generating an RGB image from RAW data of interference fringes.
  • FIG. 9 is a diagram showing the wavelength bands of the various spectral images that are generated.
  • FIG. 10 is a diagram explaining non-linear segmentation, by AI, of the substance to be identified.
  • Further figures explain the case where the linearity of the frequency sweep of the laser light source is distorted, a configuration for detecting that distortion, the configuration of an application example of the embodiment, the focusing range required when diagnosing a coronary artery, a configuration applying the embodiment to an intravascular OCT apparatus, a method of detecting images of coronary arteries using the intravascular OCT apparatus, and a configuration applying the embodiment to X-ray and gamma-ray imaging.
  • FIG. 21(a) is another diagram showing the configuration from the reflection point to the two-dimensional filtering; FIG. 21(b) is a diagram showing three-dimensional resolution processing in which a plane perpendicular to the optical axis is resolved by an imaging lens and the light receiving direction is resolved by Fourier transform processing.
  • FIG. 22 is a configuration diagram showing the configuration of an imaging device according to Example 1 of the present invention.
  • FIG. 23 is an explanatory diagram for explaining imaging processing by the imaging device of FIG. 22.
  • Further figures illustrate the detection of RGB and spectral images, the range of the Fourier transform, the FS (Foley-Sammon) transformation, the configuration of an imaging device according to Example 2 of the present invention together with an observation image, and the configuration of an imaging device according to Example 3 of the present invention.
  • A three-dimensional imaging apparatus according to the present embodiment detects the amplitude and phase of reflected light by optical interferometry and performs three-dimensional resolution by electrical processing using them. The apparatus then performs, for each three-dimensional pixel, focusing, recovery of resolution degraded by disturbance of the optical wavefront, and spectral analysis.
  • The three-dimensional imaging device two-dimensionally detects the interference fringes of the reflected light generated by an optical interferometer.
  • The light receiving direction is resolved at each two-dimensionally detected position by the Fourier transform processing described later.
  • The amplitude and phase of the reflected light are hereafter referred to as a complex signal.
  • Resolution of the plane intersecting the light receiving direction is performed by the two-dimensional filtering described later.
  • The three-dimensional imaging apparatus three-dimensionally resolves the subject through these two processes.
  • The two-dimensional filtering described above performs focusing (dynamic focusing) for each pixel and restores resolution degraded by disturbance of the optical wavefront, as described later. In addition, the spectrum of the reflected light is analyzed using the frequency sweep of the illumination light used for the resolution processing in the light receiving direction, and the composition of the subject is identified for each pixel.
  • The present embodiment is not limited to the visible light band. It can also be applied to wavelength bands of electromagnetic waves for which no imaging optical system exists, or for which an imaging optical system would be expensive, such as infrared light, terahertz waves, millimeter waves, X-rays, γ-rays, and the like.
  • FIG. 1 is a configuration diagram showing the configuration of a three-dimensional imaging device according to an embodiment.
  • The light source 1 emits light whose frequency is swept within the imaging time. The swept light, which is the illumination light emitted from the light source 1, is split by the beam splitter 2 of the optical interferometer 13.
  • One beam of the swept light, reflected by the splitting surface, illuminates the subject 3.
  • The other beam, transmitted through the splitting surface, is reflected by the mirror 4.
  • The reference light reflected by the mirror 4 is combined with the reflected light 7 from the subject 3 by the beam splitter 2 to generate interference fringes.
  • The generated interference fringes are received by a two-dimensional array of light receiving elements 8 (hereinafter referred to as the "imaging element").
  • A method of detecting the interference fringe signal with the imaging element 8 will be described later.
  • The interference fringe signal received by the imaging element 8 is stored in the memory 5 as RAW data. The interference fringe signals necessary for resolution are then read out from the memory 5, and the light receiving direction is resolved by the Fourier transform processing (detection unit) 11.
  • In this embodiment and in the other embodiments and examples described later, the detection unit performs data analysis by Fourier transform with respect to the light receiving direction at each two-dimensional detection position.
  • However, the data analysis method is not limited to the Fourier transform; various time-frequency analysis methods, such as the short-time Fourier transform and the wavelet transform, can also be used.
  • Next, the plane perpendicular to the optical axis 9 is resolved by the two-dimensional filtering 12 described later, and the reflection point 6 is detected three-dimensionally.
  • Fourier transform processing 11 and two-dimensional filter processing 12 will be described later.
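As a point of reference, the two-stage chain just described (memory, then a Fourier transform per detection position, then two-dimensional filtering) can be sketched in a few lines of NumPy. This is an illustrative sketch only; the array names and shapes are assumptions, not values from the patent.

```python
import numpy as np

# Stand-in for the RAW interference-fringe data read from memory 5:
# one sweep of nt samples for each of ny x nx light-receiving elements.
rng = np.random.default_rng(0)
fringes = rng.standard_normal((64, 64, 1024))      # (ny, nx, nt)

# Stage 1 (Fourier transform processing 11): transform along the sweep
# axis. Each frequency bin corresponds to one optical path difference,
# i.e. one sample in the light-receiving direction, giving the complex
# (amplitude and phase) three-dimensional data string.
complex_volume = np.fft.rfft(fringes, axis=-1)     # (ny, nx, nd)

# Stage 2 (two-dimensional filtering 12) then resolves the plane crossing
# the light-receiving direction by phase-matched summation over the
# imaging aperture; see the delay-and-sum sketch later in this section.
```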
  • Interference fringes are formed with a frequency proportional to the optical path difference between the reflected light from the reflection point 6 and the reference light.
  • This optical path difference is the difference between the optical path length of the illumination light that is emitted from the light source 1, reflected at the reflection point 6 via the beam splitter 2, and received by each light receiving element of the imaging element 8, and the optical path length of the reference light from the light source 1 through the beam splitter 2, the reflecting mirror 4, and the beam splitter 2 again to the same light receiving element of the imaging element 8.
  • The reflection point 6 can be three-dimensionally resolved by the Fourier transform processing 11 and the two-dimensional filtering 12 described later.
  • The light source 1 has the spatial coherence (point-light-source property) required for resolution in the plane perpendicular to the optical axis 9, and its frequency sweep has the linearity, frequency band, and temporal coherence required for resolution in the light receiving direction.
  • For such a light source, a frequency-swept laser using a deflection element such as a MEMS (Micro Electro-Mechanical System) or KTN (potassium tantalate niobate) device together with a spectroscope can be used.
  • Alternatively, the light source 1 may be an incoherent light source (partially coherent light source) that has the spatial coherence (point-light-source property) required for resolution in the plane perpendicular to the optical axis 9 and whose emitted light is amplitude-modulated by the frequency sweep.
  • The former light source, with its long coherence length, is used to three-dimensionally resolve a small subject at a relatively short distance with high resolution.
  • the latter light source is used when three-dimensionally resolving a large object at a long distance.
  • In FIG. 1, an imaging element 8 is shown as the mechanism for two-dimensionally detecting the interference fringes.
  • However, the present invention is not limited to this; the interference fringes may instead be detected two-dimensionally by a combination of a one-dimensional array of light receiving elements and one-dimensional scanning, or of a single light receiving element and two-dimensional scanning.
  • an optical system may be arranged in each optical path of the illumination light, the reflected light, and the reference light.
  • the optical interferometer 13 may be placed anywhere on the light receiving path, and the beam splitter 2 may be separately placed for combining the reference wave and for separating the illumination light.
  • the optical interferometer 13 in FIG. 1 shows a basic configuration for explaining the principle.
  • the optical interferometer is not limited to this, and there are various methods, and the method can be selected according to the application.
  • an optical interferometer such as the Mirau method may be used to reduce the size of the structure.
  • an optical circulator using a Faraday rotator may be used in order to increase the light utilization efficiency.
  • The intermediate optical system and the beam splitter 2 and mirror 4 constituting the optical interferometer 13 must not impair the coherence of the illumination light, the reflected light, and the reference light.
  • Their shapes must also be such that the optical path lengths of the reflected light and the reference light to each light receiving element can be calculated.
  • For the mirror 4, a mirror whose surface error is sufficiently small (1/16 of the wavelength or less) is used; in addition to a point reflector or a flat plate, a mirror having a focal point, such as a concave, convex, or ellipsoidal surface, can be used so that the optical path length can be easily calculated.
  • the Fourier transform processing 11 of FIG. 1 for resolving the direction of light reception will be described below.
  • When the reference light and the reflected light, whose frequencies differ slightly, are superimposed, interference fringes are generated with the difference frequency and the difference phase. This is called optical heterodyne detection.
  • Optical heterodyne detection can convert a very high-frequency optical carrier into a low-frequency interference fringe carrier. Then, the interference fringes holding the information on the amplitude and phase of the light can be converted into electrical signals by the light receiving element. Optical heterodyne detection can also be applied to amplitude and phase detection of amplitude-modulated incoherent light.
  • FIG. 2 is a diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device. The Fourier transform processing that resolves the light receiving direction is based on this principle of optical heterodyne detection. As shown in FIG. 2, a slight time difference (optical path difference) 14 arises from the optical path length difference between the frequency-swept reference light 18 and the reflected light 19. This produces a slight difference 15 between the frequencies and phases of the reference light 18 and the reflected light 19, and an interference fringe consisting of the difference frequency and the difference phase is generated.
  • FIG. 3 is another diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device. As shown in FIG. 3, when the frequency sweep bandwidth 21 is widened as indicated by the dotted line 22, the interference fringe frequency 23 increases as indicated by the dotted line 24, even for the same optical path difference 25.
  • The frequency of the interference fringes is detected as a spectrum (complex signal) on the frequency axis.
  • The position of the spectrum on the frequency axis is proportional to the optical path difference between the reflected light and the reference light from the light source (point light source) 1 to the light receiving element 8 in FIG. 1. The distance from each light receiving element of the imaging element 8 to the reflection point 6 can thus be detected.
  • The resolution of the spectrum (the width of a single spectrum) is determined by the waveform obtained by Fourier transforming the envelope of the frequency sweep.
  • When the frequency sweep bandwidth of the interference fringes in FIG. 3 is widened as indicated by the dotted line, the number of spectra after the Fourier transform for a given optical path difference increases, so the resolution in the light receiving direction can be increased.
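  • For orientation (a standard relation for linear frequency sweeps, not a figure stated in the patent): the resolution in the light receiving direction is approximately δz = c/(2Δf), where Δf is the sweep bandwidth. A sweep bandwidth of Δf = 100 THz, for example, gives δz = 3×10⁸ / (2×10¹⁴) m = 1.5 µm, i.e. the micron-order resolution referred to below.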
  • the above-described processing can also be applied as it is when amplitude modulation of incoherent light is frequency-swept.
  • The reference light Es and the reflected light Er can be represented by the following equations (1) and (2), respectively.
  • Es = As·cos{2π[f0 + (Δf/2T)t]t + φ0}   (1)
  • Er = Ar·cos{2π[f0 + (Δf/2T)(t − td)](t − td) + φ0}   (2)
  • Δf is the frequency sweep bandwidth
  • T is the sweep time
  • f0 is the sweep start frequency
  • φ0 is the initial phase at the start of the sweep
  • t is the time
  • td is the time difference (optical path difference) between the reference light and the reflected light
  • As is the amplitude of the reference light
  • Ar is the amplitude of the reflected light
  • From the first term of equation (4), 2(Δf/2T)td is the frequency of the interference fringe signal, and it can be seen that the fringe frequency changes linearly as the time difference (optical path difference) td changes. From the second term of equation (4), 2π[(Δf/2T)td² + f0·td] is the initial phase of the interference fringe signal, which changes parabolically with respect to td.
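This relation can be checked numerically. The sketch below simulates equations (1) and (2) and recovers the fringe frequency 2(Δf/2T)·td by Fourier transform; the sweep parameters are illustrative values chosen so the simulation stays well sampled, not values from the patent.

```python
import numpy as np

f0, df, T = 0.0, 2.0e4, 1.0   # sweep start [Hz], sweep bandwidth [Hz], sweep time [s]
td = 1.0e-3                   # time difference (optical path difference) [s]
t = np.linspace(0.0, T, 100_000, endpoint=False)

Es = np.cos(2 * np.pi * (f0 + (df / (2 * T)) * t) * t)                # eq. (1), As = 1
Er = np.cos(2 * np.pi * (f0 + (df / (2 * T)) * (t - td)) * (t - td))  # eq. (2), Ar = 1

# Mixing the two waves; the low-frequency (difference) term is the fringe signal.
fringe = Es * Er
spectrum = np.abs(np.fft.rfft(fringe * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=T / t.size)

print(freqs[np.argmax(spectrum[1:]) + 1])  # measured fringe frequency, ~20 Hz
print(2 * (df / (2 * T)) * td)             # predicted 2(Δf/2T)·td = 20 Hz
```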
  • When the envelope of the frequency sweep is a square wave, side lobes of the sinc function, which is the Fourier transform of a square wave, are generated. If the envelope is Gaussian (a Gaussian function), the side lobes can be suppressed, but the resolution is slightly lowered, so the sweep bandwidth is widened accordingly.
  • The three-dimensional point spread function (three-dimensional PSF (Point Spread Function)) obtained when the interference fringe signal of the reflection point 6 is detected by the single light receiving element m shown in FIG. 4 is the point spread function in the light receiving direction 7, extended spherically over the directivity range of the light receiving element m.
  • If the interference fringe signal obtained by each light receiving element, or the complex signal obtained by Fourier transforming it, is archived as RAW data together with the degree of coherence of the light source, the band characteristics of the frequency sweep, the directivity, number, and array spacing of the light receiving elements, the three-dimensional coordinates of the emission positions of the illumination light and the reference light with respect to the light receiving surface of the imaging element, and information on the subject, various kinds of processing can be performed later using the phase information.
  • The light receiving direction can be resolved by performing a Fourier transform.
  • The principle of resolving the light receiving direction by Fourier transform processing is the same as that of pulse compression in radar: the phases of the frequency components of the reflected light are matched and added (passed through a phase-matching filter), as if micron-order light pulses were transmitted and received like radar pulses, and the light receiving direction is thereby resolved.
  • the Fourier transform of the interference fringe signal yields the point spread function in the light receiving direction, and the full width at half maximum of the point spread function is the resolution in the light receiving direction.
  • the sampling interval in the light receiving direction is set smaller than the resolution. Therefore, the number of pixels in the light-receiving direction is obtained by dividing the resolution range in the light-receiving direction by the sampling interval.
  • the interference fringe signal can be detected from the relationship of the Fourier transform pair while satisfying the sampling theorem.
  • To shorten the detection time, the frequency sweep time is shortened, and an imaging element 8 with a correspondingly high frame rate is used.
  • the imaging device 8 is basically capable of global shutter operation.
  • the sweep time for light source 1 is set to 16.7 seconds. Due to the long detection time, it is applied to three-dimensional resolution and shape measurement of stationary objects.
  • the detection time when using a commercially available high-speed imaging device is 1 second.
  • the imaging time is 50 ms. For this reason, the application to a moving subject is expanded.
  • the imaging time can be further shortened by imaging with a plurality of imaging elements at different timings by means of a multi-plate prism.
  • Because the imaging time of the imaging element is shortened, it might seem that sufficient sensitivity cannot be obtained.
  • However, the sensitivity is improved in proportion to the number of pixels in the light receiving direction; in other words, the SN ratio of a single spectrum is improved by the square root of the number of pixels.
  • Furthermore, the sensitivity is improved in proportion to the number of light receiving elements of the virtual lens 35 of FIG. 4. As a result, the sensitivity becomes almost the same as that obtained with the shutter operation of an imaging element using an optical system, and no problem occurs.
  • FIG. 4 is a diagram showing a configuration from reflection points to two-dimensional filtering.
  • The interference fringes generated by combining the reflected light and the reference light in the combining unit 32 are received by the light receiving elements 33-1 to 33-n of the imaging element.
  • the detected interference fringe signal is stored in memory 5 (FIG. 1).
  • The interference fringe signals corresponding to the aperture of the virtual lens 35 are read out from the memory, and the Fourier transform processing 34 (11 in FIG. 1) is performed.
  • The light receiving direction of each of the light receiving elements 33-1 to 33-n is thereby resolved, and three-dimensional data strings 36-1 to 36-n of complex signals in the light receiving direction are obtained.
  • The three-dimensional data strings 36-1 to 36-n of the complex signals in the light receiving direction are processed by the two-dimensional filtering 37.
  • From them, the complex signals of the pixels matching the optical path length from the reflection point 31 to each of the light receiving elements 33-1 to 33-n are extracted.
  • The reflection point 31 can be resolved by matching the phases to the complex signal at the center position of the imaging aperture and adding them. This processing is performed for all reflection points (pixels) in the object space, and the object is thereby resolved three-dimensionally.
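A minimal sketch of this extract, phase-match, and add operation for a single reflection point follows, assuming the complex data strings and the per-element optical path differences (the address-generation result described later) are already available. Linear interpolation stands in for the low-pass-filter interpolation of FIG. 5, the quadratic term of the initial phase is neglected, and all names are illustrative.

```python
import numpy as np

def resolve_point(volume, path_diff, dz, k0):
    """Delay-and-sum over the imaging aperture for one reflection point.

    volume:    (n_elem, n_depth) complex data strings of the aperture
    path_diff: (n_elem,) optical path difference [m] for this reflection
               point at each element (reflected path minus reference path)
    dz:        sampling interval in the light-receiving direction [m]
    k0:        2*pi/lambda0, used for the linear part of the
               initial-phase matching (the quadratic term is omitted)
    """
    pos = path_diff / dz                      # fractional depth address
    i0 = np.floor(pos).astype(int)
    frac = pos - i0
    rows = np.arange(volume.shape[0])
    # data interpolation in the light-receiving direction (linear here)
    samples = (1 - frac) * volume[rows, i0] + frac * volume[rows, i0 + 1]
    # initial-phase matching: undo the phase accumulated over each
    # element's path difference, then add over the aperture
    return np.sum(samples * np.exp(-1j * k0 * path_diff)) / len(rows)
```

Repeating this for every voxel, with aperture weighting to suppress side lobes, corresponds to the per-pixel convolution of filter coefficients described in this section.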
  • FIG. 21(a) is another diagram showing the configuration from the reflection point to the two-dimensional filtering process.
  • Reflected light P1 to Pn from one reflection point can be expressed by the above equation (5).
  • R is the reference beam
  • Lp is the coefficient of the low-pass filter by the light receiving element
  • F is the Fourier transform of the light receiving direction.
  • Expression (6) represents three-dimensional resolution processing in which a plane perpendicular to the optical axis is resolved by an imaging lens and the light receiving direction is resolved by Fourier transform processing.
  • As described above, the frequency of the interference fringes changes linearly with the time difference td, while the initial phase of the fringe signal, 2π[(Δf/2T)td² + f0·td], changes parabolically with respect to td.
  • This initial phase matching is performed when the complex signals of the pixels matching the optical path length from the reflection point 31 to each of the light receiving elements 33-1 to 33-n are extracted and added.
  • The initial phase matching is performed together with the data interpolation processing by the low-pass filters 42-1 to 42-n shown in FIG. 5, described later.
  • In the filter coefficient generator 50 shown in FIG. 5, described later, the data interpolation coefficients and the complex phase-matching coefficients are multiplied together to generate the coefficients 47-1 to 47-n of the low-pass filters 42-1 to 42-n.
  • FIG. 5 is a diagram for explaining the processing operation of the two-dimensional filtering process 37.
  • The data strings 36-1 to 36-n of the complex signals in the light receiving direction in FIG. 4 are stored in the line memories 41-1 to 41-n in FIG. 5. From the line memories 41-1 to 41-n, the complex signals 48-1 to 48-n stored at the addresses corresponding to the optical path lengths from the reflection point 31 in FIG. 4 to the respective light receiving elements 33-1 to 33-n are read out.
  • The low-pass filters 42-1 to 42-n perform data interpolation in the light receiving direction on the complex signals 48-1 to 48-n, together with the phase matching described above. The adder 49 then performs the addition.
  • The accuracy of the data interpolation should be 1/16 or less of the resolution in the light receiving direction.
  • Spline interpolation is preferable for the data interpolation, but linear interpolation using neighboring data is also sufficient.
  • Data near the complex signal that matches the optical path length from the reflection point 31 to each of the light receiving elements 33-1 to 33-n are read out from the line memories 41-1 to 41-n. The read data are input to low-pass filters 42-1 to 42-n for data interpolation.
  • The coefficients 47-1 to 47-n of the filters for data interpolation and phase matching are generated by the filter coefficient generator 50 according to the addresses 44-1 to 44-p. In order to suppress side lobes, the addition may be performed after multiplying by a correction weighting factor; this multiplication is carried out by the filter coefficient generator 50, which folds the weights into the filter coefficients 47-1 to 47-n of the low-pass filters 42-1 to 42-n.
  • These addresses 44-1 to 44-p are generated by calculation, read from a lookup table computed in advance, or produced by a combination of these methods, considering the balance between calculation time and memory size.
  • The optical path lengths of the reflected light and the reference light to each of the light receiving elements 33-1 to 33-n depend on the optical system arranged in the optical path, including the positions of the light receiving elements 33-1 to 33-n in FIG. 4 and the shape and position of the reflecting mirror. The optical path lengths of the reflected light and the reference light are therefore calculated accurately in the address generator 45 and reflected in the addresses 44-1 to 44-p.
  • the optical path lengths of the reflected light and the reference light in the configuration of FIG. 1 are calculated.
  • the position of the center of the light receiving surface of the image sensor 8 is the origin (0, 0, 0) of the three-dimensional coordinates
  • the direction perpendicular to the paper surface is the X-axis
  • the vertical direction is the Y-axis
  • the direction of the optical axis 9 is the Z-axis.
  • The optical path length of the reflected light is obtained by folding the position of the light source 1 across the reflecting surface of the beam splitter 2: the optical path length from the folded position (0, 0, s) of the light source 1 on the optical axis 9 to the position (x, y, z) of the reflection point 6 is added to the optical path length from the position (x, y, z) of the reflection point 6 to the position (dx, dy, 0) of each light receiving element of the imaging element 8.
  • The optical path length of the reflected light is thus given by equation (7) below.
  • [x² + y² + (z − s)²]^(1/2) + [(x − dx)² + (y − dy)² + z²]^(1/2)   (7)
  • The optical path length of the reference light is obtained by folding the position of the light source 1 across the reflecting surface of the mirror 4 and then across the reflecting surface of the beam splitter 2; it is the distance from the folded position (0, 0, r) of the light source 1 on the optical axis 9 to the position (dx, dy, 0) of each light receiving element of the imaging element 8.
  • The optical path length of the reference light is given by equation (8) below.
  • [dx² + dy² + r²]^(1/2)   (8)
  • In this way, the optical path lengths of the reflected light and the reference light can be easily calculated, and the optical path difference between them can therefore be calculated.
  • The value obtained by dividing the optical path difference between the reflected light and the reference light by the sampling interval in the light receiving direction corresponds to the pixel address when the light receiving direction is resolved by the Fourier transform processing 11. In this way, the addresses 44-1 to 44-p can be generated.
  • The data interpolation by the low-pass filters 42-1 to 42-n can also be used to convert the pixels into a cubic or uniformly spaced three-dimensional pixel array; the addresses 44-1 to 44-p are generated accordingly.
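Equations (7) and (8) and the address rule above translate directly into code. The sketch below follows the coordinate convention of FIG. 1; the function name and the fractional-address return value are illustrative.

```python
import numpy as np

def fringe_address(x, y, z, dx, dy, s, r, dz_sample):
    """Pixel address in the light-receiving direction for the reflection
    point (x, y, z) as seen by the light-receiving element at (dx, dy, 0).
    s, r: folded light-source positions on the optical axis for the
    illumination and reference paths; dz_sample: sampling interval [m]."""
    # eq. (7): illumination path from the folded source (0, 0, s) to the
    # reflection point, plus the reflected path back to the element
    reflected = (np.sqrt(x**2 + y**2 + (z - s)**2)
                 + np.sqrt((x - dx)**2 + (y - dy)**2 + z**2))
    # eq. (8): reference path from the folded source (0, 0, r)
    reference = np.sqrt(dx**2 + dy**2 + r**2)
    # optical path difference divided by the sampling interval gives the
    # (fractional) pixel address in the light-receiving direction
    return (reflected - reference) / dz_sample
```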
  • In principle, three-dimensional resolution can also be achieved by Fourier transforming the reflected light from the object space in three dimensions.
  • The Fourier transform can greatly reduce the total number of multiplications, thanks to the butterfly operation, when the characteristics of the filter applied after the transform are constant (a space-invariant filter).
  • In this embodiment, however, the one-dimensional Fourier transform processing in the light receiving direction is combined with the two-dimensional filtering 12, in which the filter coefficients are optimized for each three-dimensional pixel and convolution is performed.
  • The two-dimensional filtering 12 is equivalent to two-dimensionally Fourier transforming the reflected light.
  • FIGS. 6(a) to 6(g) are diagrams explaining the arrangement interval and directivity of the light receiving elements.
  • FIGS. 6(a) to 6(g) show a Fourier transform pair in which the reflected light is received by a one-dimensional array of light receiving elements and Fourier transformed in the array direction; the arrangement interval and directivity are explained on this basis.
  • the y-axis indicates the position in the arrangement direction
  • the Y-axis indicates the position of the focal plane obtained by Fourier transforming the y-axis.
  • FIG. 6(a) shows the light receiving sensitivity distribution 51 when reflected light from a reflection point on the optical axis is received through an aperture 52.
  • The light receiving sensitivity distribution 51 is the product of the set aperture 52 and the directivity of the single light receiving element (its light sensitivity distribution on the focal plane) 53. Setting the aperture 52 beyond the range of the directivity 53 is therefore meaningless.
  • The maximum detectable resolution is determined by the directivity 53 of the light receiving element.
  • The directivity 53 is formed by the aperture of the single light receiving element and its microlens.
  • Since the directivity 53 of the single light receiving element always points in the direction of the optical axis, the waveform of the light receiving sensitivity distribution 51 changes when the reflection point to be detected moves away from the optical axis.
  • The waveform in FIG. 6(a) shows the case where the reflection point is on the optical axis.
  • FIG. 6(b) shows a light-receiving element arrangement with an interval P.
  • FIG. 6(c) shows the sensitivity distribution on the light receiving surface of the single light receiving element.
  • FIG. 6(d) shows a point spread function (resolution is full width at half maximum) 54 on the focal plane obtained by Fourier transforming the light sensitivity distribution 51 .
  • FIG. 6(e) shows diffraction poles caused by the arrangement of the light receiving elements. The pole spacing is 1/P.
  • FIG. 6(f) shows the directivity (light sensitivity distribution) 53 on the focal plane of a single light receiving element formed of microlenses (formed by Fourier transform of the microlenses).
  • The actual numerical values on the Y-axis are obtained by multiplying the reciprocal of the focal length by a coefficient proportional to the center wavelength, but they are omitted from the figure because they are not directly related to this description.
  • From the Fourier transform pair relationship, the resolution 57 is proportional to the reciprocal of the aperture width of the light receiving sensitivity distribution 51.
  • the resolution (numerical aperture) that can be synthesized is determined by the directivity 53 of the light receiving element.
  • the directivity of the microlens is set according to the desired resolution.
  • The directivity 53 of the single light receiving element multiplies the array response, thereby eliminating the diffraction from the ±second main poles 55 and higher (which would cause ghost images).
  • To achieve this, the diffraction pole spacing 1/P must be set greater than the position 56 where the directivity 53 becomes null (0); in other words, the array interval P of the light receiving elements must be set smaller than the resolution.
  • the array interval of the light receiving elements must be 1 ⁇ m or less.
  • the manufacturing limit of the pixel interval of the imaging device is currently slightly below 1 ⁇ m.
  • the directivity of the microlenses can be controlled in the manufacturing process.
  • Among the higher-order poles, the ±second main poles 55 are the closest to the optical axis.
  • Since the directivity 53 of the light receiving element always points in the optical axis direction, it is necessary to set the array interval P small, or to narrow the angle of view, so that the ±second main poles 55 do not enter the directivity 53 of the light receiving element.
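  • For orientation, what the text calls the ±second main poles correspond to the grating lobes of a periodic array, whose direction follows the textbook relation sin θ = λ/P (standard diffraction theory, not a figure from the patent). With P = 1 µm and λ = 0.8 µm, sin θ = 0.8 (θ ≈ 53°), so the lobes fall well outside a modest angle of view; a larger P or a shorter λ pulls them inward, which is why P must stay below the resolution.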
  • the Fourier transform processing 11 (Fig. 1) is the same as orthogonally detecting the interference fringe signal for each frequency component.
  • the carrier (carrier wave component) of the interference fringes disappears, and a complex signal of the point spread function in the light receiving direction is obtained.
  • the frequency band becomes narrower as the interference fringe carrier disappears, and becomes the bandwidth of the envelope of the point spread function.
  • the arrangement interval of the light-receiving elements required for the two-dimensional filtering 12 (FIG. 1) that is performed after converting the light into a complex signal can be on the order of microns, which is less than half the resolution.
  • In an optical imaging system, by contrast, a surface accuracy as high as 1/16 of the wavelength of light or less (on the order of several tens of nanometers) is required.
  • The imaging lens is a very good two-dimensional Fourier transformer that can form an image instantly and does not require processing time as two-dimensional filtering does.
  • However, switching the focal position, aperture, magnification, and so on, or correcting disturbance of the optical wavefront, requires a complicated optical system and mechanism, and switching between settings takes time.
  • The two-dimensional filtering, on the other hand, can switch these electrically, optimize them for each pixel, restore degraded resolution, and extend the depth of field at high resolution.
  • FIG. 7 is a diagram for explaining a configuration for recovering the resolution deteriorated by the disturbance of the optical wavefront by two-dimensional filtering.
  • First, the aperture is divided into a plurality of blocks 61-1 to 61-m to 61-n.
  • Interference fringe signals corresponding to each block are read out from the memory 5 (FIG. 1) and subjected to Fourier transforms 62-1 to 62-n.
  • Next, two-dimensional filtering 63-1 to 63-n is performed for each block, and the complex signals of a total of five pixels, namely the pixel of the reflection point 66 and several pixels before and after it in the direction of the principal ray 67 of each block, are detected.
  • Cross-correlation processing 64-1 to 64-n is then performed between the 5-pixel complex signal detected in each block and the 5-pixel complex signal of the central block 61-m to detect the deviation of the optical path length.
  • The cross-correlation processing 64-1 to 64-n is performed by convolving the complex conjugate of the 5-pixel signal of the central block 61-m with the 5-pixel complex signals of the other blocks.
  • In this superposition integration, the 5-pixel data are interpolated so that the detection accuracy of the peak position indicating the optical path length deviation is 1/16 or less of the resolution in the light receiving direction.
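A sketch of this interpolated cross-correlation is given below, assuming the 5-pixel complex signals of the central block and one other block are available. The correlation peak is searched on a lag grid 16 times finer than one sample, matching the 1/16-resolution accuracy target; band-limited evaluation via the DFT shift theorem is one possible interpolation, and all names are illustrative.

```python
import numpy as np

def path_deviation(center_sig, block_sig, dz, upsample=16):
    """Optical-path-length deviation of a block relative to the centre.

    center_sig, block_sig: complex 5-pixel signals in the light-receiving
    direction; dz: sampling interval in the light-receiving direction [m].
    """
    n = len(center_sig)
    # spectrum of the circular cross-correlation: superposition of the
    # central block's complex conjugate on the other block's signal
    prod = np.fft.fft(block_sig) * np.conj(np.fft.fft(center_sig))
    freqs = np.fft.fftfreq(n)
    # evaluate the correlation on a fine (sub-sample) lag grid
    lags = np.arange(-(n // 2), n // 2 + 1.0 / upsample, 1.0 / upsample)
    corr = [abs(np.sum(prod * np.exp(2j * np.pi * freqs * lag)))
            for lag in lags]
    return lags[int(np.argmax(corr))] * dz   # deviation in path length
```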
  • When the disturbance of the optical wavefront is large, it is dealt with by increasing the number of pixels (beyond five) used for the cross-correlation processing.
  • the number of blocks is increased in order to increase the number of samples.
  • the number of blocks may be doubled by applying Gaussian weighting to the outputs of the light receiving elements of the blocks and overlapping the apertures.
  • The deviations of the optical path length between the central block and each block detected by the cross-correlation processing 64-1 to 64-n represent the disturbance of the optical wavefront.
  • The data interpolating unit 66 interpolates the per-block optical path length deviations so that they correspond to the individual light receiving elements 33-1 to 33-n in FIG. 4. The result is sent to the address generator 45 for optical path length matching shown in FIG. 5 and reflected in (added to) the addresses 46.
  • In this way, two-dimensional filtering can be performed with the disturbance of the optical wavefront corrected.
  • When the imaging aperture is large, the correlation between the central block and the blocks at the ends of the aperture weakens.
  • In that case, correlation processing is first performed between the central first block and the adjacent second block, which is highly correlated with it; next between the second block and the third block; and so on, shifting sequentially outward while accumulating the optical path length deviations.
  • Although detection errors also accumulate, when the resolution in the light receiving direction is on the micron order and the SN ratio is 40 dB or more, performing data interpolation on the complex signal in the light receiving direction and then cross-correlation gives a deviation detection accuracy as high as several nanometers, so the accumulated error is negligible. It is desirable to select the correlation processing method in consideration of the numerical aperture and the SN ratio.
  • The disturbance of the optical wavefront caused by aberrations of the intermediate optical system is gentle, and its spatial frequency components are low.
  • When the disturbance of the light wavefront becomes stronger, its spatial frequency components increase.
  • In that case, the number of blocks is increased in accordance with the sampling theorem.
  • However, the number of blocks and the numerical aperture (NA) of each block are in a trade-off relationship: increasing the number of blocks reduces the accuracy of the cross-correlation.
  • Therefore, the disturbance of the optical wavefront is characterized statistically for each type of subject, and, with the SN ratio as a constraint, the combination of the number of blocks, the numerical aperture, and the cross-correlation pixel range is solved in advance as a combinatorial optimization problem for each subject; correction is then performed after switching to the optimum balance for that subject.
  • Alternatively, the optimal combination for each subject may be found by annealing and iteration, using the extent of the OTF of the image after two-dimensional filtering as an index, and correction may be performed after switching to the optimum balance for each subject.
  • the principle of correcting the disturbance of the optical wavefront of this embodiment is basically the same as that of adaptive optics used in astronomy.
  • In astronomical adaptive optics, a guide star (point image) is set by irradiating the sodium atomic layer at an altitude of 90 km with a laser beam, exciting the sodium so that it glows.
  • In this embodiment, a point image could likewise be set on the surface of the subject using infrared light or the like; however, the method of this embodiment detects the disturbance of the light wavefront by cross-correlation processing using the signal of the subject itself, so it is not necessary to set a guide-star-like point image in the object space.
  • the number and size of blocks 61-1 to 61-n in FIG. 7, which correspond to wavefront sensors and wavefront controllers of adaptive optics, can be appropriately set according to the application. Then, the balance between them can be optimized using processing such as an optimization problem.
  • Three-dimensional complex signal data of 5×5×5 pixels centered on the detection point are detected by the two-dimensional filtering of each of the blocks 61-1 to 61-m to 61-n.
  • Six-axis (x, y, z, θx, θy, θz) cross-correlation processing using the three-dimensional complex signals is performed between blocks. Based on the result, the light receiving positions of the light receiving elements are corrected in addition to correcting the disturbance of the light wavefront, after which the two-dimensional filtering is performed.
  • To generate an RGB image, the interference fringe signals 71 corresponding to the aperture are read out sequentially or in parallel from the memory 5 in FIG. 1.
  • The read signals are Fourier transformed by the FFT 72 over the visible light band 81a shown in FIG. 9, generating a W (White) complex signal in which the light receiving direction is resolved.
  • FIG. 9 is a diagram showing wavelength bands of various spectrum images generated by Fourier transform.
  • The W band may also be generated so as to include the near-infrared region 82 shown in FIG. 9, where living tissue is highly transparent.
  • the W, R, and B complex signals are each subjected to two-dimensional filter processing, and three-dimensional resolution of W, R, and B is performed.
  • At this time, chromatic aberration (wavelength-dependent differences in optical path length) can be corrected, and the pixels may be converted into a cubic array of pixels.
  • Each FFT and each two-dimensional filtering shown in FIG. 8 has the same function as the Fourier transform processing 11 and the two-dimensional filtering 12 described with reference to FIG. 1.
  • matrix conversion is performed by the matrix converter 75 in FIG. 8 to generate three-dimensionally resolved RGB signals.
  • images are displayed according to the purpose, such as surface images, cross-sectional images, transmission images, and three-dimensional constructed images by CG.
  • the R signal 83 and the B signal 84 in FIG. 9 have narrower wavelength bandwidths than the W signal and different center wavelengths. Therefore, the resolution of the light receiving direction of the R signal 83 and the B signal 84 is about 1/3 of that of the W signal. However, since the resolution of the human eye for R and B is also about 1/3, there is no problem.
  • Broadband swept light can be regarded as a linear sum of a plurality of swept lights, such as R, G, B, and infrared.
  • All of the processing is linear, including illumination, reflection, interference with the reference light, detection of the interference fringes, and the Fourier transform. By the principle of superposition, therefore, extracting the swept-frequency portions corresponding to the R and B bands from the interference fringe signal and Fourier transforming them gives the same result as performing the optical interference image processing with independent R and B swept light sources.
  • Furthermore, an XYZ complex signal can be obtained by multiplying the swept visible-light-band interference fringe signal by the XYZ color matching functions and performing the Fourier transform.
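Because the sweep maps optical frequency onto time, the band extraction described above amounts to windowing the fringe signal in time before the Fourier transform. A sketch for one detection position follows; the band edges and the Hanning window (standing in for a Gaussian-like envelope) are illustrative assumptions.

```python
import numpy as np

def band_complex_signal(fringe, band_start, band_stop):
    """fringe: (nt,) fringe signal of one element over the full sweep.
    band_start, band_stop: fractional positions within the sweep of the
    desired wavelength band (e.g. the R or B portion of the visible sweep).
    Returns the complex signal of that band in the light-receiving direction."""
    nt = len(fringe)
    lo, hi = int(band_start * nt), int(band_stop * nt)
    w = np.zeros(nt)
    w[lo:hi] = np.hanning(hi - lo)   # envelope to suppress side lobes
    return np.fft.rfft(fringe * w)

# An XYZ complex signal, as described above, would instead weight the
# sweep with the XYZ colour-matching functions mapped onto the time axis.
```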
  • The reflection spectrum in the visible light band is changed mainly by absorption at wavelengths that excite outer-shell electrons of atoms, absorption at wavelengths that excite molecular vibrations, spins, and intermolecular vibrations, and by diffraction scattering due to the arrangement of refractive indices.
  • Good results can be obtained by using statistical analysis, such as multivariate analysis or deep-learning AI, as the method of identifying such clusters.
  • The procedure for identifying two clusters by such a method is described below.
  • The additional information 70 includes normalization information for reducing the variance of the clusters to be identified so that the clusters are easier to distinguish, addresses for cutting out the substance to be identified from the image, and other information necessary for generating the image.
  • the information for the normalization process is the brightness of the illumination light, the wavelength band characteristics of the illumination light, and so on.
  • An expert who can identify a substance observes an RGB image or a spectral analysis image, which will be described later, and designates the extraction address using a mouse or the like.
  • An image may be generated by an external computer and then specified.
  • the information necessary to generate an image corresponds to the frequency sweep band, linearity, arrangement interval and directivity of the light receiving elements, and the like.
  • the interference fringe signal is read out from the recording device, and the acquired data is normalized by the computer. After that, Fourier transform processing and two-dimensional filtering processing are performed to generate a three-dimensional image.
  • the image portion of the substance to be identified is cut out from the 3D image according to the cutout address.
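  • the read–normalize–transform–crop sequence just described can be summarized in a short sketch. The tag keys, the placeholder filter, and the address format are hypothetical names for illustration, not the embodiment's actual interfaces.

```python
import numpy as np

def two_dimensional_filter(volume):
    # placeholder for the 2-D filtering (electrical imaging) described
    # in the text; identity here so the sketch runs end to end
    return volume

def reconstruct_and_crop(fringes, tag, cutout_address):
    """fringes: (ny, nx, n_sweep) interference fringe signals read from
    the recording device; tag: additional information 70 (normalization
    data); cutout_address: (y, x, z) indices of the substance region."""
    data = fringes / tag["illumination_brightness"]  # normalization
    volume = np.fft.fft(data, axis=-1)               # resolve light receiving direction
    volume = two_dimensional_filter(volume)          # resolve the lateral plane
    y, x, z = cutout_address
    return volume[y, x, z]                           # cut out the target pixels
```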
  • the complex signal in the light receiving direction of the extracted pixel is mainly the complex signal of the reflected light from the object surface.
  • propagation attenuation is large, so effectively only reflection from the surface of the object is detected.
  • the pixel data in the target light receiving direction is three-dimensionally cut out.
  • when the subject is a living body, the attenuation during propagation through the living body varies greatly depending on the wavelength, and the attenuation of the tissue in the propagation path is superimposed on the detected spectrum.
  • for this reason, the spectrum analysis is mainly performed on the image of the object surface, except for objects with high transparency.
  • a computer performs the FS (Foley-Sammon) transformation on a large amount of multispectral data of the two substances to be identified, in a multidimensional coordinate space with each spectral component as an orthogonal axis.
  • the FS transform is an orthogonal transform that calculates the feature axes that maximize the Fisher ratio of the two clusters, in descending order. As with data compression, it is possible to narrow the result down to at most 5 to 6 feature axes based on cumulative contribution rates and experience.
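  • a minimal sketch of the first Foley-Sammon step, which is the classical Fisher linear discriminant axis; the remaining orthogonal axes and the stopping rule based on cumulative contribution rates are omitted, and the toy spectra are assumptions.

```python
import numpy as np

def fisher_axis(a, b):
    """First FS feature axis: the direction maximizing the Fisher ratio
    of clusters a and b, each (n_samples, n_spectral_components)."""
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)  # within-class scatter
    sw += 1e-6 * np.eye(sw.shape[0])                        # regularize the solve
    d = np.linalg.solve(sw, mu_a - mu_b)
    return d / np.linalg.norm(d)

rng = np.random.default_rng(0)
spectra_a = rng.normal(0.0, 1.0, (200, 32))  # toy multispectral clusters
spectra_b = rng.normal(0.5, 1.0, (200, 32))
eu1 = fisher_axis(spectra_a, spectra_b)
scores = spectra_a @ eu1                     # projection onto EU1
```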
  • the number of AI input terminals is the 5 to 6 feature axes narrowed down by the FS transform. Therefore, the scale of the AI, including the number of layers, becomes much smaller.
  • a feature of identification by AI is that identification by the nonlinear partition Z becomes possible, as shown in FIG.
  • the interference fringe signal 71 is multiplied, by the multipliers 79-1 to 79-n, by the matrix conversion coefficients 77-1 to 77-n sent from the computer to the control unit 78 and stored there. As a result, the interference fringe signal 71 is projectively transformed onto the characteristic axes EU1 to EU6.
  • FIGS. 10A and 10B are diagrams for explaining the nonlinear partitioning of a substance to be identified by AI.
  • the interference fringe signals projectively transformed onto the characteristic axes EU1 to EU6 are subjected to Fourier transform processing and two-dimensional filtering to generate spectral analysis images, which are images of the characteristic axes EU1 to EU6.
  • the top spectral analysis images EU1, EU2, and EU3 may be assigned to Y, I, and Q (the component method used in NTSC internal processing) in descending order of visual sensitivity, and displayed after matrix conversion to RGB. The observer's visual brain then performs the nonlinear discrimination.
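  • the YIQ-to-RGB step mentioned above uses the standard NTSC conversion matrix; a sketch assigning the top three feature-axis images to Y, I, and Q follows (the eu arrays are assumed to be spectral analysis images already scaled to a displayable range).

```python
import numpy as np

YIQ2RGB = np.array([[1.0,  0.956,  0.621],   # standard NTSC YIQ -> RGB
                    [1.0, -0.272, -0.647],
                    [1.0, -1.106,  1.703]])

def feature_images_to_rgb(eu1, eu2, eu3):
    """Assign the EU1/EU2/EU3 images (2-D arrays) to Y/I/Q and convert
    to RGB for display."""
    yiq = np.stack([eu1, eu2, eu3], axis=-1)
    rgb = yiq @ YIQ2RGB.T
    return np.clip(rgb, 0.0, 1.0)
```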
  • the spectrum analysis image may be input to the AI 80 for each pixel, and the two substances may be identified for each pixel.
  • the AI neuron coefficients 76 are loaded in advance into the AI 80 from the computer via the control unit 78.
  • the results identified by the AI 80 may be displayed fused with the RGB image by pseudo-coloring the pixel portions of the identified substances.
  • the above-described identification by the FS transform is a method for identifying two clusters at a time.
  • when identifying multiple substances, the characteristic axes are therefore switched each time. Even if the switching is performed multiple times in a tree-like combination, the tree-like identification can be performed at high speed because the characteristic axes are narrowed down to 5 to 6 and the circuit scale of the AI 80 is small.
  • multispectral waveforms of multiple substances to be specified may be directly learned (supervised) by AI to specify multiple substances.
  • in that case, the AI requires as many input terminals as there are spectral components, so the scale of the AI increases.
  • FIG. 11 is a diagram for explaining a case where the linearity of the frequency sweep of the laser light source is distorted.
  • frequency modulation (frequency dispersion) 104 occurs in the interference fringe signal due to the optical path difference 103 between the reflected light 101 and the reference light 102, as shown in FIG. 11. This widens the spectral width after the Fourier transform and reduces the resolution. Moreover, when the optical path difference 103 changes, the frequency modulation 104 also changes.
  • the frequency modulation 104 caused by such sweep distortion can be corrected by performing phase-matched filtering on the spectral dispersion after the Fourier transform.
  • phase matching is performed by using FIR filters (Finite Impulse Response filters) 86-1 to 86-n as shown in FIG. 5.
  • the length of the FIR filter is set to allow for the range of frequency dispersion after Fourier transformation.
  • the coefficients 87-1 to 87-n (FIG. 5) of the phase matching filter are switched and multiplied for each pixel in the light receiving direction.
  • a complex conjugate signal generated by Fourier transforming the frequency modulation 104 shown in FIG. 11 is used for the coefficients 87-1 to 87-n (FIG. 5) of the phase-matched filter. Changes in the distortion of the linearity of the light source frequency sweep are detected at appropriate intervals, and the FIR coefficient generator 88 in FIG. 5 updates the coefficients accordingly.
  • the phase-matched filter coefficients 87-1 to 87-n may also be added to the additional information 70 via the control unit 78.
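  • as a minimal numerical check of the idea, consider the simplest case in which the sweep distortion produces a known, depth-independent phase error phi(t) (the embodiment instead switches FIR coefficients per depth pixel, because the modulation varies with the optical path difference). Multiplying the fringe by the conjugate phase collapses the dispersed spectrum back to a sharp peak; all numbers below are toy assumptions.

```python
import numpy as np

n = 2048
t = np.linspace(0.0, 1.0, n, endpoint=False)
f_beat = 200.0                               # ideal fringe frequency
phi_err = 25.0 * np.sin(2 * np.pi * 3 * t)   # sweep-nonlinearity phase (toy)

fringe = np.exp(1j * (2 * np.pi * f_beat * t + phi_err))

blurred = np.fft.fft(fringe)                        # dispersed depth peak
sharp = np.fft.fft(fringe * np.exp(-1j * phi_err))  # phase-matched correction
print(np.abs(blurred).max() / np.abs(sharp).max())  # < 1: peak restored
```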
  • FIG. 12 is a diagram illustrating a configuration for detecting distortion in linearity of frequency sweep.
  • the light is split into wavelength components by a spectroscope 113, and an image is formed by an imaging optical system 115 on a one-dimensionally arrayed light receiving element (line sensor) 114 arranged in the spectral direction, which receives the light.
  • the light emitted from the light source 1 is imaged in a spot shape and moves on the one-dimensional light receiving element 114 according to the frequency sweep.
  • the reading of the one-dimensional light receiving element 114 is repeated multiple times to detect the movement of the spot light and the distortion of the sweep frequency.
  • a peak value detection circuit 115 interpolates the pixel data of the light receiving element to detect the peak value, thereby improving the accuracy of the position of the spot light.
  • the phase of the frequency modulation 104 (Fig. 11) is calculated in the FIR coefficient generator 88 (Fig. 5) using the time integration formula used when calculating the phase of FM modulation.
  • the position of the spot light detected by the one-dimensional light receiving element 114 is temporarily stored in the memory 116, converted into a frequency-modulated waveform, and sent to the FIR coefficient generator 88 in FIG. 5 to correct linearity distortion.
  • FIR filter coefficients 87-1 to 87-n are generated by computation and sent to FIR filters 86-1 to 86-n for correction.
  • the readout repetition frequency of the one-dimensional light receiving element 114 does not need to be high, because the distortion of the frequency sweep of the light source is gradual and data interpolation can reproduce the frequency sweep characteristic. The detection accuracy of the sweep distortion, however, must correspond to the resolution in the light receiving direction.
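  • the FM-phase computation used by the FIR coefficient generator 88 is the usual time integral phi(t) = 2π ∫ Δf(τ) dτ of the frequency deviation; a sketch follows, assuming the interpolated spot positions have already been converted into a frequency-deviation waveform.

```python
import numpy as np

def phase_from_frequency(freq_dev, dt):
    """phi(t) = 2*pi * cumulative integral of the frequency deviation,
    as in computing the phase of FM modulation.
    freq_dev: frequency deviation samples [Hz]; dt: sample spacing [s]."""
    return 2.0 * np.pi * np.cumsum(freq_dev) * dt

dt = 1e-6                                                      # assumed
freq_dev = 1e3 * np.sin(2 * np.pi * 3 * np.arange(2048) * dt)  # toy deviation
phi = phase_from_frequency(freq_dev, dt)                       # feeds coefficient generation
```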
  • alternatively, the complex signal of the reflected light from a reference reflection point is detected using an optical interferometer, and the complex conjugate signal obtained by Fourier transforming it can be used as the coefficients of the phase matching filter (87-1 to 87-n in FIG. 5).
  • a single light-receiving element may be used instead of the image sensor 8 (FIG. 1); a two-dimensional scanning mechanism detects the reflected light from the subject in two dimensions, and three-dimensional resolution is performed by Fourier transform processing and two-dimensional filter processing.
  • some single photodetectors have ultra-high sensitivity, and some can detect special wavelength bands other than visible light, so this configuration can be applied to three-dimensional imaging devices and inspection devices that use such wavelength bands.
  • a one-dimensional array of light-receiving elements is used in place of the imaging element 8 (FIG. 1), and scanning is performed by a one-dimensional scanning mechanism in a direction intersecting the array, so that the reflected light from the object is captured two-dimensionally.
  • three-dimensional resolution is performed by Fourier transform processing and two-dimensional filter processing.
  • line sensors include those with a large number of pixels, those with high sensitivity, and those that can detect special wavelength bands. This configuration can be applied to visual sensors for FA robots and the like.
  • This embodiment can resolve three dimensions without using an imaging optical system. However, as described above, by combining this embodiment with the imaging optical system, the number of processes of the two-dimensional filter can be reduced.
  • FIG. 13 is a diagram showing the configuration of an application example of this embodiment.
  • the direction of the chief ray 123 is resolved by Fourier transform processing.
  • the resolution of the plane perpendicular to the optical axis 121 is performed by the imaging optical system 122 .
  • using the obtained three-dimensional complex signal, it is possible, by two-dimensional filtering, to extend the depth of field of the imaging optical system 122 and to recover resolution degraded by disturbance of the optical wavefront. By correcting the disturbance of the optical wavefront, the aberration of the optical system can also be corrected.
  • reference numeral 4a denotes a reflecting mirror.
  • FIG. 14(a) shows the imaging light flux 126 when the imaging position of the reflection point 124 (FIG. 13) is in front of the imaging device 125 (FIG. 13) (on the subject side).
  • FIG. 14(b) shows the imaging light flux 127 behind (on the image side) the imaging element.
  • Dotted light beams 128-1 and 128-2 indicate light beams re-imaged by virtual lenses 129-1 and 129-2 by two-dimensional filtering, respectively.
  • by performing two-dimensional filtering on the pixels in the direction of the chief ray 123, it is possible to extend the depth of field and restore the resolution that has deteriorated due to the disturbance of the light wavefront described above.
  • when this embodiment is applied to a fundus imaging device, unnecessary reflections on the surfaces of the objective optical system and the eyeball optical system, and unnecessary reflections due to turbidity of the vitreous body, can be removed based on the difference in optical path length.
  • the aperture of the eyeball optical system which has conventionally been divided into rings for illumination and imaging in order to avoid unnecessary reflection from the eyeball optical system, can now be used entirely. Therefore, high-definition, high-contrast fundus imaging can be performed at high speed, and a three-dimensional tomographic image of the retina can be detected.
  • the imaging mechanism is scanned one-dimensionally to detect a tomographic image, and resolution and depth of field are expanded only in the optical axis direction 121 by Fourier transform processing and two-dimensional filter processing.
  • the number of image pickup elements required to expand the depth of field is 100 pixels or less. For this reason, the number of processes of the two-dimensional filter can be further reduced, and a tomogram with a high horizontal resolution and a deep depth of field can be detected in real time.
  • the frequency swept light source is replaced with a low coherence light source (for example, SLD: Super Luminescent Diode), and the interference fringe signal is separated by a spectroscope.
  • a frequency-swept interference fringe signal can be obtained in the same way as with a frequency-swept light source, and three-dimensional resolution can be performed by applying Fourier transform processing and two-dimensional filter processing to it.
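  • with a spectroscope the fringe is sampled per wavelength channel, so before the Fourier transform the samples are usually resampled onto a uniform wavenumber grid, after which the processing matches the swept-source case. A hedged sketch (array shapes assumed):

```python
import numpy as np

def depth_profile_from_spectrum(intensity, wavelengths):
    """intensity: one spectrometer line; wavelengths: channel centers [m].
    Resample onto uniform k = 2*pi/lambda, then FFT as with a swept source."""
    k = 2.0 * np.pi / wavelengths
    order = np.argsort(k)                 # k decreases as wavelength grows
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    resampled = np.interp(k_uniform, k[order], intensity[order])
    return np.fft.fft(resampled)
```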
  • FIG. 15 is a diagram showing a configuration using a low coherence light source in this embodiment.
  • broadband light emitted from a low coherence light source (SLD) 131 is reflected by a beam splitter 132 and illuminates an object 133 .
  • Reflected light from the reflection point 130 is imaged by an objective optical system 134, and unwanted light is removed through a slit 135 provided on the imaging plane.
  • after that, the light is converted into parallel light by the collimating optical system 136 and enters the spectroscope 137.
  • the center of the aperture of the slit 135 is positioned on the optical axis of the objective optical system 134 .
  • the reflected light is split by a spectroscope 137 and then imaged on an imaging device 139 by an imaging optical system 138 .
  • One of the lights separated by the beam splitter 132 is imaged as a line segment 142 on the reflecting mirror 141 by the cylindrical optical system 140 .
  • Light reflected from line segment 142 is converted into parallel light through cylindrical optical system 140 , objective optical system 134 , slit 135 , and collimator optical system 136 .
  • after the parallel light is split by the spectroscope 137, an image is formed on the imaging device 139 by the imaging optical system 138. The reflected light and the reference light are combined on the light receiving surface of the imaging element 139, and the generated interference fringes are converted into electrical signals by the imaging element.
  • the cylindrical optical system 140 and the reflecting mirror 141 are arranged so that the optical axis of the cylindrical optical system 140, folded by the reflecting surface of the beam splitter 132, is aligned with the optical axis of the objective optical system 134, and so that the line segment 142 and the opening of the slit 135 are optically conjugate.
  • the spectroscope 137, the image sensor 139, and the imaging optical system 138 are arranged so that the direction and range of the light split by the spectroscope 137 match the direction and range of the vertical arrangement of the pixels of the image sensor 139.
  • the subject image formed on the aperture of the slit 135 by the objective optical system 134 is formed on the horizontal pixel array of the imaging device 139 . Then, the interference fringe signals generated by the horizontal array pixels are detected from the vertical pixel array.
  • Interference fringe signals are sequentially read out from the imaging device 139 and stored in a memory (not shown). Then, the scanning mechanism 143 scans the object 133 or the optical axis 144 in the vertical direction 130 in FIG. 15 to acquire the interference fringe signal two-dimensionally and store it in the memory.
  • the interference fringe signal is read out from the memory, the light receiving direction is resolved by Fourier transform processing, and a three-dimensional resolution is obtained. Then, the depth of field is expanded and the disturbance of the light wavefront is corrected by two-dimensional filtering.
  • FIGS. 16(a), (b), and (c) show the optical paths 147-1, 147-m, and 147-n of the reflected light from a reflection point 146 located away from the focus position 145 of the objective optical system 134 (FIG. 15), detected while the optical axis 144 is scanned in the vertical direction of the paper surface.
  • even when the reflection point 146 is distant from the focus position 145, the reflected light 147-1 to 147-n can be detected.
  • the reflection point 146 can be resolved by Fourier transform processing and two-dimensional filter processing in the light receiving direction. Even when the reflection point 146 is behind the focus position 145, it can be similarly resolved.
  • when the reflection point 146 deviates from the focus position 145 of the objective optical system 134 (FIG. 15), part of the light beam is vignetted by the slit 149 and the sensitivity would appear to be lowered. However, since the amplitudes of the reflected lights 147-1 to 147-n are added, the same sensitivity as when the reflection point 146 is at the focus position 145 is obtained.
  • An intravascular OCT (optical coherence tomography) apparatus percutaneously inserts a guide wire into a coronary artery of the heart from a blood vessel such as the root of the leg, arm, or wrist under X-ray fluoroscopy.
  • an OCT catheter of 1 mmφ is inserted along the guide wire and rotated to detect a tomographic image of a coronary artery (2 to 4 mmφ, length 15 cm).
  • this procedure is used in PCI (percutaneous coronary intervention).
  • the challenge with intravascular OCT devices is in qualitative diagnosis, which identifies substances that cause stenosis in blood vessels, such as plaque (mass of fat and cholesterol), thrombus, and calcification, and assesses their risk grades.
  • qualitative diagnosis is made from the morphological information (shape, texture, brightness density) of the tomogram, but a high level of experience is required. Treatment methods vary depending on the material causing the stenosis and its grade. In particular, qualitative diagnosis is important because oily plaque, when detached, clogs small blood vessels and causes angina pectoris and myocardial infarction.
  • plaques, thrombi, and calcifications can be distinguished by color using visible light images from fiber angioscopes.
  • plaque is yellowish
  • a thrombus is reddish, the tone being determined by the mixture with fibrin
  • calcified tissue and normal mucosa are both whitish, but the color tone, including transparency, differs slightly.
  • an intravascular OCT apparatus can simultaneously perform qualitative diagnosis by analyzing the spectrum of the vessel wall in addition to morphological diagnosis using tomographic images. And it is desirable that these diagnoses can be made over the 15 cm length of the coronary artery.
  • FIG. 17 is a diagram showing the focusing range required for diagnosing coronary arteries. Another problem with the intravascular OCT apparatus is that, as shown in FIG. 17, the focal range 150 required for diagnosing a coronary artery with a maximum diameter of 4 mm is as wide as 1 mm to 4 mm, so a high horizontal resolution cannot be set. If the horizontal resolution is low, horizontal reflections are superimposed, resulting in poor depth resolution.
  • the OCT catheter is rotated at high speed and pulled back to detect an image of the 15 cm coronary artery wall. Since the pullback is performed while flushing an optically transparent contrast agent under X-ray fluoroscopy, even if the OCT catheter is rotated at high speed during the contrast agent flushing time limit of 2 to 3 seconds (the recommended time for biological safety), the image of the blood vessel wall can only be obtained with a resolution on the order of millimeters. In addition, as described above, in the near-infrared band, the spectrum indicating the characteristics of the causative substance is not as clear as in the visible light band.
  • by applying the present embodiment to an intravascular OCT apparatus, the above problems can be solved: a high-resolution tomographic image with a deep depth of field and an image of the blood vessel wall can be detected, and spectrum analysis can improve the accuracy of qualitative diagnosis.
  • An application example will be described below.
  • FIG. 18 is a diagram showing a configuration in which this embodiment is applied to an intravascular OCT apparatus.
  • the imaging catheter 151 of FIG. 18 is rotated and pulled back within a sheath inserted from the aorta of the lower extremity into the coronary artery via a guidewire.
  • the guidewire and sheath, which are existing therapeutic equipment, are not shown.
  • the connector 152 has a role of fixing (chucking) the imaging catheter 151 to the rotor section 153 in addition to attaching and detaching the imaging catheter 151 .
  • the connector 152 is fixed so that it rotates the imaging catheter 151 together with the rotor section 153, and so that the one-dimensional fiber array 154 incorporated in the imaging catheter 151 and the pixel array of the line sensor 155 correspond one-to-one via the telecentric optical system 168.
  • the imaging catheter 151 and the rotor section 153 incorporate mechanisms described below.
  • a frequency-swept light source (not shown) installed in the device main body emits light whose frequency is swept from visible to near-infrared. The emitted light passes through an optical rotary joint 156 and is guided to a fiber coupler 158 by a fiber 157, where it is separated into illumination light and reference light.
  • the illumination light is guided by a fiber 159, converted into parallel light by a collimator optical system 160, passed through a cylindrical optical system 161, reflected by a beam splitter 162, and focused on the end 163 of the fiber array 154, which consists of about 100 one-dimensionally arranged fibers.
  • the NA (numerical aperture) of the cylindrical optical system 161 is set to match the NA of the fiber array 154 .
  • the fibers of the fiber array 154 may be arranged in a one-dimensional staggered pattern to increase their number to 200.
  • the illumination light guided by the fiber array 154 is emitted from the end 164 of the fiber array 154 and illuminates the inside of the blood vessel via the objective optical system 165 and the mirror 166 .
  • the objective optical system 165 is an image-side telecentric system, and its focal point is set at the center of the range indicated by 150 in FIG. Reflected light from the inside of the blood vessel, the blood vessel wall, and the inner layer of the blood vessel wall is imaged on the end 164 of the fiber array 154 by the objective optical system 165 and guided to the rotor section 153 .
  • the reflected light emitted from the end 163 of the fiber array 154 is combined with the reference light by the beam splitter 162 to generate interference fringes.
  • the length of fiber 167 that guides the reference light from fiber coupler 158 corresponds to the round trip length of fiber array 154 .
  • the interference fringes are imaged on the line sensor 155 by the telecentric optical system 168.
  • the telecentric optical system 168 magnifies the image of the end 163 of the fiber array 154, and is a double-telecentric optical system so that the NA of the fiber array 154 and the directivity of the one-dimensional light receiving element 155 correspond one-to-one.
  • the interference fringe signal received by each element of the one-dimensional light receiving element 155 is sampled. Since the number of pixels of the one-dimensional light receiving element 155 is as small as 100, high-speed driving can be sufficiently achieved.
  • the sensitivity of the one-dimensional light receiving element 155 may seem to have no margin at first glance, but since the Fourier transform adds the wavelength components of the interference fringes with matched phases, the amplitude (SN ratio) after Fourier transform processing is improved relative to the SN ratio of a single spectral bandwidth, so no problem arises.
  • the imaging catheter 151 is rotated and pulled back integrally with the rotor section 153 by the drive system 169 that performs rotation and pullback, and the interference fringe signal is sequentially detected over 15 cm of the coronary artery.
  • the interference fringe signal is sent to the main body of the apparatus via the rotary transformer 170 and stored in a memory (not shown) in the main body of the apparatus.
  • the optical rotary joint 156 may be multi-channeled, optically modulated including other control signals, and interfaced with the apparatus main body.
  • a slip ring is used for the power supply.
  • the interference fringe signal is read out from the memory of the main body of the device, divided into interference fringe signals in the visible light band and the near-infrared band, and Fourier transform is performed. Then, using the complex signal obtained by the Fourier transform, the extension of the depth of field and the correction of the disturbance of the optical wavefront described with reference to FIGS. 14A and 14B are performed by two-dimensional filtering.
  • the angle of view of the objective optical system 165 is set so that the imaging range 171 (corresponding to the width 181 of the image in FIG. 19) closest to the imaging catheter 151 in FIG. 18 is 1.5 mm. Then, while the imaging catheter 151 and the rotor section 153 are rotated at a speed of 75 rotations/second, a 15 cm coronary artery is pulled back for 2 seconds to obtain a three-dimensional image.
  • FIG. 19 shows an image of a blood vessel wall that has been cut open by pulling back.
  • 150 images of the blood vessel wall with a width of 1 mm excluding the overlapping portion 182 are detected over the blood vessel length of 15 cm.
  • the width of overlapping portion 182 varies with the distance to the vessel wall.
  • the position and magnification of the pixels of the overlapping portion 182 are corrected by CG technology, and the amplitude intensity of the image is smoothed in the pullback direction and added, so that the images can be stitched together.
  • the resolution in the light receiving direction obtained by Fourier transform processing is higher than the horizontal resolution determined by the arrangement interval of the fibers. Therefore, if the angle of the mirror 166 (FIG. 18) is adjusted so that the vascular wall is obliquely illuminated and imaged, the resolution of the image of the vascular wall can be increased.
  • the fiber array 154 uses broadband optical fibers that guide both the visible light band and the near-infrared band. Alternatively, fibers for the two bands may be arranged in parallel rows, with two systems of processing circuits prepared. Separate frequency-swept light sources may also be prepared for the visible light band and for the near-infrared band.
  • an ultrasonic transducer may be provided at the tip of the imaging catheter 151, combining a mechanism for detecting a tomographic image with ultrasonic waves with the above-described blood vessel wall image detection and spectrum analysis mechanisms.
  • ultrasonic tomography has lower resolution than near-infrared imaging, but its tomographic detection depth is greater. The two modalities also have their own strengths in morphological diagnosis.
  • FIG. 20 is a diagram for explaining an example in which the present embodiment is applied to X-ray imaging and γ-ray imaging.
  • X-rays emitted from the X-ray source 191 in FIG. 20 have a frequency sweep and coherence that match the resolution to be detected.
  • the amplitude of the X-ray is amplitude-modulated with a frequency sweep corresponding to the resolution to be detected.
  • X-rays emitted from an X-ray source 191 pass through a beam splitter 192 for X-rays and irradiate an object 193 .
  • the shape of the beam splitter 192 is an elliptical sphere, one of the focal points is located at the exit of the X-ray source 191 and the other is located on the reflecting surface of the reflector 194 .
  • the X-rays emitted from the X-ray source 191 are partly reflected by the X-ray beam splitter 192, further reflected by the reflector 194, and irradiated onto the two-dimensional light receiving element 195 as reference X-rays.
  • the beam splitter 192 for X-rays is an X-ray-dedicated mirror whose surface is polished by the EEM (Elastic Emission Machining) method and has a very high surface accuracy of ±1 to 2 nm. In recent years, such X-ray-dedicated mirrors have become commercially available. By adjusting the installation angle of the X-ray mirror, the reflectance and transmittance are adjusted so that it serves as the beam splitter 192.
  • the X-rays reflected (backscattered) from the subject are combined with the reference X-rays reflected from the reflector 194 by the beam splitter 192 to generate interference fringes.
  • the interference fringes are converted into electric signals by a two-dimensional light receiving element 195 such as a CMOS or CCD imaging element or a flat panel detector (FPD).
  • in X-ray CT, the time to acquire 3D data has been shortened to about 1 second.
  • however, the resolution remains on the order of millimeters, because it is based only on absorption information without using phase information.
  • with this embodiment, the time to acquire three-dimensional data can be reduced to several milliseconds, the same as a shutter operation, and resolution on the order of microns can be obtained from the intervals of the pixel array of the two-dimensional light receiving element 195.
  • the angle of view, magnification, and resolution can be freely set according to the purpose.
  • the scale of the apparatus can also be simplified compared to CT.
  • the frequency sweep required to obtain micron-order resolution needs only a very narrow fractional bandwidth, so it can be set to avoid wavelengths that cause nonlinear scattering such as X-ray fluorescence.
  • since the aperture of the imaging element 195 can be made small, the number of three-dimensional filtering operations can be reduced. Announcements of X-ray sources capable of frequency sweeping have also become common in recent years.
  • if the array interval of the imaging elements is set to the production limit of 1 μm, three-dimensional resolution on the order of several μm becomes possible, and the imaging magnification and the corresponding resolution can be set freely. Accordingly, transmission images, cross-sectional images, and three-dimensionally constructed images can be displayed.
  • if the frequency sweep is matched to a spectral absorption band where the characteristics of a substance appear, there is a possibility of identifying the substance by analyzing the spectrum.
  • FIG. 22 shows an imaging device 1001 according to the first embodiment; the imaging device 1001 is capable of three-dimensional spatial resolution and spectral analysis for each pixel.
  • FIG. 23 stereoscopically shows the imaging device 1001 of FIG. 22 to facilitate understanding of the imaging device 1001 .
  • the light emitted from the point light source 1039 passes through the cylindrical optical system 1017 and the first slit 1015 and is introduced into the first beam splitter 1011 by the first collimating optical system 1013 and the mirror 1031 .
  • the illumination light separated by the first beam splitter 1011 is input to the first collimating optical system 1007 via the second beam splitter 1008, which is a dividing section, and then passes through the second slit 1006 and the objective optical system 1005 to illuminate the subject 1003.
  • the illumination light separated by the second beam splitter 1008 in the middle of the optical path is applied to the reflector 1010 via the second collimating optical system 1009 to generate reference light.
  • the first and second slits 1015 and 1006 have linear (substantially rectangular) openings (such as 1006a) in the direction (X direction) perpendicular to the plane of FIG. 22. Illumination light passing through the opening 1006a illuminates the subject 1003 linearly.
  • the position of the second slit 1006 on the optical path is at the imaging position of the objective optical system 1005, and the position of the second slit 1006, the position of the reflector 1010, and the position of the light receiving surface of the two-dimensional light receiving sensor 1019 are conjugate.
  • the illumination light is emitted through the second slit 1006. Therefore, depending on the application of the imaging device 1001, the objective optical system 1005, or the components on the right side (upstream side of the optical path) of the second slit 1006, can be exchanged.
  • reflected light from the subject 1003 passes through the objective optical system 1005, the second slit 1006, and the first collimating optical system 1007, and is made to interfere at the second beam splitter 1008 with the reference light, which is the light reflected from the reflector 1010.
  • the extent of the linear illumination in the direction perpendicular to the paper in FIG. 22 corresponds to the length of the opening 1006a of the second slit 1006 in the X direction.
  • the optical path length from the second beam splitter 1008 to the reflector 1010 corresponds to the optical path length from the second beam splitter 1008 to the second slit 1006 .
  • the illumination optical system, which performs the illumination indicated by the dashed line II in FIG. 22, and the interference optical system that generates the interference may be installed as separate units.
  • if the components of the interference optical system are placed on the subject 1003 side (downstream side of the optical path) of the second slit 1006, a known hyperspectral camera can be used for the components on the right side (upstream side of the optical path) including the second slit 1006.
  • the reflected light (interference light) that interferes with the reference light is input to the spectroscope 1014.
  • the spectroscope 1014 in FIG. 22 uses a transmission type diffraction grating which is advantageous for miniaturization, but a reflection type spectroscope may be used.
  • the reflected light split into wavelength components by the spectroscope 1014 is imaged on the two-dimensional light receiving sensor 1019 by the imaging optical system 1016 .
  • as the two-dimensional light receiving sensor 1019, a known CCD image sensor or a global-shutter CMOS image sensor can be used.
  • when the generated interference fringes are Fourier transformed, the time difference corresponding to the difference in optical distance is converted into a difference in frequency components, which makes it possible to detect depth. Further, each spectrally separated wavelength is imaged by the imaging optical system 1016 onto an element row (see PX in FIG. 23) of the two-dimensional light receiving sensor 1019, and the wavelength band components of the received signal are obtained from that element row.
  • summing the wavelength band components of the light with matched phases yields a result similar to pulse compression in radar.
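  • the pulse-compression analogy can be checked numerically: summing N wavelength components with matched phases grows the signal amplitude by N, while uncorrelated noise grows only by sqrt(N), so the amplitude SN ratio improves by sqrt(N). A toy demonstration (all numbers assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024                                   # number of wavelength components
signal = np.exp(1j * np.zeros(n))          # phases already matched
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

coherent = np.abs(signal.sum())            # grows like n
incoherent = np.abs(noise.sum())           # grows like sqrt(n) on average
print(coherent / n, incoherent / np.sqrt(n))   # both near 1
```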
  • the light source 1039 is a broadband light source, in order to satisfy the bandwidth required for resolution and spectral analysis. If a swept broadband light source becomes available in the future, a one-dimensional light receiving sensor placed in the direction perpendicular to the paper surface (X direction) may be used instead of the spectroscope 1014 and the two-dimensional (area) light receiving sensor 1019.
  • such a swept light source requires high linearity and frequency stability over a broadband wavelength sweep, and a function to perform the Fourier transform while reading out is required.
  • the scanning mechanism 1004 scans in the vertical direction (S direction) in FIG. 22, so that the entire subject 1003 can be detected.
  • the process of generating an RGB image is explained below.
  • the output of the two-dimensional light receiving sensor 1019 shown in FIG. 22 is input to the FFT 61 shown in FIG. 24, and a W (white) signal 81 is generated.
  • the W signal 81 is generated by performing the Fourier transform over a range that includes the near-infrared region 85 with good transparency shown in FIG. 25.
  • in parallel with the generation of the W signal 81, the FFT 62 first performs the Fourier transform of the R band shown in FIG. 25 to generate the R signal 82.
  • the R signal 82 undergoes pixel interpolation by the interpolation memory unit 63 shown in FIG. 24, and is synchronized with the W signal (luminance signal) 81 on the time axis (pixel position).
  • the B-band Fourier transform shown in FIG. 25 is performed by the FFT 62 shown in FIG. 24 to generate the B signal 83 .
  • the pixel position of the B signal 83 is similarly interpolated by the interpolation memory unit 64 shown in FIG. 24.
  • alternatively, the output of the two-dimensional light receiving sensor 1019 can be multiplied by coefficients corresponding to the XYZ color matching functions and Fourier transformed to obtain an XYZ signal.
  • the resolution of the R signal 82 and the B signal 83 is about 1/3 that of the W signal 81. This poses no problem because the resolution of the human eye for R and B is also about 1/3.
  • the R signal 82 and the B signal 83 can be generated by performing the Fourier transform over the ranges corresponding to the divided R and B bands. This signal generation is based on the principle of superposition in linear systems: a broadband light source can be considered a linear sum of multiple light sources with divided wavelength bands, including R, G, B, and infrared, and every processing step is linear. Therefore, by extracting the time-series signals corresponding to the R and B bands from the output of the two-dimensional light receiving sensor 1019 and performing the Fourier transform, the same image is obtained as if the signal generation processing had been performed using individual R and B light sources.
  • Multispectral analysis performed using the multispectral data obtained by the imaging device 1001 will be described below.
  • the imaging device 1001 can be used as a known hyperspectral camera by sliding the reflector 1010 in the direction of arrow H so that the absorber 1012 prevents generation of the reference light.
  • the imaging device 1001 acquires as much multispectral data necessary for specifying the target substance as possible.
  • the name and composition of the target substance, together with the information necessary for preprocessing (normalization processing to reduce cluster dispersion) — this information includes variations in the brightness of the illumination light, variations in the wavelength band of the illumination light, and image cut-out information, and is called a tag — are added to the acquired multispectral data by the data format creation unit 70 and stored as RAW data in an off-line computer.
  • for supervised learning by the AI, the feature axes are narrowed down from the multispectral waveforms by multivariate analysis such as principal component analysis and the FS (Foley-Sammon) transformation.
  • multispectral data corresponding to each substance can also be directly learned (supervised) by the AI to identify multiple substances. Identification by AI is possible with a nonlinear partition Z, as shown in FIG.
  • since the above identification method identifies two clusters at a time, it is necessary to switch the characteristic axes each time when identifying multiple substances. Even if the switching is performed multiple times in a tree-like combination, the substances can be identified at high speed because the circuit scale is greatly reduced.
  • the imaging apparatus 1001 of the first embodiment is suitable for detecting stationary subjects and subjects with little movement.
  • if the number of spectra is limited to about 256 and an image sensor with 2 million pixels operating at 10,000 frames per second is used, detection at a frame rate of 60 frames per second is possible.
  • a specific application is a three-dimensional measuring device for computer graphics (CG). It is possible to display an image observed from a free viewpoint and direction, a transmission image, and a cross-sectional image (tomographic image) using CG from captured image data.
  • the spectral information enables accurate color reproduction and display matched to the lighting color.
  • since the imaging apparatus 1001 of the present embodiment is capable of component analysis for each pixel, it can be applied to a microscope apparatus with high resolution in the Z-axis direction, a surface inspection apparatus capable of surface shape measurement and colorimetry, and a fundus camera capable of tomographic detection and composition analysis.
  • Example 2: an imaging device capable of imaging the surface of an object and performing spectrum analysis in one shot (FIG. 27; for the processing based on the imaging principles, see Example 1).
  • an imaging apparatus 1101 capable of imaging an object 1003 and analyzing its spectrum by one-shot imaging suitable for dynamic measurement will be shown.
  • in place of the objective optical system 1005 of Example 1 (see FIG. 22), it comprises a special cylindrical optical system 1025.
  • resolution by optical interference resolution processing in the horizontal direction (direction of arrow Z) of the paper surface of FIG. 27 is used.
  • three-dimensional shape measurement is not possible, but imaging of a moving subject 1003 and spectrum analysis are possible in one shot.
  • the curvature and aperture of the second cylindrical optical system 1025 in the direction perpendicular to the plane of the paper (X direction) are set to change gradually along the longitudinal direction of the condensed light beam 1022, so that the focal line of the second cylindrical optical system 1025 is gradually elongated while a constant degree of light condensation is maintained.
  • the second cylindrical optical system 1025 is arranged so that condensed rays of illumination light that have passed through each point in the X direction of the aperture 1026a of the second slit 1026 are irradiated (projected) in parallel onto the surface of the subject 1003.
  • the imaging device of FIG. 27 is equipped with optical elements in which the projection magnification in the direction (X direction) perpendicular to the plane of FIG. 27, as well as the diffusion and intensity distribution (lens power) of the second cylindrical optical system 1025 in the vertical direction (arrow Y direction), are appropriately set according to the position along the longitudinal direction of the condensed light beam 1022. Therefore, it is desirable to include a free-form-surface imaging element (having a shape asymmetrical with respect to the optical axis OA) as a component of the second cylindrical optical system 1025.
  • the longitudinal resolution and sensitivity of the condensed light beam 1022 will be explained.
  • a condensed ray 1022 of illumination light that has passed through the point where the aperture center C of the second slit 1026 and the optical axis OA intersect is obliquely projected onto the surface of the subject 1003 through the second cylindrical optical system 1025, and its reflection is made to interfere with the reference light, which is the light reflected from the reflector 1010.
  • OCI processing is based on OCT (Optical Coherence Tomography) processing.
  • OCI (optical interference resolution) processing uses Michelson interferometry and Fourier transform processing to obtain a one-dimensional image by obliquely illuminating the object with a micron-order short light pulse and receiving the light pulses successively reflected from the object surface.
  • the configuration located on the left side (downstream side of the optical path) from the second slit 1026 has the same configuration (in the vertical direction (Y direction in FIG. 27)) as a pinhole camera, so it seems to have low sensitivity.
  • however, the phases of the wavelength components of the light are matched and added, so, similarly to pulse compression in radar, the intensity of the signal after the Fourier transform is multiplied by the number of pixels in the longitudinal direction of the condensed ray 1022.
  • the SN ratio improves by the square root of the number of pixels, so there is no concern about sensitivity.
  • the reflected light is shifted in phase in proportion to the difference in round-trip distance from the second slit 1026 to each reflection point on the condensed light beam 1022 (that is, the farther apart the reflection points on the condensed light beam 1022 are), and the reflections are received superimposed as one signal.
  • when the superimposed reflected light is made to interfere with the reference light from the reflector 1010, interference fringes are generated at frequencies proportional to the phase shifts, resulting in a superimposed fringe signal.
  • the frequency of the interference fringes is higher at the reflection point where the optical path length from the second slit 1026 to the reflection position of the condensed beam 1022 is longer.
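  • this proportionality is easy to simulate: for a linear sweep, a reflector at path difference ΔL beats at f = (dν/dt)·ΔL/c, so a farther reflection point produces a higher fringe frequency. The sweep rate and distances below are assumptions:

```python
import numpy as np

c = 3.0e8                       # speed of light [m/s]
sweep_rate = 1.0e18             # dnu/dt of the sweep [Hz/s] (assumed)
t = np.linspace(0.0, 1e-5, 4096)

def fringe(delta_l):
    """Fringe for a single reflector at path difference delta_l [m];
    its beat frequency is sweep_rate * delta_l / c."""
    return np.cos(2.0 * np.pi * sweep_rate * delta_l / c * t)

near = fringe(0.5e-3)           # reflection point close to the slit
far = fringe(2.0e-3)            # farther point -> 4x the fringe frequency
```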
  • Resolution in the direction (arrow X) perpendicular to the plane of FIG. 27 is achieved by imaging using the cylindrical optical system 1025.
  • the configuration on the right side of the second slit 1026 (on the upstream side of the optical path) is the same as the configuration of Example 1 shown in FIG. 22 and performs the same processing.
  • the imaging of the surface of the object 1003 and the spectrum analysis can be performed in one shot, so it is suitable for detecting a moving object.
  • an image obtained by this imaging method, which obliquely illuminates the subject 1003 and detects the reflection points 6 where the condensed light 1022 on the subject 1003 intersects the wavefront 5 of the illumination light, is the same as an image observed from the direction 7 aligned with the tangent line of the wavefront 5.
  • if the subject 1003 is translucent, as shown in the lower-left enlarged view 7A of FIG. 27, the same image as observed in transmission is obtained.
  • in this case, the illumination light should include the highly transmissive near-infrared region (0.68 μm to 1.5 μm).
  • T in FIG. 7A schematically indicates a tissue such as a blood vessel inside the subject 1003 .
  • Specific applications of the second embodiment include handy inspection devices such as surface inspection devices, colorimeters, microscopes, and intraoperative microscope devices.
  • Example 3: an imaging apparatus capable of large-screen imaging and spectrum analysis at high speed.
  • the imaging device 1201 of the third embodiment replaces the second and first slits 1026 and 1027 of the imaging device 1101 of the second embodiment with a plurality of known pinholes, and replaces the two-dimensional light receiving sensor 1053 with a known line sensor.
  • the element rows of the line sensor are arranged in the vertical direction (see arrow Y direction in FIG. 27).
  • the imaging device 1201 can perform detection while moving the subject 1003 in the direction perpendicular to the plane of the figure.
  • FIG. 29 shows an embodiment of an inspection apparatus capable of inspecting a large screen at high speed by adopting a configuration in which the imaging apparatuses 1201 described above are combined in multiple stages.
  • the images of the overlapping portion 1032 acquired by the adjacent imaging devices 1201 are divided into blocks, the correlation between the blocks is calculated, and the three-dimensional shift amount is detected; based on the shift amount, pixels are moved and interpolated and the images are stitched together, so that a large screen can be detected (a sketch of the block-correlation step follows).
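  • a minimal sketch of the block-correlation step mentioned above: the shift between overlapping blocks is estimated by FFT-based cross-correlation; the pixel interpolation and pasting are omitted, and the array sizes are assumptions.

```python
import numpy as np

def block_shift(block_a, block_b):
    """Estimate the integer (dy, dx) shift of block_a relative to
    block_b by FFT-based cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(block_a) * np.conj(np.fft.fft2(block_b)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # wrap shifts beyond half the block size into negative offsets
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, block_a.shape))

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))
shifted = np.roll(img, (3, -2), axis=(0, 1))
print(block_shift(shifted, img))   # -> (3, -2)
```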
  • frequency component analysis can be performed for each pixel.
  • the imaging devices 1 to n are driven separately as odd-numbered and even-numbered imaging devices to prevent mutual interference of the illumination light in the overlapping portion 1032, and an image of one line (in the direction of arrow W in FIG. 29) is detected from the two imagings.
  • specific applications include an inspection device that inspects large surfaces such as sheets and iron plates at high speed, and an inspection device that collectively analyzes, by spectrum analysis, a large number of pits to be inspected, as in blood analysis and genetic testing.
  • the present invention is not limited to this embodiment.
  • the present invention is suitable for a three-dimensional imaging apparatus capable of simultaneously achieving three-dimensional resolution and spectral image detection of a subject with a simple structure.
  • This application claims the benefit of Japanese Patent Application No. 2021-70725 filed on April 19, 2021 and Japanese Patent Application No. 2021-211632 filed on December 24, 2021, the contents of which are incorporated herein by reference in their entirety.

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Instruments For Measurement Of Length By Optical Means (AREA)

Abstract

[Problem] To provide a three-dimensional image pickup device whereby detection of a spectral image and resolution in three dimensions of a subject can be realized simultaneously by a simple structure. [Solution] This three-dimensional image pickup device comprises: a light source that sweeps the frequency of light or the amplitude-modulated frequency of light to supply illumination light for illuminating a subject; an optical interferometer that multiplexes reference light and reflected light from the subject to generate an interference fringe; a two-dimensional detection mechanism that detects the interference fringe as an electrical signal in a two-dimensional position through use of any of a two-dimensional array of light-receiving elements, a combination of a one-dimensional array of light-receiving elements and one-dimensional scanning, and a combination of a simple light-receiving element and one-dimensional scanning; and an optical path difference calculation means that calculates, for each pixel in three dimensions, the optical path difference between the reflected light and the reference light in a two-dimensional detection position of the two-dimensional detection mechanism, the subject being resolved in three dimensions by processing using the interference fringe and optical path difference information.

Description

3D imaging device
The present invention relates to a three-dimensional imaging device. In particular, it relates to a three-dimensional imaging device that detects the amplitude and phase of reflected light by optical interferometry, performs three-dimensional resolution by electrical processing using the detection results, and, for each three-dimensional pixel, performs focusing, recovers resolution degraded by disturbance of the light wavefront, and performs spectral analysis.
There is a strong need, in both industrial and medical applications, to resolve a subject in three dimensions and, from that information, to detect images of the subject's surface and interior (cross-sectional images, tomographic images, transmission images) or to measure the subject's three-dimensional shape. Therefore, as will be described later, various imaging and measurement techniques exist.
Furthermore, in recent years there is a growing need, in both industrial and medical applications, to analyze the composition of each three-dimensional pixel without contact by analyzing the reflection spectrum, simultaneously with the three-dimensional imaging described above (see, for example, Patent Document 1).
(Three-dimensional shape detection technology)
As techniques for measuring three-dimensional shape without contact, the focus movement method, the confocal movement method, the optical interference method, and the fringe projection method are known.
(Spectral image detection technology)
As a spectral image detection technique, a hyperspectral camera using the line spectral method is known.
JP-A-2006-153654
JP-A-2011-110290
In this way, various imaging and measurement technologies also exist for detecting spectral images in order to analyze the composition of a subject by analyzing its reflection spectrum.
However, when these three-dimensional shape detection technologies and spectral image detection technologies are combined in an attempt to construct an imaging device that performs three-dimensional resolution and spectral image detection simultaneously, every combination suffers from the following problems, making realization difficult:
- combining them is impossible in principle;
- the hardware scale becomes large and the structure becomes complicated;
- the processing time becomes enormous;
- the detection accuracy is greatly reduced.
For this reason, no imaging device currently exists that simultaneously achieves high-accuracy three-dimensional resolution and detection of spectral images with high spectral accuracy.
The present invention has been made in view of such circumstances. That is, an object of the present invention is to provide a three-dimensional imaging device that can simultaneously achieve three-dimensional resolution and spectral image detection of a subject with a simple structure.
In order to solve the above problems and achieve the object, a first aspect of the three-dimensional imaging device of the present invention comprises: a light source that sweeps the frequency of light, or the frequency of amplitude modulation of light, to supply illumination light for illuminating a subject; an optical interferometer that combines the reflected light from the subject with reference light to generate interference fringes; a two-dimensional detection mechanism that detects the interference fringes as interference fringe signals at two-dimensional detection positions, by means of two-dimensionally arranged light receiving elements, by one-dimensional scanning of one-dimensionally arranged light receiving elements, or by two-dimensional scanning of a single light receiving element; optical path difference calculation means that calculates, for each two-dimensional detection position and for all reflection points to be resolved, the difference between the optical path length of the reflected light reflected by the three-dimensionally distributed reflection points of the subject, from the light source to the two-dimensional detection position of the two-dimensional detection mechanism, and the optical path length of the reference light emitted by the light source, from the light source to the two-dimensional detection position of the two-dimensional detection mechanism; a detection unit that resolves the light receiving direction for each two-dimensional detection position by detecting the frequency of the interference fringe signal, thereby acquiring a three-dimensional data string; and a two-dimensional filter processing unit that resolves the plane intersecting the light receiving direction by performing imaging through electrical processing, using the three-dimensional data string acquired by the detection unit and the optical path difference information calculated by the optical path difference calculation means; whereby the three-dimensionally distributed reflection points of the subject are resolved in three dimensions.
According to a second preferred aspect of the present embodiment, in the three-dimensional imaging device according to the first aspect, the detection unit detects the three-dimensional data string by Fourier transforming the interference fringe signal, the three-dimensional data string being a complex signal of amplitude and phase, and
the two-dimensional filter processing unit selects, from the three-dimensional data string, the data string corresponding to the imaging aperture; extracts from the selected data string, using the optical path difference information, the data matching the optical path length from the two-dimensional detection position to the reflection point; and multiplies it by filter coefficients calculated from the optical path difference and adds the results, thereby performing the imaging and resolving the reflection point. Likewise, by convolving the filter coefficients over all of the reflection points to be resolved, the plane intersecting the light receiving direction is resolved.
According to a third preferred aspect of the present embodiment, the three-dimensional imaging device according to the second aspect comprises:
a storage unit that stores the three-dimensional data string;
an address generation unit that uses the optical path difference to generate addresses for reading out, from the storage unit, the data matching the optical path length from the detection position of the two-dimensional detection mechanism to the reflection point to be resolved; and
a filter coefficient generation unit that reads out the data using the addresses and generates the filter coefficients that perform data interpolation in the light receiving direction, matching of the initial phase, and weighting of the imaging aperture,
wherein the two-dimensional filter processing unit convolves the filter coefficients with the data of the complex signal.
According to a fourth preferred aspect of the present embodiment, the three-dimensional imaging device according to the third aspect comprises correction means that divides the imaging aperture over which the two-dimensional filter processing is performed into a plurality of blocks; resolves, for each block, the reflection points in the neighborhood of and centered on the reflection point to be resolved, by the same processing as the two-dimensional filter processing; detects disturbance of the optical wavefront from cross-correlation calculations on the complex-signal data of the neighboring reflection points obtained in each block; and corrects the disturbance of the optical wavefront by reflecting it in the generation of the addresses by the address generation unit.
According to a fifth preferred aspect of the present embodiment, the three-dimensional imaging device according to any one of the first to fourth aspects comprises correction means that detects distortion and fluctuation of the frequency sweep of the light source and corrects, with a phase-matched filter, the dispersion of the frequency components of the interference fringes caused by the distortion.
According to a sixth preferred aspect of the present embodiment, the three-dimensional imaging device according to any one of the first to fifth aspects comprises identification means that calculates spectral components, in descending order of Fisher ratio, from the reflection spectra of subjects whose clusters are known, and uses the spectral components to identify a subject from the reflection spectrum of a subject whose cluster is unknown.
According to a seventh preferred aspect of the present embodiment, in the three-dimensional imaging device according to the sixth aspect, the identification means uses AI that executes deep learning.
According to an eighth preferred aspect of the present embodiment, the three-dimensional imaging device according to any one of the first to seventh aspects comprises, instead of the light source, a low-coherence light source and a spectroscope, and performs three-dimensional resolution with the detection unit and the two-dimensional filter processing unit.
According to a ninth preferred aspect of the present embodiment, the three-dimensional imaging device according to any one of the first to eighth aspects comprises a data format creation unit that adds, to the interference fringe signals detected by the two-dimensional detection mechanism, the information necessary for three-dimensional resolution and spectrum analysis, and a storage unit that stores, as RAW data, the interference fringe signals to which the information necessary for the three-dimensional resolution and the spectrum analysis has been added.
According to a tenth preferred aspect of the present embodiment, in the three-dimensional imaging device according to the ninth aspect, the added information includes the degree of coherence of the light source, the band characteristics (including distortion) and directivity of the frequency sweep, the coordinates of the detection positions of the two-dimensional detection mechanism and the directivity of the light receiving elements, the three-dimensional coordinates of the emission positions of the illumination light and the reference light relative to the detection positions of the two-dimensional detection mechanism, and information on the subject.
To solve the above problems and achieve the object, an eleventh aspect of the three-dimensional imaging device of the present invention comprises:
a splitting unit that splits the light emitted from a light source to generate illumination light and reference light;
a combining unit that causes the reflected light from a subject to interfere with the reference light to generate interference light;
an imaging optical system that forms an image of the reflected light;
a slit placed at the imaging plane of the imaging optical system; and
a spectroscopic unit that disperses the interference light in a cross direction intersecting the longitudinal direction of the opening of the slit.
A twelfth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the eleventh aspect, comprising a scanning mechanism for moving at least one of the subject and the imaging device so as to scan the imaging range in the cross direction.
A thirteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the eleventh aspect, wherein the imaging optical system comprises a cylindrical optical system element whose focal position is arranged obliquely with respect to the optical axis.
A fourteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to any one of the eleventh to thirteenth aspects, comprising a light source that generates broadband light or broadband wavelength-swept light.
A fifteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to any one of the eleventh to fourteenth aspects, comprising a signal processing unit that extracts a predetermined wavelength band component from the interference light, performs a Fourier transform, and generates an image in the predetermined wavelength band component.
A sixteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the fifteenth aspect, wherein the signal processing unit extracts wavelength band components corresponding to the three primary colors from the interference light, performs Fourier transforms to generate image signals of the three primary colors, and generates RGB image signals based on the image signals of the three primary colors.
A seventeenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to any one of the eleventh to sixteenth aspects, comprising identification means that calculates a plurality of spectral components, in descending order of Fisher ratio, from the reflection spectra of subjects whose clusters are known, and uses the spectral components to perform identification from the reflection spectrum of a subject whose cluster is unknown.
An eighteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the thirteenth aspect, comprising a plurality of the imaging devices each having a pinhole instead of the slit, wherein the images of the portions captured in overlap by adjacent imaging devices are divided into blocks, the amount of displacement is detected by correlating the blocks, and the images are stitched together.
A nineteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the eighteenth aspect, wherein the plurality of imaging devices are arranged in a line, and imaging devices adjacent to each other in the line are driven at different timings.
According to the present invention, it is possible to provide a three-dimensional imaging device that, with a simple structure, can simultaneously achieve three-dimensional resolution of a subject and detection of spectral images.
(Brief Description of the Drawings)
FIG. 1 is a configuration diagram showing the configuration of a three-dimensional imaging device according to an embodiment.
FIG. 2 is a diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device.
FIG. 3 is another diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device.
FIG. 4 is a diagram showing the configuration from the reflection point to the two-dimensional filter processing.
FIG. 5 is a diagram explaining the processing operation of the two-dimensional filter processing.
FIGS. 6(a)-(g) are diagrams explaining the array spacing and directivity of the light receiving elements.
FIG. 7 is a diagram explaining a configuration for recovering, by the two-dimensional filter processing, resolution degraded by disturbance of the optical wavefront.
FIG. 8 is a diagram showing a configuration for generating an RGB image from RAW interference fringe data.
FIG. 9 is a diagram showing the wavelength bands of various spectral images generated by Fourier transform.
FIG. 10 is a diagram explaining nonlinear separation, by AI, of substances to be identified.
FIG. 11 is a diagram explaining the case where the linearity of the frequency sweep of the laser light source is distorted.
FIG. 12 is a diagram explaining a configuration for detecting distortion in the linearity of the frequency sweep.
FIG. 13 is a diagram showing the configuration of an application example of the present embodiment.
FIG. 14(a) shows the imaging light flux when the imaging position of a reflection point is in front of the image sensor, and FIG. 14(b) shows the imaging light flux when the imaging position of the reflection point is behind the image sensor.
FIG. 15 is a diagram showing a configuration using a low-coherence light source in the present embodiment.
FIGS. 16(a), (b), and (c) are diagrams each showing the optical path when the reflected light from a reflection point is detected while scanning in the vertical direction of the page.
FIG. 17 is a diagram showing the focusing range required when diagnosing a coronary artery.
FIG. 18 is a diagram showing a configuration in which the present embodiment is applied to an intravascular OCT device.
FIG. 19 is a diagram explaining a method of detecting images of a coronary artery using the intravascular OCT device.
FIG. 20 is a diagram explaining a configuration in which the present embodiment is applied to X-ray imaging or γ-ray imaging.
FIG. 21(a) is another diagram showing the configuration from the reflection point to the two-dimensional filter processing, and FIG. 21(b) is a diagram showing three-dimensional resolution processing in which the plane perpendicular to the optical axis is resolved by an imaging lens and the light receiving direction is resolved by Fourier transform processing.
FIG. 22 is a configuration diagram showing the configuration of an imaging device according to Example 1 of the present invention.
FIG. 23 is an explanatory diagram for explaining imaging processing by the imaging device of FIG. 22.
FIG. 24 is a block diagram showing detection of RGB and spectral images.
FIG. 25 is a diagram explaining the range of the Fourier transform.
FIG. 26 is a diagram explaining the FS (Foley-Sammon) transform.
FIG. 27 is a configuration diagram showing the configuration of an imaging device according to Example 2 of the present invention.
FIG. 28 is a diagram for explaining an observation image.
FIG. 29 is a configuration diagram showing the configuration of an imaging device according to Example 3 of the present invention.
Prior to the description of the examples, the operation and effects of an embodiment according to an aspect of the present invention will be described. In explaining the effects of the present embodiment concretely, specific examples will be shown. However, as with the examples described later, these illustrated aspects are only some of the aspects encompassed by the present invention, and many variations of them exist. The present invention is therefore not limited to the illustrated aspects.
(Embodiment)
A three-dimensional imaging device according to an embodiment of the present invention detects the amplitude and phase of reflected light by optical interferometry and performs three-dimensional resolution through electrical processing that uses them. The three-dimensional imaging device then performs, for each three-dimensional pixel, focusing, recovery of resolution degraded by disturbance of the optical wavefront, and spectrum analysis.
The three-dimensional imaging device two-dimensionally detects the interference fringes of the reflected light produced by an optical interferometer. Next, Fourier transform processing, described later, resolves the light receiving direction at each two-dimensionally detected position. Then, using the amplitude and phase of the reflected light obtained by the Fourier transform processing (hereinafter, the complex signal) and the optical path length to the reflection point, two-dimensional filter processing, described later, resolves the plane intersecting the light receiving direction. These two processes resolve the subject in three dimensions.
The two-dimensional filter processing described above performs per-pixel focusing (dynamic focusing) and recovers the resolution degraded by the disturbance of the optical wavefront described later. In addition, using the frequency sweep of the illumination light employed in the light-receiving-direction resolution processing, the spectrum of the reflected light is analyzed and the composition of the subject is identified for each pixel.
Furthermore, the present embodiment is not limited to the visible light band. It can also be applied to wavelength bands of electromagnetic waves for which no imaging optical system exists, or for which an imaging optical system is expensive, such as infrared light, terahertz waves, millimeter waves, X-rays, and γ-rays.
The basic configuration of the three-dimensional imaging device will be described with reference to FIG. 1.
FIG. 1 is a configuration diagram showing the configuration of the three-dimensional imaging device according to the embodiment.
The light source 1 emits light whose frequency is swept within the imaging time. The swept light, which is the illumination light emitted from the light source 1, is split by the beam splitter 2 of the optical interferometer 13. One beam of swept light, reflected at the splitting surface, illuminates the subject 3. The other beam, transmitted through the splitting surface, is reflected by the mirror 4. The reference light reflected by the mirror 4 is combined by the beam splitter 2 with the reflected light 7 from the subject 3, producing interference fringes.
The resulting interference fringes are received by a two-dimensional array of light receiving elements 8 (hereinafter, the "image sensor"). The method of detecting the interference fringe signals with the image sensor 8 is described later.
The interference fringe signals received by the image sensor 8 are stored in the memory 5 as RAW data. The interference fringe signals required for resolution are then read out from the memory 5, and the light receiving direction is resolved by the Fourier transform processing (detection unit) 11. In the present embodiment and in the other embodiments and examples described later, an example of a detection unit is described that analyzes the data in the light receiving direction at each two-dimensional detection position by Fourier transform. The analysis method is not limited to the Fourier transform, however; various time-frequency analysis methods, such as the short-time Fourier transform and the wavelet transform, can be used.
Thereafter, the two-dimensional filter processing 12, described later, resolves the plane perpendicular to the optical axis 9, and the reflection point 6 is detected in three dimensions. The Fourier transform processing 11 and the two-dimensional filter processing 12 are described later.
When the reflected light from the reflection point 6 of the subject 3 is combined with the reference light by the beam splitter 2, interference fringes are produced, as described later, whose frequency is proportional to the optical path difference between the reflected light from the reflection point 6 and the reference light.
The above optical path difference is the difference between the optical path length of the reflected light, in which the illumination light emitted from the light source 1 passes through the beam splitter 2, is reflected at the reflection point 6, and is received by each light receiving element of the image sensor 8, and the optical path length of the reference light, which travels from the light source 1 through the beam splitter 2, via the reflecting mirror 4 and the beam splitter 2, to each light receiving element of the image sensor 8.
Consequently, interference fringes are produced whose frequency corresponds to the position of the reflection point 6 and the position of each light receiving element of the image sensor 8. Exploiting this, the reflection point 6 can be resolved in three dimensions by the Fourier transform processing 11 and the two-dimensional filter processing 12 described later.
Here, the light source 1 has the spatial coherence (point-source property) required for resolving the plane perpendicular to the optical axis 9, and its frequency sweep has the linearity, frequency band, and temporal coherence required for resolving the light receiving direction. As a light source satisfying these conditions, a frequency-swept laser light source using a deflection element, such as a MEMS (Micro-Electro-Mechanical System) or KTN (potassium tantalate niobate) device, together with a spectroscope can be used.
Alternatively, the light source 1 may be an incoherent light source (partially coherent light source) having the spatial coherence (point-source property) required for resolving the plane perpendicular to the optical axis 9, with frequency-swept modulation applied to the amplitude of the emitted light.
In terms of coherence length, the former light source is used to resolve a small subject at a relatively short distance in three dimensions with high resolution. The latter light source is used to resolve a large subject at a long distance in three dimensions.
FIG. 1 shows the image sensor 8 as the mechanism for two-dimensionally detecting the interference fringes. The mechanism is not limited to this, however; the interference fringes may also be detected two-dimensionally by a combination of a one-dimensional array of light receiving elements and one-dimensional scanning, or by a combination of a single light receiving element and two-dimensional scanning.
Also, as long as the coherence is not impaired and the optical path lengths of the reflected light and the reference light from the light source 1 to each light receiving element can be calculated, optical systems may be placed in the optical paths of the illumination light, the reflected light, and the reference light.
The optical interferometer 13 may be placed anywhere on the light receiving path, and separate beam splitters 2 may be provided, one for combining the reference wave and one for splitting off the illumination light.
The optical interferometer 13 of FIG. 1 shows a basic configuration for explaining the principle. The configuration is not limited to this; there are various interferometer types, and the type can be selected according to the application. For example, to reduce the physical dimensions, an interferometer of the Mirau type may be used.
Also, if the wavelength band permits, an optical circulator using a Faraday rotator may be used to increase the light utilization efficiency.
In any of these configurations, however, the intermediate optical systems and the beam splitter 2 and mirror 4 constituting the optical interferometer 13 must have shapes such that the coherence of the illumination light, the reflected light, and the reference light is not impaired and the optical path lengths of the reflected light and the reference light from the light source 1 to each light receiving element of the image sensor 8 can be calculated.
For example, the mirror 4 has a surface accuracy sufficiently small, at 1/16 of the wavelength or less, and, in addition to a point reflector or a flat plate, a mirror having a focal point, such as a concave, convex, or ellipsoidal surface, whose optical path length is easy to calculate, is used.
As long as the coherence is not impaired and the optical path lengths of the reflected light and the reference light from the light source 1 to each light receiving element 8 can be calculated, three-dimensional resolution by the Fourier transform processing 11 and the two-dimensional filter processing 12, described later, can be performed in any of the above configurations.
(Description of Fourier Transform Processing)
The Fourier transform processing 11 of FIG. 1, which resolves the light receiving direction, is described below.
When two light waves of different frequency and phase are combined, interference fringes arise whose frequency and phase are the differences between them. This is called optical heterodyne detection.
Optical heterodyne detection can convert an optical carrier of very high frequency into an interference fringe carrier of low frequency. The interference fringes, which retain the amplitude and phase information of the light, can then be converted into electrical signals by a light receiving element. Optical heterodyne detection can also be applied to the detection of the amplitude and phase of amplitude-modulated incoherent light.
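As a purely illustrative aid (not part of the specification), the following minimal Python sketch mimics heterodyne detection at frequencies scaled far below optical bands. Squaring the sum of two carriers, as a square-law (intensity) detector does, and inspecting only the low-frequency part of the spectrum leaves the beat at the difference frequency; all values are hypothetical.

```python
import numpy as np

# Two carriers 5 kHz apart; the square-law detector produces a 5 kHz beat.
fs = 2e6                          # sample rate [Hz]
t = np.arange(0.0, 0.01, 1 / fs)  # 10 ms record -> 100 Hz FFT bin spacing
f1, f2 = 200e3, 205e3             # carrier frequencies [Hz]
detected = (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)) ** 2
spectrum = np.abs(np.fft.rfft(detected))
low_bins = 100                    # keep only components below 10 kHz (the LPF)
beat_bin = np.argmax(spectrum[1:low_bins]) + 1
print(f"beat frequency ~ {beat_bin * 100} Hz")  # -> 5000 Hz
```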
FIG. 2 is a diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device.
The Fourier transform processing that resolves the light receiving direction is based on this principle of optical heterodyne detection. As shown in FIG. 2, a slight time difference (optical path difference) 14 arises from the difference in optical path length between the frequency-swept reference light 18 and the reflected light 19. This produces a slight difference 15 between the frequencies and phases of the reference light 18 and the reflected light 19, generating interference fringes whose frequency and phase are those differences.
As can be seen from FIG. 2, if the linearity of the frequency sweep is high, the difference frequency and the difference phase remain constant over the sweep time. Interference fringes of constant frequency are therefore produced as the reflected light and the reference light are swept in frequency.
When the optical path difference 14 between the reference light 18 and the reflected light 19 increases, as indicated by the dotted line 16, the interference fringe frequency 15 also increases, as indicated by the dotted line 17.
FIG. 3 is another diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device.
As shown in FIG. 3, when the bandwidth 21 of the frequency sweep is widened, as indicated by the dotted line 22, the interference fringe frequency 23 increases, as indicated by the dotted line 24, even if the optical path difference 25 is the same.
When such an interference fringe signal is Fourier transformed, the frequency of the interference fringes is detected as a spectrum (complex signal) on the frequency axis. The position of the spectrum on the frequency axis is proportional to the optical path difference between the reflected light and the reference light from the light source (point source) 1 to the light receiving element 8 in FIG. 1. The distance from each light receiving element of the image sensor 8 to the reflection point 6 can thus be detected.
The spectral resolution (the width of a single spectrum) is determined by the waveform obtained by Fourier transforming the envelope of the frequency sweep. Widening the frequency sweep bandwidth 23 of the interference fringes in FIG. 3, as indicated by the dotted line 24, increases the number of spectra after the Fourier transform for a given optical path difference, so the resolution in the light receiving direction can be increased.
The processing described above can also be applied without change when the amplitude modulation of incoherent light is frequency-swept.
The reference light Es and the reflected light Er can be expressed by the following equations (1) and (2), respectively.
 Es = As × cos{2π[f0 + (Δf/2T)t]t + θ0}   (1)
 Er = Ar × cos{2π[f0 + (Δf/2T)(t − td)](t − td) + θ0}   (2)
Here,
 Δf is the bandwidth of the frequency sweep,
 T is the sweep time,
 f0 is the frequency at the start of the sweep,
 θ0 is the initial phase at the start of the sweep,
 t is time,
 td is the time difference (optical path difference) between the reference light and the reflected light,
 As is the amplitude of the reference light, and
 Ar is the amplitude of the reflected light.
When the reference light and the reflected light are combined and received by a light receiving element, the terms containing the high optical frequencies become a DC component because of the frequency response (LPF: low-pass filter) of the light receiving element, and only the low-frequency interference fringe term remains. From the product-to-sum formula for trigonometric functions, the following equation (3) is obtained.
 LPF[(Es + Er)^2] = A × cos{2π[f0 + (Δf/2T)t]t − 2π[f0 + (Δf/2T)(t − td)](t − td)} + K   (3)
Here, A and K are constants determined by the amplitude As of the reference light and the amplitude Ar of the reflected light.
Removing the constants A and K from equation (3) and rearranging gives the following equation (4).
 cos{2π[2(Δf/2T)td]t + 2π[(Δf/2T)td^2 + f0 × td]}   (4)
From the first term of equation (4), 2(Δf/2T)td is the frequency of the interference fringe signal, and it can be seen that the fringe frequency changes linearly as the time difference (optical path difference) td changes.
From the second term of equation (4), 2π[(Δf/2T)td^2 + f0 × td] is the initial phase of the interference fringe signal, and it can be seen that the initial phase changes parabolically with respect to td.
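As an illustrative numerical check of equation (4) (not part of the specification; all parameter values are hypothetical), the following sketch synthesizes the fringe signal for two time differences td and confirms with an FFT that the fringe frequency 2(Δf/2T)td scales linearly with td.

```python
import numpy as np

T = 1e-3           # sweep time [s]
delta_f = 50e12    # sweep bandwidth [Hz]
N = 4096           # samples per sweep (FFT bin spacing = 1/T = 1 kHz)
t = np.linspace(0.0, T, N, endpoint=False)

def fringe(td):
    freq = 2 * (delta_f / (2 * T)) * td                # first term of equation (4)
    phase0 = 2 * np.pi * (delta_f / (2 * T)) * td**2   # parabolic initial phase
    # (the constant f0*td phase term is omitted: it does not move the FFT peak)
    return np.cos(2 * np.pi * freq * t + phase0)

for td in (1e-13, 2e-13):     # time differences (30 um and 60 um of path in air)
    spectrum = np.abs(np.fft.rfft(fringe(td)))
    f_peak = (np.argmax(spectrum[1:]) + 1) / T
    print(f"td = {td:.0e} s -> fringe peak ~ {f_peak / 1e3:.0f} kHz")
# Doubling td doubles the fringe frequency: 5 kHz -> 10 kHz.
```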
When the envelope of the frequency sweep is a rectangular wave, the spectral resolution is the full width at half maximum of the sinc function ((sin x)/x) obtained by Fourier transforming the rectangular wave, namely 1/T. The time difference td corresponding to the spectral resolution then follows from 2(Δf/2T)td = 1/T as td = 1/Δf. Hence the resolution ρ in the light receiving direction, taking the speed of light in the propagation medium as C0 and accounting for the round trip of the reflection, is ρ = C0/(2Δf).
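A hedged numerical example of the relation ρ = C0/(2Δf), with illustrative values only:

```python
# rho = C0 / (2 * delta_f); bandwidths below are hypothetical.
C0 = 3.0e8                       # speed of light in the propagation medium [m/s]
for delta_f in (10e12, 50e12):   # sweep bandwidths [Hz]
    rho = C0 / (2 * delta_f)
    print(f"delta_f = {delta_f / 1e12:.0f} THz -> rho = {rho * 1e6:.1f} um")
# 10 THz -> 15.0 um, 50 THz -> 3.0 um: a wider sweep gives finer depth resolution.
```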
The principle described above applies without change when an incoherent light source is used and its amplitude modulation is frequency-swept.
If the envelope is a rectangular wave, the side lobes of the sinc function, the Fourier transform of the rectangular wave, appear. Making the envelope Gaussian (a Gaussian function) suppresses the side lobes but slightly degrades the resolution, which is compensated by increasing the sweep bandwidth accordingly.
The three-dimensional point spread function (three-dimensional PSF) obtained when the interference fringe signal of the reflection point 6 is detected by the single light receiving element m shown in FIG. 1 and Fourier transformed has, as shown by the curve 10 in FIG. 1, the form of the point spread function in the light receiving direction 7 spread spherically over the directivity range of the light receiving element m.
In the description of the present application, the expression "resolving the light receiving direction" means detecting the complex signal of this three-dimensional point spread function as a data string along the light receiving direction 7.
If the interference fringe signal obtained at each light receiving element, or the complex signal obtained by Fourier transforming and resolving it, is archived as RAW data together with the degree of coherence of the light source, the band characteristics and directivity of the frequency sweep, the directivity, number, and array spacing of the light receiving elements, the three-dimensional coordinates of the emission positions of the illumination light and the reference light relative to the light receiving surface of the image sensor, and information on the subject, various kinds of processing using the phase information can be performed later.
It is technically difficult to detect the minute time it takes reflected light to travel from a reflection point to a light receiving element. However, as described above, by detecting the interference fringes with the reference light and performing a Fourier transform, a slight time difference (optical path difference) can be converted into a difference in fringe frequency and detected.
Since interference fringes produced by the optical interferometry are superimposed on the reflected light, one for each reflection point of the subject, performing a Fourier transform resolves the light receiving direction. The resolution of the light receiving direction by Fourier transform processing rests on the same principle as the resolution of radar pulse compression. That is, the frequency components of the reflected light are added with their phases matched (passed through a phase-matched filter); it is as if light pulses on the micron order were transmitted and received like radar to resolve the light receiving direction.
(Description of Detection of the Interference Fringe Signal)
Next, the method of detecting the interference fringe signal with the image sensor 8 of FIG. 1 is described.
As described above in the "Description of Fourier Transform Processing" section, Fourier transforming the interference fringe signal yields the point spread function in the light receiving direction, and its full width at half maximum is the resolution in the light receiving direction. In accordance with the sampling theorem, the sampling interval in the light receiving direction is set smaller than the resolution. The number of pixels in the light receiving direction is therefore the resolution range in the light receiving direction divided by the sampling interval.
By repeating the image capture of the image sensor 8, during the frequency sweep of the illumination light, the same number of times as the number of pixels in the light receiving direction, the interference fringe signal can be detected while satisfying the sampling theorem, from the Fourier transform pair relationship. To shorten the detection time, the frequency sweep time is shortened and an image sensor 8 with a correspondingly high frame rate is used. An image sensor basically capable of global shutter operation is used.
If the number of pixels in the light receiving direction is 1000 and the interference fringe signal is sampled with a common 60 frames/s image sensor, the time required to detect the interference fringe signal is 1000 ÷ 60 = 16.7 seconds. In this case, the sweep time of the light source 1 is set to 16.7 seconds. Because of the long detection time, this is applied to three-dimensional resolution and shape measurement of stationary objects.
With a commercially available high-speed image sensor (2000 × 1000 pixels, 1000 frames/s), the detection time is 1 second.
With the fastest existing image sensor (2000 × 1000 pixels, 20000 frames/s), the imaging time is 50 ms, which broadens the applications to moving subjects. The imaging time can be shortened further by staggering the timing of multiple image sensors with a multi-sensor prism.
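The timing relations cited above reduce to a single division; the following sketch (frame rates taken from the text, the rest illustrative) reproduces the quoted detection times.

```python
# Sweep (detection) time equals depth pixel count divided by the frame rate.
depth_pixels = 1000               # pixels along the light receiving direction
for fps in (60, 1000, 20000):     # frame rates of the sensors cited above
    print(f"{fps:>6} frames/s -> detection time {depth_pixels / fps:.3f} s")
# ->  60 frames/s: 16.667 s, 1000 frames/s: 1.000 s, 20000 frames/s: 0.050 s
```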
It might seem that shortening the imaging time of the image sensor would sacrifice sensitivity. However, performing the Fourier transform processing 11 of FIG. 1 improves the sensitivity by a factor of the number of pixels in the light receiving direction; in other words, the SN ratio of a single spectrum improves by the square root of the number of pixels.
In addition, performing the two-dimensional filter processing 12 of FIG. 1 improves the sensitivity by a factor of the number of light receiving elements of the virtual lens 35 of FIG. 4. As a result, the sensitivity becomes comparable to the shutter operation of an imaging device using an optical system, and no problem arises.
To perform a shutter operation (1 ms or less), an image sensor is required that can receive the interference fringes at each light receiving element, digitize them in parallel, and store them in memory in parallel. Considering the number of wires required for parallel processing and the power consumption due to their stray capacitance, the parallel processing circuits are not feasible unless built into the image sensor.
With the remarkable progress in semiconductor multilayer stacking technology, this is highly feasible. For example, there is a published development report of a 2-megapixel, 2.5-million-frames/s back-illuminated image sensor with a recording transfer memory section for each pixel. If memory is similarly configured using multilayer stacking technology, a shutter operation of 0.4 ms becomes possible.
(Description of the Two-Dimensional Filter Processing)
Next, the two-dimensional filter processing 12 of FIG. 1, which uses the complex signal of the reflected light obtained by the Fourier transform processing 11 (FIG. 1) to resolve the plane perpendicular to the optical axis, is described.
FIG. 4 is a diagram showing the configuration from the reflection point to the two-dimensional filter processing.
As shown in FIG. 4, the interference fringes produced by combining with the reference light in the combining unit 32 are received by the light receiving elements 33-1 to 33-n of the image sensor. The detected interference fringe signals are stored in the memory 5 (FIG. 1). Thereafter, the interference fringe signals corresponding to the aperture of the virtual lens 35 are read out from the memory and subjected to the Fourier transform processing 34 (11 in FIG. 1). This resolves the light receiving direction of each of the light receiving elements 33-1 to 33-n, yielding the three-dimensional data strings 36-1 to 36-n of complex signals in the light receiving direction.
Then, as if the light were focused by a virtual lens (collimating lens) 35 with its focal point at the reflection point 31, the two-dimensional filter processing 37 extracts, from the three-dimensional data strings 36-1 to 36-n of complex signals in the light receiving direction, the complex signals of the pixels matching the optical path lengths from the reflection point 31 to the light receiving elements 33-1 to 33-n. When these are phase-aligned to the complex signal at the center position of the imaging aperture and added, the reflection point 31 can be resolved. Performing this processing for all reflection points (pixels) in the object space resolves the subject in three dimensions.
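In signal-processing terms, the per-voxel operation just described is a delay-and-sum refocusing. The following Python sketch is a minimal illustration under assumed names and array shapes, not the specification's implementation: it samples each element's depth-resolved data string at the index matching the computed optical path difference, interpolates linearly, phase-aligns, and sums over the virtual-lens aperture.

```python
import numpy as np

def resolve_voxel(data, path_diff, dz, init_phase):
    """Hedged sketch of resolving one reflection point (names are assumptions).

    data       : (n_elem, n_depth) complex array - the depth-resolved data
                 strings 36-1..36-n, one row per light receiving element
    path_diff  : (n_elem,) optical path difference [m] between the reflected
                 light via this voxel and the reference light, per element
    dz         : sampling interval [m] along the light receiving direction
    init_phase : (n_elem,) initial fringe phase 2*pi*((df/2T)*td**2 + f0*td)
                 used to align every element to the aperture centre
    """
    idx = path_diff / dz                     # fractional depth address
    i0 = np.floor(idx).astype(int)
    frac = idx - i0
    rows = np.arange(data.shape[0])
    # two-tap linear interpolation between neighbouring depth samples
    sample = data[rows, i0] * (1.0 - frac) + data[rows, i0 + 1] * frac
    # phase matching, then coherent summation over the virtual-lens aperture
    return np.sum(sample * np.exp(-1j * init_phase))
```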
The configuration of the processing of FIG. 4 is shown in FIG. 21(a).
FIG. 21(a) is another diagram showing the configuration from the reflection point to the two-dimensional filter processing.
[Equation (5): image not reproduced in this text]
The reflected light P1 to Pn from one reflection point can be expressed by the above equation (5).
Here,
R is the reference light,
Lp is the coefficient of the low-pass filter formed by the light receiving element, and
F (calligraphic typeface) is the Fourier transform in the light receiving direction.
[Equation (6): image not reproduced in this text]
Rearranging equation (5) according to the distributive law gives the above equation (6).
As shown in FIG. 21(b), equation (6) represents three-dimensional resolution processing in which the plane perpendicular to the optical axis is resolved by an imaging lens and the light receiving direction is resolved by Fourier transform processing.
As stated in the "Description of Fourier Transform Processing" section above, when the optical path difference td changes, the frequency of the interference fringes changes linearly.
The initial phase of the interference fringe signal is given by
 2π[(Δf/2T)td^2 + f0 × td]
and changes parabolically with respect to td.
This initial-phase matching is performed when the complex signals of the pixels matching the optical path lengths from the reflection point 31 to the light receiving elements 33-1 to 33-n are extracted and added. The initial-phase matching is performed together with the data interpolation processing by the low-pass filters 42-1 to 42-n of FIG. 5, described later. In the filter coefficient generation unit 50 of FIG. 5, described later, the coefficients for data interpolation are multiplied by the complex-signal coefficients for phase matching, producing the coefficients 47-1 to 47-n of the low-pass filters 42-1 to 42-n.
(Description of the Processing Operation of the Two-Dimensional Filter Processing 37)
FIG. 5 is a diagram explaining the processing operation of the two-dimensional filter processing 37.
The data strings 36-1 to 36-n of the complex signals in the light receiving direction in FIG. 4 are stored in the line memories 41-1 to 41-n of FIG. 5. From the line memories 41-1 to 41-n, using the addresses 44-1 to 44-n, the complex signals 48-1 to 48-n stored at the addresses matching the optical path lengths from the reflection point 31 of FIG. 4 to the light receiving elements 33-1 to 33-n are read out from the data strings corresponding to the imaging aperture.
Then, to suppress the influence of the quantization error of the optical path length, the low-pass filters 42-1 to 42-n perform data interpolation of the complex signals 48-1 to 48-n in the light receiving direction, together with the phase matching described above, after which the adder 49 performs the summation.
The accuracy of the data interpolation is desirably 1/16 or less of the resolution in the light receiving direction. Spline interpolation is desirable, but linear interpolation using neighboring data is also sufficient. The data in the neighborhood of the complex signal matching the optical path length from the reflection point 31 to each of the light receiving elements 33-1 to 33-n are read out from the line memories 41-1 to 41-n and input to the low-pass filters 42-1 to 42-n for data interpolation.
The coefficients 47-1 to 47-n of the filters for data interpolation and phase matching are generated by the filter coefficient generation unit 50 according to the addresses 44-1 to 44-p. To suppress side lobes, corrective weighting coefficients may be applied before the summation; this multiplication is performed in the filter coefficient generation unit 50 by multiplying the weighting coefficients into the filter coefficients 47-1 to 47-n of the low-pass filters 42-1 to 42-n.
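The following sketch illustrates, under the assumption of two-tap linear interpolation, how such a combined coefficient could be formed; the function name and arguments are hypothetical, not the specification's.

```python
import numpy as np

# Hedged sketch of the coefficient generation in block 50 of Fig. 5:
# interpolation, initial-phase matching, and aperture (side-lobe) weighting
# are folded into one complex coefficient pair per light receiving element.
def lpf_coeffs(frac, init_phase, aperture_weight):
    """frac            : fractional part of the depth read address (0..1)
       init_phase      : fringe initial phase for this element [rad]
       aperture_weight : apodization value for this element (e.g. Gaussian)"""
    interp = np.array([1.0 - frac, frac])          # data-interpolation taps
    return interp * np.exp(-1j * init_phase) * aperture_weight
```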
The addresses 44-1 to 44-n for reading the complex signals 48-1 to 48-n with matching optical path lengths from the line memories 41-1 to 41-n, together with the lower-order addresses added for data interpolation, form the addresses 44-1 to 44-p, which are generated by the address generation unit 45.
These addresses 44-1 to 44-p are generated by calculation, by storing precalculated values in a lookup table, or by a combination of the two, balancing the calculation time against the memory size.
The optical path lengths of the reflected light and the reference light from the light source to each of the light receiving elements 33-1 to 33-n in FIG. 4 differ depending on the positions of the elements, the optical systems placed in the optical paths, and the shapes and positions of the reflecting mirrors. The optical path lengths of the reflected light and the reference light are therefore calculated accurately in the address generation unit 45 and reflected in the addresses 44-1 to 44-p.
As an example, the optical path lengths of the reflected light and the reference light are calculated for the configuration of FIG. 1.
Let the center of the light receiving surface of the image sensor 8 be the origin (0, 0, 0) of the three-dimensional coordinates, with the X axis perpendicular to the page, the Y axis vertical, and the Z axis along the optical axis 9.
The optical path length of the reflected light is obtained by folding the position of the light source 1 about the reflecting surface of the beam splitter 2 and adding, to the optical path length from the folded light source position (0, 0, s) on the optical axis 9 to the position (x, y, z) of the reflection point 6, the optical path length from the position (x, y, z) of the reflection point 6 to the position (dx, dy, 0) of each light receiving element of the image sensor 8.
The optical path length of the reflected light is expressed by the following equation (7).
 [x^2 + y^2 + (z − s)^2]^(1/2) + [(x − dx)^2 + (y − dy)^2 + z^2]^(1/2)   (7)
The optical path length of the reference light is obtained by folding the position of the light source 1 about the reflecting surface of the reflecting mirror 4, then folding it again about the reflecting surface of the beam splitter 2, and taking the optical path length from the resulting light source position (0, 0, r) on the optical axis 9 to the position (dx, dy, 0) of each light receiving element of the image sensor 8.
The optical path length of the reference light is expressed by the following equation (8).
 [dx^2 + dy^2 + r^2]^(1/2)   (8)
In the case of FIG. 1, the optical path lengths of the reflected light and the reference light can thus be calculated easily, so the optical path difference between them can be calculated. The value obtained by dividing this optical path difference by the sampling interval in the light receiving direction corresponds to the pixel address when the light receiving direction is resolved by the Fourier transform processing 11. The addresses 44-1 to 44-n can be generated in this way.
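As an illustrative sketch (names and arguments assumed), equations (7) and (8) can be turned directly into a fractional read address as just described:

```python
import numpy as np

# Hedged sketch of address generation (unit 45) from equations (7) and (8).
# Coordinates follow Fig. 1: sensor centre at the origin, Z along the optical
# axis; s and r are the folded source positions for illumination and reference.
def depth_address(x, y, z, dx, dy, s, r, dz):
    reflected = (np.sqrt(x**2 + y**2 + (z - s)**2)
                 + np.sqrt((x - dx)**2 + (y - dy)**2 + z**2))   # equation (7)
    reference = np.sqrt(dx**2 + dy**2 + r**2)                   # equation (8)
    # path difference divided by the depth sampling interval gives the
    # (fractional) pixel address into this element's resolved data string
    return (reflected - reference) / dz
```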
Further, when the two-dimensional filter processing is performed, the data interpolation of the low-pass filters 42-1 to 42-n can convert the pixels into three-dimensional pixels in a cubic, equally spaced array. The addresses 44-1 to 44-p are generated correspondingly.
Three-dimensional resolution could, in principle, be obtained by three-dimensionally Fourier transforming the reflected light from the object space. When the filter multiplied after the Fourier transform has fixed characteristics (a space-invariant filter), the Fourier transform can greatly reduce the total number of multiplications thanks to the butterfly operation.
In practice, however, the aperture size and the focal position must be changed for each pixel (the filter characteristics must change). Furthermore, in the correction of optical wavefront disturbance described later, the filter coefficients must be changed for each pixel in accordance with the disturbance. For this reason, the present embodiment combines the one-dimensional Fourier transform processing in the light receiving direction with the two-dimensional filter processing 12, which performs a superposition integral with filter coefficients optimized for each three-dimensional pixel.
For convenience in explaining the principle of the two-dimensional filter processing, the description has centered on a system configuration that processes the complex signals 36-1 to 36-n in the light receiving direction in parallel; in practice, serial and parallel processing are interwoven as appropriate in view of the circuit scale.
As described above, the two-dimensional filter processing 12 often switches its parameters adaptively. It is therefore desirable to combine software processing on a high-speed CPU with a GPU (Graphics Processing Unit), which excels at high-speed parallel processing and can be programmed in a general-purpose language, so as to balance circuit scale against processing time and to make changes to the system configuration and functional upgrades easier.
(Description of the array spacing and directivity of the light receiving elements required for two-dimensional filter processing)
Next, the array spacing and directivity of the light receiving elements required for the two-dimensional filter processing 12 of FIG. 1 will be described. The two-dimensional filter processing 12 is equivalent to a two-dimensional Fourier transform of the reflected light.
FIGS. 6(a)-(g) are diagrams explaining the array spacing and directivity of the light receiving elements. To simplify the explanation, FIGS. 6(a)-(g) show a Fourier transform pair in which the reflected light is received by a one-dimensional array of light receiving elements and Fourier transformed in the array direction. The y-axis indicates the position in the array direction, and the Y-axis indicates the position on the focal plane obtained by Fourier transforming the y-axis.
FIG. 6(a) shows the light receiving sensitivity distribution 51 when the reflected light from a reflection point on the optical axis is received through an aperture 52. The sensitivity distribution 51 is the product of the set aperture 52 and the directivity (sensitivity distribution on the focal plane) 53 of a single light receiving element. It is therefore meaningless to set the aperture 52 beyond the range of the directivity 53.
In other words, the maximum detectable resolution is determined by the directivity 53 of the light receiving element. The directivity 53 is formed by the aperture of the single light receiving element and its microlens. The waveform of the sensitivity distribution 51 shown in FIG. 6(a) corresponds to the case where the aperture 52 is smaller than the directivity 53. Also, since the directivity 53 of the single element always points in the optical axis direction, the waveform of the sensitivity distribution 51 changes as the detected reflection point moves away from the optical axis; FIG. 6(a) shows the case where the reflection point is on the optical axis.
FIG. 6(b) shows the light receiving element array with spacing P. FIG. 6(c) shows the sensitivity distribution on the light receiving surface of a single element.
Fourier transforming the function of FIG. 6(a) yields the function of FIG. 6(d); transforming the function of FIG. 6(b) yields FIG. 6(e); and transforming the function of FIG. 6(c) yields FIG. 6(f).
In accordance with the convolution theorem, the Fourier transform replaces multiplication with convolution (superposition integral) and convolution with multiplication.
FIG. 6(d) shows the point spread function 54 on the focal plane (whose full width at half maximum is the resolution), obtained by Fourier transforming the sensitivity distribution 51. FIG. 6(e) shows the diffraction poles caused by the element array; the pole spacing is 1/P. FIG. 6(f) shows the directivity (sensitivity distribution) 53 of a single element on the focal plane, formed by the microlens (that is, by the Fourier transform of the microlens). Strictly, the values on the Y-axis are scaled by the reciprocal of the focal length and a coefficient proportional to the center wavelength, but since this has no direct bearing on the present discussion, it is omitted from the figure.
Convolving the function of FIG. 6(d) with that of FIG. 6(e) and multiplying by the function of FIG. 6(f) gives the waveform shown in FIG. 6(g). From the Fourier transform pair relationship, the resolution 57 is proportional to 1/τ, the reciprocal of the aperture τ of the sensitivity distribution 51. As noted above, the resolution (numerical aperture) that can be synthesized is determined by the directivity 53 of the light receiving element, so the directivity of the microlens is set according to the target resolution.
As can be seen from the waveform of FIG. 6(g), multiplying by the directivity 53 of the single element removes the diffraction at the second main pole 55 and beyond (a cause of ghost images). The diffraction pole spacing 1/P must be set larger than the position 56 where the directivity 53 reaches null (0). In other words, the array spacing P of the light receiving elements must be set smaller than the resolution.
For example, to obtain a resolution of 1 μm with the two-dimensional filter processing 12 of FIG. 1, the element spacing must be 1 μm or less. Incidentally, the manufacturing limit of the pixel pitch of imaging elements is currently slightly below 1 μm, and the directivity of the microlens can also be controlled in the manufacturing process.
When a reflection point at the edge of the field of view is detected, the positions of the ±second main poles 55 come closest to the optical axis, whereas the directivity 53 of the element always points along the optical axis. The array spacing P must therefore be set somewhat smaller, or the field of view narrowed, so that the ±second main poles 55 do not fall within the directivity 53.
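As an illustrative check only (the Abbe-type resolution value λ/(2 NA) is an assumption brought in here, not taken from the disclosure), the pitch condition can be expressed numerically:

```python
def max_pitch(center_wavelength_um, numerical_aperture):
    # The text requires the array pitch P to be smaller than the
    # resolution; taking an Abbe-type resolution of lambda / (2 NA)
    # as a stand-in gives an upper bound on P.
    return center_wavelength_um / (2.0 * numerical_aperture)

# Example: a 0.9 um center wavelength with NA 0.45 bounds the pitch
# at 1.0 um, consistent with the 1 um figure quoted above.
print(max_pitch(0.9, 0.45))  # -> 1.0
```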
Here, the Fourier transform processing 11 (FIG. 1) is equivalent to quadrature detection of the interference fringe signal for each frequency component. The Fourier transform removes the carrier (carrier wave component) of the interference fringes and yields the complex signal of the point spread function in the light receiving direction.
Although the result is a two-channel complex signal, its frequency band is narrowed by the removal of the fringe carrier and becomes the bandwidth of the envelope of the point spread function. Consequently, the element spacing required for the two-dimensional filter processing 12 (FIG. 1), which operates after the light has been converted into a complex signal, may be half the resolution or less, that is, on the order of microns.
An imaging lens, by contrast, performs a Fourier transform with the optical frequency itself as the carrier, so its surface accuracy must be as high as 1/16 of the wavelength of light or better (on the order of several tens of nanometers).
That said, the imaging lens is an excellent two-dimensional Fourier transformer that forms an image instantaneously and requires no processing time, unlike the two-dimensional filter processing. However, changing the focal position, aperture, or magnification, or correcting optical wavefront disturbance, requires complicated optics and mechanisms, and switching them takes time.
The two-dimensional filter processing, on the other hand, can perform such switching electrically, optimize on a per-pixel basis, recover degraded resolution, and extend the depth of field while maintaining high resolution.
By combining an optical system with the two-dimensional filter processing and exploiting the strengths of each, the system can be optimized for the application and purpose. Application examples are described later.
If compositions of different refractive indices are mixed in the medium through which the reflected light passes, or if the intervening optical system has aberrations, the optical wavefront is disturbed and the resolution degrades. Next, a configuration and method for recovering, by two-dimensional filter processing, the resolution degraded by optical wavefront disturbance will be described.
(Description of recovering resolution by two-dimensional filter processing)
FIG. 7 is a diagram explaining a configuration for recovering, by two-dimensional filter processing, the resolution degraded by optical wavefront disturbance.
As shown in FIG. 7, the aperture is divided into a plurality of blocks 61-1 to 61-m to 61-n. The interference fringe signal corresponding to each block is read out from the memory 5 (FIG. 1) and subjected to the Fourier transforms 62-1 to 62-n.
After the complex signal in the light receiving direction is obtained, two-dimensional filter processing 63-1 to 63-n is performed for each block to detect the complex signals of several pixels before and after the pixel of the reflection point 66 along the direction of the principal ray 67 of each block, for example five pixels in total.
Next, cross-correlation processing 64-1 to 64-n is performed between the five-pixel complex signal detected in each block and the five-pixel complex signal of the central block 61-m to detect the optical path length deviation. The cross-correlation processing 64-1 to 64-n is performed by superposition-integrating the complex conjugate signal of the five pixels of the central block 61-m onto the five-pixel complex signals of the other blocks.
In doing so, the superposition integral is computed with data interpolation over the five pixels so that the detection accuracy of the peak value indicating the optical path length deviation is 1/16 of the resolution in the light receiving direction or better.
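A minimal sketch of this block-to-block correlation (assuming numpy/scipy; the interpolation factor and array names are illustrative) might look like:

```python
import numpy as np
from scipy.signal import resample

def path_length_shift(center_px, block_px, dz, up=32):
    """Estimate the optical path length deviation of one block relative
    to the central block from a few complex pixels along the light
    receiving direction.
    center_px, block_px: complex arrays (e.g. 5 samples each),
    dz: sampling interval in the light receiving direction,
    up: interpolation factor (>= 16 for the 1/16-resolution accuracy
        called for in the text)."""
    n = len(center_px) * up
    a = resample(center_px, n)   # band-limited data interpolation
    b = resample(block_px, n)
    # numpy's correlate conjugates its second argument, so this is the
    # superposition integral of the conjugated central-block signal
    # over the other block's signal.
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(np.abs(corr)) - (n - 1)
    return lag * dz / up         # shift in path-length units
```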
If the optical wavefront disturbance is large, this is handled by increasing the number of pixels used in the cross-correlation beyond five. If the spatial frequency of the disturbance is high, the number of blocks is increased so as to raise the sampling density. The number of blocks may also be doubled by applying Gaussian weighting to the outputs of the light receiving elements of each block and overlapping the apertures.
The optical path length deviation between the central block and each block, detected by the cross-correlation processing 64-1 to 64-n, represents the optical wavefront disturbance. The data interpolation unit 66 interpolates these per-block deviations so that they correspond to the light receiving elements 33-1 to 33-n of FIG. 4. The result is then sent to the optical-path-length-matching address generator 45 shown in FIG. 5 and reflected in (added to) the addresses 46. Two-dimensional filter processing corrected for the optical wavefront disturbance is thereby achieved.
When the numerical aperture is large, as in microscopic imaging, the correlation between the central block and the blocks at the edges of the aperture weakens. A way to avoid this loss of correlation is as follows. First, correlation processing is performed between the central (first) block and the adjacent second block, which is highly correlated with it; then between the second block and the third block; and so on, moving outward in sequence, so that the optical path length deviation is detected by accumulation.
Detection errors also accumulate, but if the resolution in the light receiving direction is on the order of microns and the SN ratio is 40 dB or more, interpolating the complex signal in the light receiving direction and performing the cross-correlation yields a deviation detection accuracy on the order of ten-odd nanometers, so the accumulated error can be neglected. It is desirable to select the correlation processing method in view of the numerical aperture and the SN ratio.
The optical wavefront disturbance caused by the aberrations of the intervening optical system is gradual, and its spatial frequency components are low. By contrast, when compositions of different refractive indices on the scale of the resolution are mixed in the transmission medium, the spatial frequency components of the wavefront disturbance become high.
When the spatial frequency components are high, the number of blocks must be increased in accordance with the sampling theorem. The number of blocks and the numerical aperture (NA) of each block are in a trade-off relationship, so increasing the number of blocks lowers the accuracy of the cross-correlation.
Therefore, the optical wavefront disturbance is grasped statistically for each type of object and, together with the SN ratio, used as a constraint. The combination of the number of blocks, the numerical aperture, and the pixel range of the cross-correlation is then solved in advance for each type of object as a combinatorial optimization problem, and the correction is performed after switching to the optimum balance for each object.
Alternatively, with the number of blocks, the numerical aperture, the cross-correlation range, and the SN ratio as variables, the optimum combination may be found for each object by annealing or iteration, using the extension of the OTF of the image after two-dimensional filter processing as the index, and the correction performed after switching to that optimum balance.
Or, the statistically grasped mixture of refractive indices in the transmission medium may be added to those variables as a parameter, and an AI trained to learn the optimum balance with the extension of the OTF after two-dimensional filter processing as teaching information. The optimum combination is then determined and the correction performed after switching to the optimum balance for each object.
Incidentally, the principle of correcting optical wavefront disturbance in this embodiment is basically the same as the adaptive optics used in astronomy and elsewhere. In adaptive optics, however, a guide star (point image) must be set in the object space in order to detect the wavefront disturbance: a laser beam is directed at the sodium atom layer at an altitude of 90 km, exciting the sodium to glow and thereby creating the guide star.
Similarly, a point image could be set on the object surface with infrared light or the like, but the method of this embodiment detects the wavefront disturbance by cross-correlation processing using the signal of the object itself, so there is no need to set a guide-star-like point image in the object space.
Moreover, because the processing is electrical, the number and size of the blocks 61-1 to 61-n in FIG. 7, which correspond to the wavefront sensor and wavefront controller of adaptive optics, can be set as appropriate for the application, and their balance can be optimized using techniques such as optimization problems.
A still more accurate method of correcting the optical wavefront disturbance is as follows. The two-dimensional filter processing of each block 61-1 to 61-m to 61-n detects three-dimensional complex signal data of 5 × 5 × 5 pixels centered on the detection point. Six-axis (x, y, z, xθ, yθ, zθ) cross-correlation processing using these three-dimensional complex signals is performed between blocks. From the result, the light receiving positions of the elements are corrected in addition to the optical wavefront disturbance, after which the two-dimensional filter processing is performed.
This makes it possible not only to correct the optical wavefront disturbance but also to perform the two-dimensional filter processing while correcting, with high accuracy, the blur caused by mechanical scanning. However, since this correction method involves an enormous number of operations, it is applied in uses where the processing time can be afforded.
Next, the configuration and processing for generating an RGB image from the RAW data of the interference fringes stored in the memory 5 of FIG. 1 will be described with reference to FIG. 8.
As shown in FIG. 8, the interference fringe signals 71 corresponding to the aperture are read out from the memory 5 of FIG. 1 sequentially or in parallel. The read signals are Fourier transformed by the FFT 72 over the visible light band 81a shown in FIG. 9, generating a W (white) complex signal in which the light receiving direction is resolved. FIG. 9 is a diagram showing the wavelength bands of the various spectral images generated by the Fourier transform.
When the object is a living body or the like and a transmission image including its inner layers is to be detected, the W band may be extended to include the near-infrared region 82 of high biological transparency shown in FIG. 9, and the Fourier transform performed over that range to generate W.
In parallel with the generation of the W complex signal, the FFT 73 performs the Fourier transform over the R band 83 shown in FIG. 9 to generate the R complex signal. Likewise, the FFT 74 shown in FIG. 8 performs the Fourier transform over the B band 84 shown in FIG. 9 to generate the B complex signal.
The W, R, and B complex signals are each then subjected to two-dimensional filter processing to achieve three-dimensional resolution of W, R, and B. When an optical system is also used in the optical path, chromatic aberration (optical path length deviation) can be corrected during the R and B two-dimensional filter processing. As described above, the conversion into cubic, equally spaced pixels may also be performed at this stage.
Each FFT and each two-dimensional filter processing shown in FIG. 8 performs the same function as described for the Fourier transform processing 11 and the two-dimensional filter 12 of FIG. 1.
Next, the matrix converter 75 of FIG. 8 performs a matrix conversion to generate three-dimensionally resolved RGB signals, and images suited to the purpose are displayed: surface images, cross-sectional images, transmission images, three-dimensional CG reconstructions, and so on.
Incidentally, the R signal 83 and the B signal 84 of FIG. 9 have narrower wavelength bandwidths than the W signal and different center wavelengths, so their resolution in the light receiving direction is about 1/3 that of the W signal. This poses no problem, however, because the resolution of the human eye for R and B is likewise about 1/3.
The reason why the complex signals of the R signal 83 and the B signal 84 can be generated by Fourier transforming the interference fringe signals of the R and B bands obtained by dividing the visible light band 81a is as follows. Broadband swept light can be regarded as the linear sum of a plurality of swept lights such as R, G, B, and infrared.
All of the processing steps, namely illumination, reflection, interference with the reference light, detection of the interference fringes, and the Fourier transform, are linear. By the superposition principle, therefore, extracting from the interference fringe signal the portion of the frequency sweep corresponding to the R or B band and Fourier transforming it gives the same result as performing the optical interference resolution processing with an R or B swept light source alone.
From this principle, when accurate color reproduction is required, XYZ complex signals can be obtained by multiplying the swept visible-band interference fringe signal by the XYZ color matching functions and Fourier transforming the result.
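The band splitting itself reduces to a per-sample weighting of the fringe signal before the transform. A minimal sketch follows (assuming numpy; the sample count and band masks are hypothetical placeholders, not values from the disclosure):

```python
import numpy as np

def band_complex_signal(fringe, weight):
    """fringe: real interference-fringe samples across the sweep for
    one receiving element; weight: per-sample window selecting a
    sub-band (1 inside the R or B band, 0 outside), or a sampled XYZ
    color matching function. By linearity, weighting then transforming
    equals imaging with a source limited to that band."""
    return np.fft.fft(fringe * weight)

n = 2048                                   # samples per sweep (illustrative)
fringe = np.random.randn(n)                # stand-in for measured data
r_mask = np.zeros(n); r_mask[:512] = 1.0   # hypothetical R-band window
b_mask = np.zeros(n); b_mask[-512:] = 1.0  # hypothetical B-band window
w_sig = band_complex_signal(fringe, np.ones(n))
r_sig = band_complex_signal(fringe, r_mask)
b_sig = band_complex_signal(fringe, b_mask)
```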
(Description of a configuration for obtaining the complex signal in the light receiving direction orthogonally projected onto a feature axis)
By the same principle, multiplying the interference fringe signal by the coefficients of an orthogonal transformation onto a feature axis and Fourier transforming the result yields the complex signal in the light receiving direction orthogonally projected onto that feature axis.
The multispectral analysis performed using the RAW data of the interference fringes stored in the memory 5 of FIG. 1 will now be described.
Taking the reflection spectrum in the visible band as an example, the spectral components change mainly through absorption at wavelengths that excite the outer-shell electrons of atoms; absorption at wavelengths that excite molecular vibrations, spins, and intermolecular vibrations; and diffraction scattering due to the arrangement of refractive indices.
The changes in spectral components caused by these effects are highly correlated with the composition and structure of a substance, but they occur in a multiplexed manner and, as with diffraction scattering, also vary to some extent with the angles of illumination and imaging. Substances with similar spectral components are therefore treated as clusters.
Good results are obtained by using statistical analysis, for example multivariate analysis or deep-learning AI, as the method for identifying such clusters. The procedure for discriminating two clusters by such methods is described below.
First, as many interference fringe signals as possible are acquired for the two substances to be discriminated, using the three-dimensional imaging device of this embodiment. Next, the data format creation unit 81 shown in FIG. 8 adds the additional information 70, and the data are sent to an external computer as RAW data and archived on a recording device.
The additional information 70 includes information for normalization processing that reduces the variance of the clusters to be discriminated and makes them easier to separate, addresses for cutting out the substances to be identified from the image, and the various information needed to generate the image.
The normalization information includes the luminance of the illumination light, the wavelength band characteristics of the illumination light, and so on. The cut-out addresses are designated with a mouse or the like by an expert able to identify the substance while observing the RGB image or the spectral analysis image described later; they may also be designated after the image has been generated on the external computer. The information needed to generate the image corresponds to the frequency sweep band, its linearity, the array spacing and directivity of the light receiving elements, and the like.
The interference fringe signals are read out from the recording device, and the computer normalizes the acquired data. Fourier transform processing and two-dimensional filter processing are then performed to generate a three-dimensional image.
Next, the image portion of the substance to be identified is cut out from the three-dimensional image according to the cut-out addresses. The complex signal in the light receiving direction of the cut-out pixels is dominated by the complex signal of the light reflected from the object surface; in particular, visible light suffers large propagation attenuation, so only the surface reflection remains. For a transparent object, the pixel data in the intended light receiving direction are cut out three-dimensionally.
An inverse Fourier transform of the complex signal in the light receiving direction of the cut-out pixels is performed. Taking the amplitude of the result yields the multispectral data of the cut-out portion.
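As a one-line sketch of this step (assuming numpy), the spectrum is recovered by the inverse transform of the cut-out depth signal:

```python
import numpy as np

def multispectrum(depth_complex):
    """depth_complex: complex signal of the cut-out pixels along the
    light receiving direction. The inverse Fourier transform returns
    to the sweep (optical frequency) domain; its magnitude is the
    multispectral data of the cut-out portion."""
    return np.abs(np.fft.ifft(depth_complex))
```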
Incidentally, when the object is a living body or the like, reflected light from within the body can be obtained by using the highly penetrating near-infrared band. However, the attenuation during propagation through the body differs greatly with wavelength, and the attenuation of the tissue along the propagation path is superimposed on it.
As a result, the variance of the clusters becomes too large for quantitative spectral analysis. Spectral analysis is therefore performed mainly on images of the object surface, except in the case of highly transparent objects.
Next, the computer performs an FS (Foley-Sammon) transform on the large volume of multispectral data of the two substances to be discriminated, in a multidimensional coordinate space whose orthogonal axes are the spectral components.
The feature axes with large Fisher ratios between the two clusters are then narrowed down, and the matrix conversion coefficients 77-1 to 77-n of FIG. 8, which projectively transform onto those feature axes, are obtained. The FS transform is an orthogonal transform that computes, in descending order, the feature axes along which the Fisher ratio of the two clusters is large. As with data compression, the cumulative contribution ratio and experience allow the selection to be narrowed to at most five or six feature axes.
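As a sketch of the first such axis (the full FS transform adds orthogonality constraints for the later axes; this fragment assumes numpy and is an illustration, not the patented procedure itself):

```python
import numpy as np

def first_fisher_axis(X1, X2):
    """First Foley-Sammon (Fisher) axis for two clusters of
    multispectral vectors (one row per sample). It maximizes the
    Fisher ratio; subsequent FS axes are found the same way subject
    to orthogonality with the axes already chosen (omitted here)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of the two cluster covariances.
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
    d = np.linalg.solve(Sw, m1 - m2)
    return d / np.linalg.norm(d)
```

Projecting the spectra onto the first five or six such axes would yield coefficients playing the role of the matrix conversion coefficients 77-1 to 77-n referred to above.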
Next, the large volume of data obtained by projecting the multispectral data onto the feature axes is used to train an AI, on the computer, having the same configuration as the AI 80 of FIG. 8, and the learned coefficients 76 (neuron coefficients) are obtained.
The number of AI input terminals equals the five or six feature axes narrowed down by the FS transform, so the scale of the AI, including the number of layers, becomes far smaller. The distinctive feature of discrimination by AI is that, as shown in FIG. 10, discrimination by a nonlinear partition Z becomes possible.
Then, as shown in FIG. 8, the matrix conversion coefficients 77-1 to 77-n, sent from the computer to the control unit 78 and stored there, are multiplied onto the interference fringe signal 71 by the multipliers 79-1 to 79-n, thereby projectively transforming the interference fringe signal 71 onto the feature axes EU1 to EU6.
FIG. 10 is a diagram explaining the nonlinear partitioning by the AI of the substance to be identified.
The interference fringe signals projectively transformed onto the feature axes EU1 to EU6 are subjected to Fourier transform processing and two-dimensional filter processing to generate spectral analysis images, i.e., the images of the feature axes EU1 to EU6.
The top spectral analysis images EU1, EU2, and EU3 may be assigned to YIQ (the component scheme used in NTSC internal processing) in descending order of visual sensitivity, matrix-converted to RGB, and displayed; the observer's visual brain then performs the nonlinear discrimination.
Alternatively, the spectral analysis images may be input pixel by pixel to the AI 80, and the two substances discriminated for each pixel. At this point the AI neuron coefficients 76 have already been loaded from the computer into the AI 80 via the control unit 78.
The results identified by the AI 80 may be displayed fused with the RGB image, with the pixel portions of the identified substance rendered in pseudo-color.
The discrimination by FS transform described above is strictly a method for discriminating two clusters. When a plurality of substances is to be identified, the feature axes are switched each time. Even if such switching is combined in a tree and performed several times, the tree-shaped discrimination can be executed at high speed because the feature axes have been narrowed to five or six and the circuit scale of the AI 80 is small.
Alternatively, a plurality of substances may be identified by having the AI learn (with supervision) the multispectral waveforms of the substances directly. In that case, however, the AI needs as many input terminals as there are spectral components, so the scale of the AI becomes large.
Next, a correction method for the case where the frequency sweep of the light source is distorted, or varies with time or temperature, will be described. Laser light sources whose frequency sweep linearity and drift are compensated are few and expensive, so if electrical correction is possible, the range of applications of this embodiment widens.
FIG. 11 is a diagram explaining the case where the linearity of the frequency sweep of the laser light source is distorted.
When the linearity of the frequency sweep is distorted, frequency modulation (frequency dispersion) 104 arises in the interference fringe signal through the optical path difference 103 between the reflected light 101 and the reference light 102, as shown in FIG. 11. This broadens the spectral width after the Fourier transform and lowers the resolution. And when the optical path difference 103 changes, the frequency modulation 104 changes as well.
The frequency modulation 104 caused by such sweep distortion can be corrected by applying phase-matched filtering to the spectral dispersion after the Fourier transform. As shown in FIG. 5, the phase matching is performed using FIR filters (Finite Impulse Response filters) 86-1 to 86-n, by superposition-integrating a complex conjugate signal generated by Fourier transforming the frequency modulation 104. The length of the FIR filter is set to allow for the range of the frequency dispersion after the Fourier transform.
As described above, when the optical path difference 103 in the light receiving direction shown in FIG. 11 changes, the frequency modulation of the interference fringes changes. The coefficients 87-1 to 87-n of the phase-matched filters (FIG. 5) are therefore switched and multiplied for each pixel in the light receiving direction.
However, because the change in time due to the change in the optical path difference 103 is minute compared with the frequency sweep time, the spectral dispersion after the Fourier transform hardly changes. Since the change in spectral dispersion depends on the sweep time, the bandwidth, and the magnitude of the distortion, the necessity of switching the phase-matched filter coefficients 87-1 to 87-n (FIG. 5) is judged accordingly.
For the coefficients 87-1 to 87-n of the phase-matched filters (FIG. 5), the complex conjugate signal generated by Fourier transforming the frequency modulation 104 shown in FIG. 11 is used. Changes in the linearity distortion of the light source frequency sweep are detected at an appropriate time interval, and the FIR coefficient generator 88 of FIG. 5 calculates the phase-matched filter coefficients 87-1 to 87-n and stores them in its internal memory for use.
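The following fragment sketches one way to realize this, following the text's description (assuming numpy; the FFT conventions, tap truncation, and function names are assumptions, and the sign conventions would need to match the actual sweep measurement):

```python
import numpy as np

def phase_matched_taps(phase_error, n_taps):
    """phase_error: phase of the parasitic frequency modulation across
    the sweep, obtained by time-integrating the measured sweep
    nonlinearity; n_taps: FIR length, chosen to cover the spectral
    dispersion range. Per the text, the coefficients are the complex
    conjugate of the Fourier transform of the modulation."""
    modulation = np.exp(1j * phase_error)
    h = np.fft.fftshift(np.conj(np.fft.fft(modulation)))
    c = len(h) // 2
    return h[c - n_taps // 2 : c + (n_taps + 1) // 2]

def correct_dispersion(depth_signal, taps):
    # Superposition integral (convolution) of the phase-matched filter
    # over the resolved complex signal in the light receiving direction.
    return np.convolve(depth_signal, taps, mode="same")
```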
The phase-matched filter coefficients 87-1 to 87-n are also added to the additional information 70 via the control unit 78 of FIG. 8 and sent to the data format creation unit 81.
Next, a method of detecting the linearity distortion of the frequency sweep will be described. The frequency sweep is detected using a spectroscope. As shown in FIG. 12, the light emitted from the frequency-swept point light source 1 (FIG. 1) is split by a beam splitter 111 and converted into parallel light by a collimating optical system 112.
FIG. 12 is a diagram explaining a configuration for detecting the linearity distortion of the frequency sweep.
The light is then dispersed into its wavelength components by a spectroscope 113 and imaged by an imaging optical system 115 onto a one-dimensional array of light receiving elements (line sensor) 114 arranged along the dispersion direction. The light from the light source 1 is focused into a spot, which moves along the one-dimensional light receiving element 114 as the frequency is swept.
During the frequency sweep of the light source, the one-dimensional light receiving element 114 is read out repeatedly, the movement of the spot is detected, and the distortion of the sweep frequency is thereby detected. A peak value detection circuit 115 interpolates the pixel data of the light receiving elements to detect the peak value, improving the accuracy of the spot position.
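One common way to realize such sub-pixel peak detection is parabolic interpolation through the maximum sample and its neighbors; a minimal sketch (an assumption for illustration, not necessarily the circuit's actual method):

```python
import numpy as np

def subpixel_peak(line):
    """Refine the spot position on the line sensor by fitting a
    parabola through the largest sample and its two neighbours,
    returning a fractional pixel index."""
    i = int(np.argmax(line))
    if 0 < i < len(line) - 1:
        y0, y1, y2 = line[i - 1], line[i], line[i + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            return i + 0.5 * (y0 - y2) / denom
    return float(i)
```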
The phase of the frequency modulation 104 (FIG. 11) is calculated in the FIR coefficient generator 88 (FIG. 5) using the time-integration formula used when calculating the phase of FM modulation.
The spot positions detected by the one-dimensional light receiving element 114 are temporarily stored in the memory 116 and converted into the frequency modulation waveform, which is sent to the FIR coefficient generator 88 of FIG. 5. The coefficients 87-1 to 87-n of the FIR filters that correct the linearity distortion are generated by calculation and sent to the FIR filters 86-1 to 86-n, where the correction is performed.
The readout repetition frequency of the one-dimensional light receiving element 114 does not need to be high, because the distortion of the light source frequency sweep is gradual; data interpolation can reproduce the sweep characteristic. The detection accuracy of the sweep distortion, however, must correspond to the resolution in the light receiving direction.
Alternatively, as described above, when the change in the spectral dispersion after the Fourier transform with respect to the optical path difference 103 can be neglected, there is no need to switch the phase-matched filter coefficients (87-1 to 87-n in FIG. 5). In that case, the complex signal of the reflected light from a reference reflection point may be detected using an optical interferometer, and the complex conjugate signal obtained by Fourier transforming it used as the phase-matched filter coefficients (87-1 to 87-n in FIG. 5).
(Application examples) Next, application examples of this embodiment will be described.
(First application example)
A single light receiving element is used in place of the imaging element 8 (FIG. 1), the reflected light from the object is detected two-dimensionally by a two-dimensional scanning mechanism, and three-dimensional resolution is performed by Fourier transform processing and two-dimensional filter processing. Since single light receiving elements include ultra-high-sensitivity types and types that can detect special wavelength bands outside visible light, this can be applied to three-dimensional imaging devices and inspection devices that use such wavelength bands.
(Second application example)
A one-dimensional array of light receiving elements (line sensor) is used in place of the imaging element 8 (FIG. 1), scanning is performed by a one-dimensional scanning mechanism in the direction crossing the array, the reflected light from the object is detected two-dimensionally, and three-dimensional resolution is performed by Fourier transform processing and two-dimensional filter processing.
Line sensors include types with many pixels, high-sensitivity types, and types that can detect special wavelength bands, so this can be applied to inspection devices for FA (Factory Automation) capable of three-dimensional measurement and composition identification by spectral analysis, to visual sensors for FA robots, and the like.
(Third application example)
This embodiment can resolve three dimensions without using an imaging optical system. As described above, however, combining this embodiment with an imaging optical system can reduce the number of operations of the two-dimensional filter.
FIG. 13 is a diagram showing the configuration of an application example of this embodiment.
As shown in FIG. 13, the resolution in the direction of the principal ray 123 is performed by Fourier transform processing, while the resolution in the plane perpendicular to the optical axis 121 is performed by the imaging optical system 122. Using the resulting three-dimensional complex signal, the depth of field of the imaging optical system 122 can be extended and the resolution degradation due to optical wavefront disturbance recovered by two-dimensional filter processing. Correcting the wavefront disturbance also corrects the aberrations of the optical system. Reference numeral 4a denotes a reflecting mirror.
FIG. 14(a) shows the imaging light flux 126 when the imaging position of the reflection point 124 (FIG. 13) lies in front of the imaging element 125 (FIG. 13), on the object side. FIG. 14(b) shows the imaging light flux 127 when it lies behind the imaging element, on the image side.
The dotted light fluxes 128-1 and 128-2 show the fluxes re-imaged by the virtual lenses 129-1 and 129-2 realized by the two-dimensional filter processing. Performing the two-dimensional filter processing on the pixels along the direction of the principal ray 123 extends the depth of field and recovers the resolution degraded by the optical wavefront disturbance described above.
Combining the imaging optical system with the two-dimensional filter processing in this way reduces the range over which the two-dimensional filter processing must be performed, greatly reducing the number of operations and making real-time processing possible.
This enables application to three-dimensional shape measurement devices, visual sensors for robots, and the like. Compared with a white-light interference microscope, three-dimensional shape measurement with superior lateral resolution, measurement speed, and angular characteristics becomes possible, with simultaneous spectral analysis.
If this embodiment is applied to a fundus imaging device, unwanted reflections from the surfaces of the objective optical system and the eyeball optics, and unwanted reflections due to vitreous opacity, can be removed on the basis of their different optical path lengths. In addition, the full aperture of the eyeball optics, which has conventionally been divided into rings for illumination and imaging in order to avoid unwanted reflections, can be used. High-definition, high-contrast fundus imaging can therefore be performed at high speed, and tomographic images of the retina can be detected in three dimensions.
In FIG. 13, the imaging mechanism is scanned one-dimensionally to detect a tomographic image, and resolution and depth-of-field extension are performed only in the optical axis direction 121 by Fourier transform processing and two-dimensional filter processing. The number of imaging elements needed for the depth-of-field extension is then 100 pixels or fewer, so the number of operations of the two-dimensional filter can be reduced still further, and tomographic images with high lateral resolution and deep depth of field can be detected in real time.
When the correction of the optical wavefront disturbance takes time, it is desirable to perform the recovery of the degraded resolution when a still image is frozen.
The frequency-swept light source may also be replaced with a low-coherence light source (for example, an SLD: Super Luminescent Diode), with the interference fringe signal dispersed by a spectroscope. This yields a frequency-swept interference fringe signal just as obtained with a swept source, and applying Fourier transform processing and two-dimensional filter processing to it achieves three-dimensional resolution.
FIG. 15 is a diagram showing a configuration using a low-coherence light source in this embodiment.
As shown in FIG. 15, the broadband light emitted from the low-coherence light source (SLD) 131 is reflected by the beam splitter 132 and illuminates the object 133. The reflected light from the reflection point 130 is imaged by the objective optical system 134, and unwanted light is removed by the slit 135 placed at the imaging plane.
The light is then converted into parallel light by the collimating optical system 136 and enters the spectroscope 137. The center of the aperture of the slit 135 lies on the optical axis of the objective optical system 134. After being dispersed by the spectroscope 137, the reflected light is imaged onto the imaging element 139 by the imaging optical system 138.
One of the beams separated by the beam splitter 132 is imaged by the cylindrical optical system 140 as a line segment 142 on the reflecting mirror 141. The light reflected from the line segment 142 is converted into parallel light through the cylindrical optical system 140, the objective optical system 134, the slit 135, and the collimating optical system 136.
After being dispersed by the spectroscope 137, the parallel light is imaged onto the imaging element 139 by the imaging optical system 138. The reflected light and the reference light are combined on the light receiving surface of the imaging element 139, and the resulting interference fringes are converted into electrical signals by the imaging element.
Here, the cylindrical optical system 140 and the reflecting mirror 141 are arranged so that the optical axis of the cylindrical optical system 140, folded at the reflecting surface of the beam splitter 132, coincides with the optical axis of the objective optical system 134, and so that the line segment 142 and the aperture of the slit 135 are optically conjugate.
The spectroscope 137, the imaging element 139, and the imaging optical system 138 are arranged so that the direction and range of dispersion by the spectroscope 137 coincide with the direction and range of the vertical pixel array of the imaging element 139.
With the above configuration, the object image formed by the objective optical system 134 at the aperture of the slit 135 is imaged onto the horizontal pixel array of the imaging element 139. The light is dispersed for each horizontal pixel, and the resulting interference fringe signals are detected from the vertical pixel array.
The interference fringe signals are read out sequentially from the imaging element 139 and stored in a memory (not shown). The scanning mechanism 143 then scans the object 133, or the optical axis 144, in the vertical direction 130 of FIG. 15, and the interference fringe signals are acquired two-dimensionally and stored in the memory.
The interference fringe signals are then read out from the memory, and the light receiving direction is resolved by Fourier transform processing, yielding three-dimensional resolution. Two-dimensional filtering then expands the depth of field and corrects disturbances of the optical wavefront.
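As an illustration only (not part of the embodiment), the following minimal Python sketch shows the Fourier-transform step that resolves the light receiving direction from the stored fringe data. The array layout, the window function, and the DC removal are assumptions; the two-dimensional filtering described in the text is omitted.

```python
# Minimal sketch: resolve the light receiving direction by Fourier
# transforming the spectral axis of the stored interference fringes.
import numpy as np

def resolve_depth(fringes: np.ndarray) -> np.ndarray:
    """fringes: (n_scan_positions, n_horizontal_pixels, n_spectral_samples),
    the fringe signals acquired two-dimensionally and stored in memory.
    Returns complex depth-resolved data with the same leading shape."""
    # Remove the non-interferometric (DC) component along the spectral axis.
    ac = fringes - fringes.mean(axis=-1, keepdims=True)
    # Window to suppress sidelobes, then transform the spectral axis.
    win = np.hanning(ac.shape[-1])
    return np.fft.rfft(ac * win, axis=-1)  # |result|: reflectance vs. depth
```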
FIGS. 16(a), (b), and (c) show the optical paths 147-1, 147-m, and 147-n of the reflected light when light from a reflection point 146, located away from the in-focus position 145 of the objective optical system 134 (FIG. 15), is detected while the optical axis 144 is scanned in the vertical direction of the drawing.
FIGS. 16(a), (b), and (c) show three patterns 147-1, 147-m, and 147-n: the case where the reflection point 146 is on the optical axis 144, and the cases where it is at either end of the received light beam 148.
As can be seen from FIGS. 16(a), (b), and (c), even if the reflection point 146 is distant from the in-focus position 145, scanning in the vertical direction allows the reflected light 147-1 to 147-n to be detected within the range of the received light beam 148.
Using these reflected lights 147-1 to 147-n, the reflection point 146 can be resolved by Fourier transform processing in the light receiving direction and by two-dimensional filtering. The reflection point 146 can be resolved in the same way when it falls behind the in-focus position 145.
When the reflection point 146 deviates from the in-focus position 145 of the objective optical system 134 (FIG. 15), the light beam is vignetted by the slit 149 and the sensitivity drops. However, when the image is re-formed by the two-dimensional filtering, the amplitudes of the reflected lights 147-1 to 147-n processed by the filter are added together, so the same sensitivity is obtained as when the reflection point 146 is at the in-focus position 145.
(Application to an intravascular OCT apparatus)
Next, an example in which the present invention is applied to an intravascular OCT apparatus will be described.
In an intravascular OCT (Optical Coherence Tomography) apparatus, a guide wire is inserted percutaneously, under X-ray fluoroscopy, from a blood vessel at the groin, arm, or wrist into a coronary artery of the heart. An OCT catheter of about 1 mmφ is then inserted along the guide wire and rotated to detect tomographic images of the coronary artery (2 to 4 mmφ, 15 cm in length).
This apparatus is used to diagnose the degree of vascular stenosis, the placement of stents installed to widen a stenosis, and subsequent restenosis. With the increase in percutaneous coronary intervention (PCI), demand for intravascular OCT is growing.
A key challenge for intravascular OCT apparatuses is qualitative diagnosis: identifying the substances that cause stenosis in blood vessels, such as plaque (deposits of fat and cholesterol), thrombi, and calcification, and grading the risk they pose.
At present, qualitative diagnosis is made from the morphological information of the tomographic image (shape, texture, and brightness), which requires considerable experience. The treatment method changes depending on the substance causing the stenosis and its grade. Qualitative diagnosis is particularly important for lipid-rich plaque, which, when detached, clogs small blood vessels and causes angina pectoris and myocardial infarction.
Furthermore, if the distribution and grade of the causative substances are known over the 15 cm of the coronary artery, various preventive measures against angina pectoris, myocardial infarction, and the like can be taken in addition to treatment.
For qualitative diagnosis, it is effective to analyze the reflection spectrum, which correlates strongly with the causative substance. In tomographic imaging (OCT), however, the attenuation of light propagating through living tissue varies greatly with wavelength, and the attenuation of the tissue along the propagation path is superimposed (integrated), which makes quantitative spectral analysis difficult. Moreover, in the near-infrared band used for OCT, the spectral components that characterize causative substances such as plaque have not been clearly established.
By contrast, visible-light images from fiber angioscopes have shown clearly that plaque, thrombi, and calcification can be distinguished by color. For example, plaque is yellowish; a thrombus is reddish, with a hue determined by its mixture with fibrin; and calcification and normal mucosa are both whitish but differ slightly in tone, including transparency.
If the visible-light spectrum is analyzed as described above, and the characteristic spectral features best suited to identifying the causative substances are extracted and emphasized, the accuracy of identifying those substances and judging their grade can be improved.
For these reasons, it is desirable that an intravascular OCT apparatus perform qualitative diagnosis based on spectral analysis of the vessel wall simultaneously with morphological diagnosis based on tomographic images, and that both diagnoses be possible over the 15 cm length of the coronary artery.
FIG. 17 is a diagram showing the focusing range required for diagnosing coronary arteries.
Another problem with intravascular OCT apparatuses is that, as shown in FIG. 17, the focusing range 150 required to diagnose a coronary artery with a maximum diameter of 4 mm is as wide as 1 mm to 4 mm, so the horizontal resolution of the tomographic image cannot be set high. When the horizontal resolution is low, reflections in the horizontal direction are superimposed, and the resolution in the depth direction also suffers as a result.
Yet another problem concerns acquisition speed. Currently, the OCT catheter is pulled back while rotating at high speed to detect images of the vessel wall over the 15 cm coronary artery. Because the pullback is performed under X-ray fluoroscopy while flushing an optically transparent contrast agent, and the flush is limited to 2 to 3 seconds (the recommended time for biological safety), the vessel wall image can only be obtained with millimeter-order resolution even if the OCT catheter is rotated at high speed. In addition, as noted above, in the near-infrared band the spectra that characterize the causative substances are not as distinct as in the visible band.
Applying the present embodiment to an intravascular OCT apparatus solves the above problems, making it possible to detect high-resolution tomographic images with a deep depth of field as well as images of the vessel wall, and to improve the accuracy of qualitative diagnosis through spectral analysis of the vessel wall images. An application example is described below.
FIG. 18 is a diagram showing a configuration in which this embodiment is applied to an intravascular OCT apparatus.
The imaging catheter 151 of FIG. 18 is rotated and pulled back within a sheath inserted from the aorta of the lower extremity into a coronary artery via a guide wire. The guide wire and sheath are existing therapeutic devices and are therefore not shown.
The connector 152 serves to fix (chuck) the imaging catheter 151 to the rotor section 153, in addition to allowing the imaging catheter 151 to be attached and detached.
The connector 152 fixes the imaging catheter 151 so that it rotates together with the rotor section 153, and so that the one-dimensional fiber array 154 housed in the imaging catheter 151 corresponds one-to-one, via a telecentric optical system 168, with the pixel array of the line sensor 155.
The imaging catheter 151 and the rotor section 153 house the mechanisms described below.
A frequency-swept light source (not shown) installed in the apparatus main body emits light whose frequency is swept from the visible to the near-infrared. The emitted light passes through an optical rotary joint 156 and is guided by a fiber 157 to a fiber coupler 158, where it is separated into illumination light and reference light.
The illumination light is guided by a fiber 159, converted into parallel light by a collimating optical system 160, passed through a cylindrical optical system 161, reflected by a beam splitter 162, and focused onto the end 163 of the fiber array 154, a one-dimensional array of about 100 fibers.
The NA (numerical aperture) of the cylindrical optical system 161 is set to match the NA of the fiber array 154. The fiber array 154 may also be arranged in a one-dimensional staggered arrangement, increasing the number of fibers to 200.
The illumination light guided by the fiber array 154 is emitted from the end 164 of the fiber array 154 and illuminates the inside of the blood vessel via an objective optical system 165 and a mirror 166. The objective optical system 165 is an image-side telecentric system whose focal point is set at the center of the range indicated by 150 in FIG. 17. Reflected light from the blood vessel lumen, the vessel wall, and the inner layers of the vessel wall is imaged onto the end 164 of the fiber array 154 by the objective optical system 165 and guided to the rotor section 153.
In the rotor section 153, the reflected light emitted from the end 163 of the fiber array 154 is combined with the reference light by the beam splitter 162 to produce interference fringes. The length of the fiber 167 that guides the reference light from the fiber coupler 158 corresponds to the round-trip length of the fiber array 154.
The interference fringes are imaged onto the line sensor 155 by the telecentric optical system 168. The telecentric optical system 168 magnifies the image of the end 163 of the fiber array 154 and is a double-telecentric system, so that the NA of the fiber array 154 and the directivity of the one-dimensional light receiving element 155 correspond one-to-one.
By driving the one-dimensional light receiving element 155 at high speed, the interference fringe signal received by each element is sampled. Since the one-dimensional light receiving element 155 has as few as 100 pixels, sufficiently high-speed driving can be achieved.
Because of the high-speed driving, the sensitivity of the one-dimensional light receiving element 155 may appear marginal at first glance. However, the amplitude (signal-to-noise ratio) after Fourier transform processing of the interference fringes is multiplied by the number of pixels in the light receiving direction through the phase matching of the Fourier transform (reaching the signal-to-noise ratio of a single spectral bandwidth), so no problem arises.
The drive system 169, which performs rotation and pullback, rotates and pulls back the imaging catheter 151 together with the rotor section 153, and interference fringe signals are detected sequentially over the 15 cm of the coronary artery. The interference fringe signals are sent to the apparatus main body via a rotary transformer 170 and stored in a memory (not shown) in the main body.
Instead of the rotary transformer 170, the optical rotary joint 156 may be made multi-channel and optical modulation applied to the signals, including the other control signals, to interface with the apparatus main body. A slip ring is used for the power supply.
The interference fringe signals are read out from the memory of the apparatus main body, divided into visible-band and near-infrared-band portions, and Fourier transformed. Then, using the complex signals obtained by the Fourier transform, the depth-of-field expansion and the correction of optical wavefront disturbance described with reference to FIGS. 14(a) and 14(b) are performed by two-dimensional filtering.
This makes it possible to detect RGB images of the vessel wall, three-dimensional shape measurements, and spectral analysis images, while simultaneously detecting near-infrared three-dimensional images and tomographic images of the blood vessel lumen and the vessel wall.
Three-dimensional images viewed from an arbitrary viewpoint can be constructed by CG techniques, and transmission images and tomographic images can be displayed. As described above, analyzing the spectrum of each pixel enables qualitative diagnosis of the causative substances.
When observing images in real time, the depth-of-field expansion and wavefront correction, which require considerable processing time, can be performed only for still-image display; if the focus is moved manually, real-time observation is possible.
Next, a method for detecting images of a 15 cm coronary artery using the intravascular OCT apparatus to which this embodiment is applied will be described with reference to FIG. 19. The angle of view of the objective optical system 165 is set so that the imaging range 171 closest to the imaging catheter 151 in FIG. 18 (corresponding to the width 181 of the image in FIG. 19) is 1.5 mm. Then, while the imaging catheter 151 and the rotor section 153 are rotated at 75 rotations per second, a pullback is performed through the 15 cm coronary artery for 2 seconds to acquire a three-dimensional image.
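As a small worked check of the scan geometry just described (assuming a constant pullback speed, which the text does not state explicitly):

```python
# Worked check of the pullback geometry; the numbers follow from the text.
rotation_rate_hz = 75       # catheter rotations per second
pullback_time_s = 2         # contrast-flush-limited pullback duration
artery_length_mm = 150      # 15 cm coronary artery
frame_width_mm = 1.5        # imaging range 171 / image width 181

rotations = rotation_rate_hz * pullback_time_s   # 150 frames
pitch_mm = artery_length_mm / rotations          # 1.0 mm advance per rotation
overlap_mm = frame_width_mm - pitch_mm           # 0.5 mm nominal overlap
print(rotations, pitch_mm, overlap_mm)           # -> 150 1.0 0.5
```

This is consistent with the 150 images of 1 mm net width described next.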
 図19は、プルバックで取得して切り開いた血管壁の画像を示している。重複部分182を除いた幅1mmの血管壁の画像が、血管長15cmに渡って150枚検出される。重複部分182の幅は、血管壁までの距離によって変わる。 FIG. 19 shows an image of a blood vessel wall that has been cut open by pulling back. 150 images of the blood vessel wall with a width of 1 mm excluding the overlapping portion 182 are detected over the blood vessel length of 15 cm. The width of overlapping portion 182 varies with the distance to the vessel wall.
From the pixel position data of the detected three-dimensional images, CG techniques are used to correct the pixel positions and magnification of the overlapping portions 182; the images can then be stitched together by smoothing the amplitude intensities of the images in the pullback direction and adding them.
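The following is a minimal sketch (an assumed implementation, not taken from the text) of that final step: two adjacent, already position- and magnification-corrected strips are cross-faded over their overlap so that the amplitude intensities are smoothed in the pullback direction and added.

```python
# Minimal stitching sketch: smooth (cross-fade) and add two corrected strips.
import numpy as np

def blend_strips(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """a, b: (height, width) amplitude images; the last `overlap` columns
    of `a` image the same tissue as the first `overlap` columns of `b`."""
    ramp = np.linspace(1.0, 0.0, overlap)            # smoothing weights
    mixed = a[:, -overlap:] * ramp + b[:, :overlap] * (1.0 - ramp)
    return np.hstack([a[:, :-overlap], mixed, b[:, overlap:]])
```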
Image data resolved three-dimensionally over the 15 cm of the coronary artery can be displayed from any viewpoint and at any magnification. The spectral analysis images described above can then raise the accuracy of qualitative diagnosis.
The resolution in the light receiving direction obtained by the Fourier transform processing is higher than the horizontal resolution determined by the fiber array pitch. Therefore, if the angle of the mirror 166 (FIG. 18) is adjusted so that the vessel wall is illuminated and imaged obliquely, the resolution of the vessel wall image can be increased.
In the description of FIG. 18, the fiber array 154 uses broadband optical fibers that guide both the visible and near-infrared bands; alternatively, separate fiber arrays for the visible band and the near-infrared band may be arranged vertically in parallel, with two processing circuits provided. Separate frequency-swept light sources may also be prepared for the visible band and the near-infrared band.
Instead of detecting tomographic images with near-infrared light, an ultrasonic transducer may be mounted at the tip of the imaging catheter 151, and a mechanism for detecting tomographic images with ultrasound may be combined with the vessel wall image detection and spectral analysis mechanisms described above. Ultrasonic tomography has lower resolution than near-infrared OCT but a greater tomographic detection depth, and each modality has its own strengths in morphological diagnosis.
Next, FIG. 20 is a diagram explaining an example in which the present embodiment is applied to X-ray or γ-ray imaging.
The X-rays emitted from the X-ray source 191 in FIG. 20 have a frequency sweep and coherence commensurate with the resolution to be detected. Alternatively, the X-ray amplitude is amplitude-modulated with a frequency sweep commensurate with the resolution to be detected.
X-rays emitted from the X-ray source 191 pass through an X-ray beam splitter 192 and irradiate a subject 193. The beam splitter 192 has the shape of an ellipsoidal surface; one focal point is located at the exit aperture of the X-ray source 191 and the other on the reflecting surface of a reflector 194.
Part of the X-rays emitted from the X-ray source 191 is reflected by the X-ray beam splitter 192, further reflected by the reflector 194, and irradiates the surface of a two-dimensional light receiving element 195 as reference X-rays.
The X-ray beam splitter 192 is an X-ray mirror whose surface has been polished by the EEM (Elastic Emission Machining) method to a very high surface accuracy of ±1 to 2 nm. In recent years, such dedicated X-ray mirrors have become commercially available. By adjusting the installation angle of the X-ray mirror, its reflectance and transmittance are adjusted so that it can be used as the beam splitter 192.
The X-rays reflected (backscattered) from the subject are combined by the beam splitter 192 with the reference X-rays reflected from the reflector 194 to produce interference fringes. The interference fringes are converted into electrical signals by the two-dimensional light receiving element 195, such as a CMOS or CCD image sensor or a flat panel detector (FPD).
By repeating the imaging of the two-dimensional light receiving element 195 at high speed within the X-ray sweep time, X-ray interference fringes along the detection direction are detected, and the detection direction is resolved by Fourier transform. The plane perpendicular to the detection direction is then resolved by the two-dimensional filtering described above, resolving the subject three-dimensionally.
In state-of-the-art CT, the time to acquire three-dimensional data has been shortened to about one second. However, because CT resolves using absorption information alone, without phase information, its resolution remains on the order of millimeters.
By contrast, with the imaging method of this embodiment, the time to acquire three-dimensional data can potentially be reduced to a few milliseconds, the same as a shutter operation, and because phase information is used for resolution, micron-order resolution can be obtained from the pixel pitch of the two-dimensional light receiving element 195. The angle of view, magnification, and resolution can be set freely according to the purpose, and the apparatus is simpler in scale than CT.
Since the X-ray wavelength is about two orders of magnitude shorter than that of visible light, the frequency sweep required to obtain micron-order resolution needs only a very narrow fractional bandwidth, which can be set so as to avoid wavelengths at which nonlinear scattering such as X-ray fluorescence occurs. In addition, since the aperture of the imaging element 195 can be made small, the number of three-dimensional filtering operations can be reduced. Announcements of X-ray sources capable of frequency sweeping have also become frequent in recent years.
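As a rough order-of-magnitude check of this claim (the wavelength and target resolution below are assumed values, not taken from the text), the standard Fourier-domain axial resolution relation can be inverted for the required sweep width:

```latex
% Assumed: lambda_0 = 0.1 nm (hard X-ray), target axial resolution
% delta_z = 1 um; delta_z ~ (2 ln 2 / pi) * lambda_0^2 / Delta_lambda.
\[
  \Delta\lambda \approx \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\delta z}
  = 0.44 \times \frac{(10^{-10}\,\mathrm{m})^{2}}{10^{-6}\,\mathrm{m}}
  \approx 4.4\times 10^{-15}\,\mathrm{m},
  \qquad
  \frac{\Delta\lambda}{\lambda_0} \approx 4.4\times 10^{-5}.
\]
```

A fractional bandwidth of order 10^-5 is indeed far narrower than the near-octave sweeps required in the visible band, consistent with the statement above.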
As described above, if the array pitch of the imaging elements is set to the manufacturing limit of 1 μm, three-dimensional resolution on the order of a few microns becomes possible; the imaging magnification and the corresponding resolution can be set freely, and transmission images, cross-sectional images, and three-dimensionally constructed images can be displayed according to the purpose. Furthermore, if the frequency sweep is matched to a spectral absorption band in which the characteristics of a substance appear, there is a possibility of identifying the substance by spectral analysis.
Example 1.
FIG. 22 shows an imaging device 1001 according to Example 1; the imaging device 1001 is capable of resolving a three-dimensional space and of spectral analysis for each pixel. To aid understanding, FIG. 23 shows the imaging device 1001 of FIG. 22 three-dimensionally.
Light emitted from a point light source 1039 passes through a cylindrical optical system 1017 and a first slit 1015, and is introduced into a first beam splitter 1011 by a collimating optical system 1013 and a mirror 1031. The illumination light separated by the first beam splitter 1011 enters a first collimating optical system 1007 via a second beam splitter 1008, which serves as a splitting section, and illuminates a subject 1003 through a second slit 1006 and an objective optical system 1005. Along the optical path, the illumination light separated off by the second beam splitter 1008 irradiates a reflector 1010 via a second collimating optical system 1009, generating reference light.
The first and second slits 1015 and 1006 have a line-shaped (substantially rectangular) aperture 1006a extending in the direction perpendicular to the plane of FIG. 22 (the X direction), and the illumination light passing through the aperture 1006a illuminates the subject 1003 in a line. The second slit 1006 is positioned on the optical path at the imaging position of the objective optical system 1005, and the position of the second slit 1006, the position of the reflector 1010, and the position of the light receiving surface of the two-dimensional light receiving sensor 1019 are conjugate.
In this example, the illumination light is projected through the second slit 1006, so the objective optical system 1005, or the components of the imaging system to the right of the second slit 1006 (upstream in the optical path), can be exchanged according to the application of the imaging device 1001.
Reflected light from the subject 1003 passes through the objective optical system 1005, the second slit 1006, and the first collimating optical system 1007, and is made to interfere, by the second beam splitter 1008, with the reference light reflected from the reflector 1010. Here, the length of the reflector 1010 in the direction perpendicular to the plane of FIG. 22 (the arrow X direction) corresponds to the length of the aperture 1006a of the second slit 1006 in the X direction. The optical path length from the second beam splitter 1008 to the reflector 1010 corresponds to the optical path length from the second beam splitter 1008 to the second slit 1006.
Incidentally, the components of the illumination optical system and of the interference optical system indicated by the dashed line II in FIG. 22 may be installed anywhere on the optical path between the spectroscope 1014 and the subject 1003, and the components of the illumination optical system and those of the interference optical system may be installed separately. If the components of the interference optical system are placed on the subject 1003 side of the second slit 1006 (downstream on the optical path), a known hyperspectral camera can be used for the components arranged on the right side (upstream on the optical path), including the second slit 1006.
The reflected light made to interfere with the reference light (the interference light) is fed into the spectroscope 1014. In this example, the spectroscope 1014 of FIG. 22 uses a transmission diffraction grating, which is advantageous for miniaturization, but a reflection-type spectroscope may be used. The reflected light dispersed into wavelength components by the spectroscope 1014 is imaged onto a two-dimensional light receiving sensor 1019 by an imaging optical system 1016. A known CCD image sensor or a global-shutter CMOS image sensor can be used as the two-dimensional light receiving sensor 1019.
On the element rows of the two-dimensional light receiving sensor 1019 running perpendicular to the plane of FIG. 22 (see the X direction in FIG. 23), the linearly illuminated portion of the subject 1003 is imaged; on the element rows running in the vertical direction of FIG. 22 (see the Y direction in FIG. 23), the wavelength components dispersed for each pixel (see PX in FIG. 23) are imaged. In the light of these dispersed wavelength components, interference fringes arise at frequencies corresponding to the optical distance from the objective optical system 1005 to the reflection points on the subject 1003, superimposed in number equal to the number of reflection points. Therefore, if a Fourier transform is performed on the interference fringe signal in the vertical direction (Y direction) of FIG. 23, resolution in the direction of the optical axis OA can be performed on the same principle as the optical coherence resolution processing of OCT (Optical Coherence Tomography).
It is difficult to electrically detect the slight time difference (phase difference) with which light reaches each pixel of the subject 1003 (see px in FIG. 23). However, if reference light is added to the reflected light, the light is dispersed, and the resulting interference fringes are Fourier transformed, the time differences corresponding to the optical distance differences can be converted into differences in spatial frequency components, and the optical distance to each pixel (see px in FIG. 23) and its reflectance can be detected electrically. Furthermore, because each dispersed wavelength is imaged by the imaging optical system 1016 onto an element row of the two-dimensional light receiving sensor 1019 (see PX in FIG. 23), and a Fourier transform is performed on the wavelength band components of the signals received by that row, the wavelength band components of the light are added with their phases matched (a matched filter), giving a result similar to pulse compression in radar. In the direction of the optical axis OA, the same resolving effect is obtained as with a radar that measures an object by transmitting and receiving radio waves, as if light of a short pulse width (determined by the wavelength bandwidth and the center wavelength) were used.
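The following minimal simulation (all values are illustrative assumptions) demonstrates the principle: two reflection points at different optical distances superimpose fringes of different frequencies on the spectral axis, and the Fourier transform separates them, with the peak amplitude growing in proportion to the number of spectral samples while uncorrelated noise grows only as its square root (the matched-filter / pulse-compression gain).

```python
# Two reflectors -> two fringe frequencies -> two peaks after the FFT.
import numpy as np

n = 1024                                      # spectral samples (assumed)
k = np.linspace(2*np.pi/0.9, 2*np.pi/0.4, n)  # wavenumbers, 0.4-0.9 um band
fringe = np.cos(2*k*20.0) + np.cos(2*k*55.0)  # OPDs of 20 um and 55 um

profile = np.abs(np.fft.rfft(fringe * np.hanning(n)))
# `profile` shows two peaks whose bin positions scale with optical distance.
print(profile.argmax())                       # bin of one of the two peaks
```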
Incidentally, a broadband light source is used as the light source 1039 in order to satisfy the bandwidth required for resolution and spectral analysis. If a swept broadband light source becomes available in the future, a one-dimensional light receiving sensor arranged in the direction perpendicular to the drawing (the X direction) may be used instead of the spectroscope 1014 and the two-dimensional light receiving (area) sensor 1019. However, such a broadband light source requires high linearity and frequency stability in its broadband wavelength sweep, and the one-dimensional light receiving sensor requires a function for reading out the outputs of its elements in parallel, and a function for storing those outputs and performing the Fourier transform while reading them out sequentially.
By repeating the above series of processes while the scanning mechanism 1004 scans in the vertical direction (S direction) of FIG. 22, the three-dimensional resolution of the subject 1003 and the spectral intensity distribution for each of its pixels px can be detected.
The processing for generating an RGB image is described below. The output of the two-dimensional light receiving sensor 1019 shown in FIG. 22 is input to the FFT 61 shown in FIG. 24, located in the signal processing unit 1051; a Fourier transform is performed over the visible band 80 shown in FIG. 25, and a W (white) signal 81 shown in FIG. 25 is generated. When the subject is a living body and a portion at some depth below the surface is to be rendered as a transmission image, the Fourier transform is performed over a range that includes the W signal 81 and the highly transmissive near-infrared region 85 shown in FIG. 25, and the result is generated as the W signal.
In parallel with the generation of the W signal 81, the FFT 62 first performs a Fourier transform of the R band shown in FIG. 25 to generate an R signal 82. The R signal 82 undergoes pixel interpolation in the interpolation memory unit 63 shown in FIG. 24 and is synchronized with the W (luminance) signal 81 on the time axis (pixel positions). Subsequently, the FFT 62 shown in FIG. 24 performs a Fourier transform of the B band shown in FIG. 25 to generate a B signal 83. The B signal 83 likewise undergoes pixel interpolation in the interpolation memory unit 64 shown in FIG. 24 and is synchronized with the W signal 81 in pixel position.
These W, R, and B signals are then matrix-converted by a matrix converter 65 to generate RGB video signals.
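As an illustration (band boundaries, the interpolation, and the matrix coefficients below are assumptions; the text gives no numerical values), a minimal sketch of the band-split Fourier transforms and the matrix conversion could look like this:

```python
# Minimal sketch: W/R/B signals from band-limited FFTs, then a crude matrix
# conversion to RGB. All indices and coefficients are illustrative only.
import numpy as np

def band_profile(fringes, lo, hi):
    """FFT only the spectral samples [lo, hi) of each pixel's interferogram
    (fringes: (n_pixels, n_spectral)) into a depth profile."""
    seg = fringes[:, lo:hi] - fringes[:, lo:hi].mean(axis=1, keepdims=True)
    return np.abs(np.fft.rfft(seg * np.hanning(hi - lo), axis=1))

def to_rgb(fringes):
    n = fringes.shape[1]
    W = band_profile(fringes, 0, n)            # full visible band
    R = band_profile(fringes, 0, n // 3)       # assumed red-end third
    B = band_profile(fringes, 2 * n // 3, n)   # assumed blue-end third
    # R and B have fewer depth bins than W; interpolate to W's grid,
    # mirroring the interpolation-memory synchronization in the text.
    x = np.linspace(0, 1, W.shape[1])
    xr = np.linspace(0, 1, R.shape[1])
    R = np.array([np.interp(x, xr, row) for row in R])
    B = np.array([np.interp(x, xr, row) for row in B])
    G = W - R - B                              # crude matrix conversion
    return np.stack([R, G, B], axis=-1)
```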
When information for accurate color reproduction is required, XYZ signals can be obtained by multiplying the output of the two-dimensional light receiving sensor 1019 (see FIG. 23) by coefficients corresponding to the XYZ color matching functions and performing the Fourier transform.
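A minimal sketch of that weighting (assuming the CIE 1931 color matching functions are tabulated on the sensor's wavelength grid; the tabulated values are not reproduced here):

```python
# Weight the spectral output by the XYZ color matching functions, then FFT.
import numpy as np

def xyz_profiles(fringes, cmf_x, cmf_y, cmf_z):
    """fringes: (n_pixels, n_spectral); cmf_*: (n_spectral,) weights."""
    out = []
    for cmf in (cmf_x, cmf_y, cmf_z):
        w = fringes * cmf                          # per-wavelength weighting
        w = w - w.mean(axis=1, keepdims=True)
        out.append(np.abs(np.fft.rfft(w, axis=1)))
    return out                                     # X, Y, Z depth profiles
```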
Because the R signal 82 and the B signal 83 have narrower wavelength bandwidths than the W signal 81 and different center wavelengths, their resolution is about one third that of the W signal 81. This causes no problem, since the resolution of the human eye for these colors is likewise about one third.
In this way, as shown in FIG. 25, the R signal 82 and the B signal 83 can be generated by performing Fourier transforms over the ranges corresponding to the divided R and B bands. This signal generation processing rests on the superposition principle in a linear system: the broadband light source can be regarded as the linear sum of multiple light sources with divided wavelength bands, including R, G, B, and infrared, and all of the processing steps (illumination, reflection, interference with the reference light, dispersion, and Fourier transform) are linear. Therefore, extracting from the output of the two-dimensional light receiving sensor 1019 the time-series signal corresponding to the R or B band and Fourier transforming it yields the same image as if the signal generation processing had been performed with a single R or B light source.
Multispectral analysis performed using the multispectral data obtained by the imaging device 1001 is described below.
The imaging device 1001 can be used as a known hyperspectral camera by sliding the reflector 1010 in the direction of arrow H so that the absorbing band 1012 prevents reference light from being generated.
Alternatively, multispectral data (spectral characteristics) at a given pixel (see px in FIG. 23) can be detected by convolving the spectral output of the two-dimensional light receiving sensor 1019 of the imaging device 1001 with the interference fringe frequency component corresponding to the optical path length to that pixel.
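One way to read this step (an interpretation, not a procedure spelled out in the text) is as demodulation: multiplying the spectral output by the fringe component for the pixel's optical path length shifts that pixel's contribution to DC, and a low-pass convolution leaves its spectral envelope.

```python
# Sketch: extract one pixel's spectrum by demodulating at its fringe
# frequency and low-pass filtering (a sliding-window convolution).
import numpy as np

def pixel_spectrum(spectral_out, k, z, win=32):
    """spectral_out: interferogram samples over wavenumbers k (1-D arrays);
    z: optical path difference to the pixel of interest (assumed known)."""
    demod = spectral_out * np.exp(-1j * 2 * k * z)   # shift fringe to DC
    kernel = np.ones(win) / win                      # boxcar low-pass
    return np.abs(np.convolve(demod, kernel, mode='same'))
```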
To identify a substance using multispectral data, first, as much of the multispectral data needed to identify the target substance as possible is acquired with the imaging device 1001. Next, the name and composition of the target substance and the information required for preprocessing (normalization processing that reduces the dispersion of clusters; this information includes variations in illumination brightness, variations in the illumination wavelength band, and image cropping information, and is called a tag) are added to the acquired multispectral data by the data format creation unit 70, and the result is stored as RAW data on an off-line computer.
After this RAW data is preprocessed on the off-line computer, the multispectral waveforms are used to train an AI (supervised learning), or the feature axes are narrowed down by multivariate analysis, such as principal component analysis or the FS (Foley-Sammon) transform, in a coordinate space whose multidimensional orthogonal axes are the spectral components. The trained AI data (neuron coefficients) obtained by this processing, and the matrix transform coefficients for projecting the multispectral data onto the feature axes, are sent to the control unit 71 shown in FIG. 23 and stored in its memory.
When there are multiple substances to identify, the multispectral data corresponding to each substance may be learned directly by the AI (supervised learning) to identify the multiple substances. As shown in FIG. 26, identification by AI is possible with a nonlinear partition Z.
Alternatively, as shown in FIG. 26, the spectral components can first be narrowed down to feature axes EU1 to EUn by multivariate analysis such as principal component analysis or the FS (Foley-Sammon) transform, much as in data compression (in our experience, spectral data with 1,000 axes can be narrowed down to at most five or six feature axes); training the AI on these axes greatly reduces its scale. This AI has the same configuration as the AI 75 of FIG. 23 built into the imaging device 1001 of this example. The projective transformation onto the feature axes EU1 to EUn is performed by multiplying the output of the two-dimensional light receiving sensor 1019, in multipliers 73-1 to 73-n, by the matrix transform coefficients 72-1 to 72-n sent from the external computer to the control unit 71 of FIG. 23 and stored there.
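In matrix form, the multiplier bank 73-1 to 73-n amounts to one matrix-vector product per pixel; a minimal sketch (shapes and coefficient values are hypothetical placeholders) follows:

```python
# Project a ~1000-band spectrum onto a handful of learned feature axes.
import numpy as np

n_bands, n_axes = 1000, 6
# Stand-ins for the coefficients 72-1..72-n learned off-line (e.g. by
# principal component analysis or the Foley-Sammon transform); random
# placeholders for illustration only.
W = np.random.randn(n_axes, n_bands)

def project(spectrum: np.ndarray) -> np.ndarray:
    """spectrum: (n_bands,) multispectral vector of one pixel.
    Returns its coordinates on the feature axes EU1..EUn."""
    return W @ spectrum
```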
However, since the above identification method distinguishes between two clusters, the feature axes must be switched each time when multiple substances are to be identified. Even if this switching is combined in a tree and performed several times, the circuit scale is greatly reduced, so substances can be identified at high speed.
The time required by the imaging device 1001 of Example 1 shown in FIG. 22 to complete one image is the readout time of the two-dimensional light receiving sensor 1019 multiplied by the number of lines in the scanning direction, so a high-speed image sensor is needed for fast imaging. For example, with an existing 2-megapixel image sensor running at 10,000 frames per second, the frame rate of the imaging device 1001 of Example 1 is 10 frames per second (i.e., about 1,000 scan lines per image). The imaging device 1001 of Example 1 is therefore suited to detecting stationary subjects or subjects with little motion. However, for an imaging device that can limit the number of spectra to about 256, using a 2-megapixel, 10,000-frames-per-second image sensor allows detection at a frame rate of 60 frames per second.
A specific application is a three-dimensional measuring device for computer graphics (CG). From the captured image data, CG can display images observed from any viewpoint and direction, as well as transmission images and cross-sectional (tomographic) images. The spectral information enables accurate color reproduction and display adapted to the illumination color. In addition, since the imaging device 1001 of this example is capable of component analysis for each pixel, it can be applied to microscope apparatuses with high resolution in the Z-axis direction, surface inspection apparatuses capable of surface shape measurement and colorimetry, fundus cameras capable of tomographic detection and composition analysis, and the like.
Example 2. An imaging device capable of one-shot imaging and spectral analysis of a subject surface
FIG. 27 shows an example of an imaging device 1101 that, by combining line-dispersion spectral image detection (see the summary of the invention) with optical coherence resolution processing (processing based on the imaging principle of optical coherence tomography; see Example 1), can perform imaging and spectral analysis of a subject 1003 with one-shot imaging suited to dynamic measurement. In the imaging device 1101 of Example 2, the objective optical system 1005 of Example 1 (see FIG. 22) includes the elements of a special cylindrical optical system 1025. Resolution (imaging) in the direction perpendicular to the plane of FIG. 27 (the X direction) is performed by the cylindrical optical system 1025, while resolution along the longitudinal direction of the focal line 1022 is performed, as in side-looking radar, by illuminating and imaging the subject 1003 obliquely (in a direction inclined with respect to the optical axis OA), using the optical coherence resolution processing in the horizontal direction of FIG. 27 (the arrow Z direction). In this example, three-dimensional shape measurement is not possible, but imaging and spectral analysis of a moving subject 1003 are possible in one shot.
In FIG. 27, the description focuses on the illumination light passing through the point where the aperture center C of the second slit 1026 intersects the optical axis OA; illumination light passing through other points of the second slit 1026 can be explained by the same principle. As shown by the focal line 1022 (broken line), the light is focused obliquely with respect to the optical axis OA and illuminates the subject 1003 from an angle. The curvature and aperture of the second cylindrical optical system 1025 in the direction perpendicular to the drawing (the X direction) are set to change gradually along the longitudinal direction of the focal line 1022, so that as the optical path length from the second slit 1026 to the focal line 1022 increases from one longitudinal end 1023 toward the other end 1024, the focal position of the second cylindrical optical system 1025 becomes progressively longer while the degree of focusing is kept constant. The second cylindrical optical system 1025 also includes optical elements whose projection magnification in the direction perpendicular to the plane of FIG. 27 (the X direction) changes gradually along the longitudinal direction of the focal line 1022, so that the focal lines of the illumination light passing through each point of the aperture 1026a of the second slit 1026 in the X direction are projected in parallel onto the surface of the subject 1003. The diffusion and intensity distribution (lens power) of the second cylindrical optical system 1025 in the vertical direction (the arrow Y direction) are set appropriately according to position along the longitudinal direction of the focal line 1022. It is therefore desirable that the components of the second cylindrical optical system 1025 include a free-form imaging element (one asymmetric with respect to the optical axis OA).
The resolution and sensitivity along the longitudinal direction of the focal line 1022 are described next. As shown in FIG. 27, the illumination light passing through the point where the aperture center C of the second slit 1026 intersects the optical axis OA is converged, through the second cylindrical optical system 1025, into the line-shaped focal line 1022 on the surface of the subject 1003 from an oblique direction. The light reflected from the subject 1003 passes back through the second cylindrical optical system 1025, the second slit 1026, and the first collimating optical system 1007, and is made to interfere, by the second beam splitter 1008, with the reference light reflected from the reflector 1010. Resolution along the longitudinal direction of the focal line 1022 is performed by an OCI (Optical Coherence Imaging; hereinafter referred to as OCI) processing unit provided in the signal processing unit 1051. OCI processing is based on OCT (Optical Coherence Tomography) processing: using Michelson interferometry and Fourier transform processing, micron-order short light pulses are, in effect, projected obliquely, and the light pulses reflected successively from the subject surface are received to obtain a one-dimensional image (optical coherence resolution processing).
The configuration to the left of the second slit 1026 (downstream of the optical path) resembles a pinhole camera in the vertical direction (the Y direction of FIG. 27), so it may appear to have low sensitivity. However, as described above, performing the Fourier transform in the optical coherence resolution processing adds the wavelength components of the light with their phases matched, so, as with pulse compression in radar, the intensity of the signal after the Fourier transform is multiplied by the number of pixels along the longitudinal direction of the focal line 1022. The SN ratio improves by the square root of the number of pixels, so there is no concern about sensitivity.
The reflected light is phase-shifted in proportion to the difference in the round-trip distance from the second slit 1026 to each reflection point on the focal line 1022 (increasing as the reflection points move farther apart along the focal line 1022), and is received as a superimposed signal. When this superimposed reflected light is made to interfere with the reference light from the reflector 1010, interference fringes arise at frequencies proportional to the phase shifts, and these too form a superimposed signal. The fringe frequency is higher for reflection points with a longer optical path length from the second slit 1026 to the reflection position on the focal line 1022.
Resolution in the direction perpendicular to the plane of FIG. 27 (arrow X) is achieved by imaging through the cylindrical optical system 1025. The configuration shown to the right of the second slit 1026 (upstream of the optical path) is identical to that of Example 1 shown in FIG. 22 and, as in Example 1, detects the spectral intensity distribution for each pixel.
According to the imaging method of Example 2 described above, imaging of the surface of the subject 1003 and spectral analysis can be performed in one shot, making it suitable for detecting moving subjects. As shown in FIG. 28, the image obtained with this imaging method is the same as an image obtained by illuminating the subject 1003 obliquely and observing, from the tangential direction 7 of the wavefront 5, the reflection point 6 where the focal line 1022 on the subject 1003 intersects the wavefront 5 of the illumination light. Furthermore, when the subject 1003 is translucent, as the enlarged view 7A at the lower left of FIG. 28 shows, reflection signals from the same wavefront inside the subject are superimposed, and the same image is obtained as when a transmission image is observed from the tangential direction 7 on the wavefront 5 intersecting the optical axis OA. When the subject of this imaging method is a living body, the highly transmissive near-infrared region (0.68 μm to 1.5 μm) should be included in the illumination light. The symbol T in the view 7A schematically indicates tissue, such as a blood vessel, inside the subject 1003.
Specific applications of Example 2 include handheld inspection devices such as surface inspection devices, colorimeters, microscopes, and intraoperative microscope systems.
Example 3. An imaging device capable of high-speed large-area imaging and spectral analysis
FIG. 29 shows Example 3, which includes a plurality of imaging devices 1201. The imaging device 1201 of Example 3 has a configuration in which the second and first slits 1026 and 1027 of the imaging device 1101 of Example 2 are replaced with a plurality of known pinholes, and the two-dimensional light receiving sensor 1053 is replaced with a known line sensor whose element row is arranged in the vertical direction (see the arrow Y direction in FIG. 27). A configuration in which the imaging device 1201 performs detection while the subject 1003 is moved in the direction perpendicular to the plane of FIG. 29 (the arrow H direction) can be applied to high-speed inspection equipment used on factory production lines (belt conveyors) and the like.
FIG. 29 shows an example of an inspection apparatus that can inspect a large area at high speed by combining the imaging devices 1201 described above in multiple stages. As shown in FIG. 29, the images of the overlapping portions 1032 acquired by adjacent imaging devices 1201 are divided into blocks, the correlation between blocks is calculated to detect the three-dimensional displacement, and, based on that displacement, pixels are moved and interpolated and the images are stitched together, so that a large area can be detected. In addition, frequency component analysis can be performed for each pixel.
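A minimal sketch of the block-correlation step (block size and search range are assumptions, and the search is restricted to the in-plane shift for brevity; the text describes a three-dimensional displacement):

```python
# Find the integer shift of one overlap block that best matches the other.
import numpy as np

def block_shift(ref: np.ndarray, tgt: np.ndarray, search: int = 8):
    """ref, tgt: same-sized 2-D blocks from the overlapping portion 1032.
    Returns the (dy, dx) shift of tgt maximizing correlation with ref."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(tgt, dy, axis=0), dx, axis=1)
            score = float(np.sum(ref * shifted))  # unnormalized correlation
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```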
 To prevent mutual interference of the illumination light in the overlapping portions 1032, the imaging devices 1 to n (n being a natural number) are driven and exposed separately as the odd-numbered devices and the even-numbered devices in the array, and the image of one line (the arrow W direction in FIG. 29) is detected from the two exposures.
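 A sketch of this interleaved drive, in the same vein as the stitching sketch above, is shown below; the `trigger`/`read` device interface is hypothetical and stands in for whatever exposure control the sensors actually provide.

```python
def capture_line(devices):
    """Capture one line (arrow W) in two exposures.

    Odd-numbered devices (1, 3, 5, ...) expose first, then the
    even-numbered ones, so that neighbouring illumination spots never
    coexist in the overlap regions within a single exposure.
    """
    line = [None] * len(devices)
    for parity in (0, 1):  # index 0 is device 1, so parity 0 = odd devices
        active = [i for i in range(len(devices)) if i % 2 == parity]
        for i in active:
            devices[i].trigger()         # hypothetical: start exposure
        for i in active:
            line[i] = devices[i].read()  # hypothetical: read out fringe image
    return line
```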
 Specific applications of Example 3 include inspection equipment that inspects large areas such as sheets or steel plates at high speed, and equipment that collectively inspects, by spectrum analysis, specimens placed in large arrays of pits, as in blood analysis or genetic testing.
 Although the embodiments described above are configured so that the optical interference resolution processing and the spectral image detection processing are performed simultaneously, the present invention is not limited to these embodiments. For example, depending on the intended use of the imaging device, the execution timing of the optical interference resolution processing and the spectral image detection processing may be made to differ, or only one of the two may be performed.
 Embodiments to which the present invention is applied, together with their modifications and examples, have been described above, but the present invention is not limited to these embodiments, modifications, and examples as they are; at the implementation stage, the constituent elements can be modified and embodied without departing from the gist of the invention. Various inventions can also be formed by appropriately combining the plurality of constituent elements disclosed in the embodiments, modifications, and examples described above. For example, some constituent elements may be deleted from all the constituent elements described in each embodiment, modification, and example, and constituent elements described in different embodiments and modifications may be combined as appropriate. In this way, various modifications and applications are possible without departing from the gist of the invention.
INDUSTRIAL APPLICABILITY
 As described above, the present invention is suitable for a three-dimensional imaging device that, with a simple structure, can simultaneously achieve three-dimensional resolution of a subject and detection of spectral images.
 This application claims the benefit of Japanese Patent Application No. 2021-70725 filed on April 19, 2021 and Japanese Patent Application No. 2021-211632 filed on December 24, 2021, the contents of which are incorporated herein by reference in their entirety.
 1 light source
 2 beam splitter
 3 subject
 4 mirror
 5 memory
 6 reflection point
 7 reflected light
 8 image sensor
 9 optical axis
 10 curve
 11 Fourier transform processing
 12 two-dimensional filter processing
 13 optical interferometer
 14 optical path difference
 15 frequency
 16 dotted line
 17 dotted line
 18 reference light
 19 reflected light
 21 sweep bandwidth
 22 sweep bandwidth
 23 interference fringe frequency
 24 dotted line
 25 optical path difference
 32 multiplexing unit
 33-1 to 33-n light receiving elements
 34 Fourier transform processing
 36-1 to 36-n complex signals
 37 two-dimensional filter processing
 1001 imaging device
 1003 subject
 1004 scanning mechanism
 1005 objective optical system
 1006, 1026 second slit
 1006a, 1026a aperture
 1007 first collimating optical system
 1008 second beam splitter (splitting unit, combining unit)
 1009 second collimating optical system
 1010 reflector
 1011 first beam splitter
 1012 absorption band
 1013 first collimating optical system
 1014 spectroscope (spectroscopic unit)
 1015, 1027 first slit
 1016 imaging optical system
 1017, 1025 cylindrical optical system
 1019, 1053 two-dimensional light receiving sensor
 1022 condensed light beam
 1023 one end
 1024 other end
 1031 mirror
 1032 overlapping portion
 1039 point light source
 1101, 1201 imaging device
 OA optical axis
 PX, px pixel

Claims (19)

  1. A three-dimensional imaging device comprising:
     a light source that supplies illumination light for illuminating a subject while sweeping the frequency of the light or the frequency of amplitude modulation of the light;
     an optical interferometer that combines reflected light from the subject with reference light to generate interference fringes;
     a two-dimensional detection mechanism that detects the interference fringes as interference fringe signals at two-dimensional detection positions, by means of two-dimensionally arrayed light receiving elements, by one-dimensional scanning of one-dimensionally arrayed light receiving elements, or by two-dimensional scanning of a single light receiving element;
     optical path difference calculation means that calculates, for all the reflection points to be resolved and for each two-dimensional detection position, the optical path difference between the optical path length of the reflected light, reflected at the three-dimensionally distributed reflection points of the subject, from the light source to the two-dimensional detection position of the two-dimensional detection mechanism, and the optical path length of the reference light emitted by the light source, from the light source to the same two-dimensional detection position;
     a detection unit that, by detecting the frequency of the interference fringe signal, resolves the light receiving direction at each two-dimensional detection position and acquires a three-dimensional data sequence; and
     a two-dimensional filter processing unit that performs imaging by electrical processing, using the three-dimensional data sequence acquired by the detection unit and the optical path difference information calculated by the optical path difference calculation means, thereby resolving planes intersecting the light receiving direction,
     wherein the three-dimensional imaging device three-dimensionally resolves the reflection points distributed three-dimensionally on the subject.
  2. The three-dimensional imaging device according to claim 1, wherein the detection unit detects the three-dimensional data sequence by Fourier transforming the interference fringe signal, the three-dimensional data sequence being a complex signal of amplitude and phase, and
     the two-dimensional filter processing unit selects, from the three-dimensional data sequence, a data sequence corresponding to the imaging aperture; extracts from the selected data sequence, using the optical path difference information, the data matching the optical path length from the two-dimensional detection position to the reflection point; multiplies the extracted data by filter coefficients calculated from the optical path difference and accumulates the products, thereby performing the imaging and resolving the reflection point; and likewise resolves the plane intersecting the light receiving direction by convolving the filter coefficients over all the reflection points to be resolved.
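 In signal-processing terms, the filtering of claim 2 is a delay-and-sum (synthetic-aperture) focus over complex data. The sketch below illustrates the idea for a single reflection point, assuming the per-pixel Fourier transform has already produced a complex volume `data` and that the optical path differences `opd` have been precomputed; the linear interpolation, the phase term, and the variable names are illustrative simplifications, not the patent's exact filter.

```python
import numpy as np

def focus_point(data, opd, k0, dz, weights):
    """Delay-and-sum focus at one reflection point.

    data    : complex volume (ny, nx, nz), depth bins along the last
              axis, from the per-pixel FFT of the fringe signals
    opd     : (ny, nx) optical path difference from each detection
              position to the reflection point being resolved
    k0      : centre wavenumber, used to re-align the initial phase
    dz      : depth-bin spacing of the FFT output
    weights : (ny, nx) aperture weighting (apodization)
    """
    pos = opd / dz                       # fractional depth bin per detector
    i0 = np.floor(pos).astype(int)
    frac = pos - i0
    iy, ix = np.indices(opd.shape)
    # Interpolate along the light-receiving direction, here with a
    # simple linear kernel between adjacent depth bins.
    sample = (1 - frac) * data[iy, ix, i0] + frac * data[iy, ix, i0 + 1]
    # Initial-phase matching followed by weighted summation over the
    # aperture; repeating this for every point yields the image.
    return np.sum(weights * sample * np.exp(-1j * k0 * opd))
```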
  3. The three-dimensional imaging device according to claim 2, further comprising:
     a storage unit that stores the three-dimensional data sequence;
     an address generation unit that uses the optical path difference to generate addresses for reading out, from the storage unit, the data matching the optical path length from the detection position of the two-dimensional detection mechanism to the reflection point to be resolved; and
     a filter coefficient generation unit that reads out the data using the addresses and generates the filter coefficients, which perform data interpolation in the light receiving direction, matching of the initial phase, and weighting of the imaging aperture,
     wherein the two-dimensional filter processing unit convolves the filter coefficients with the data of the complex signal.
  4. The three-dimensional imaging device according to claim 3, further comprising correction means that divides the imaging aperture over which the two-dimensional filtering is performed into a plurality of blocks; resolves, for each block, reflection points in the neighborhood of the reflection point to be resolved, by the same processing as the two-dimensional filtering; detects disturbance of the optical wavefront from cross-correlation of the complex signal data of the neighboring reflection points obtained in each block; and corrects the disturbance of the optical wavefront by reflecting it in the address generation by the address generation unit.
  5. The three-dimensional imaging device according to any one of claims 1 to 4, further comprising correction means that detects distortion and fluctuation of the frequency sweep of the light source and corrects, by a phase matching filter, the dispersion of the frequency components of the interference fringes caused by the distortion.
  6. The three-dimensional imaging device according to any one of claims 1 to 5, further comprising identification means that calculates, from the reflection spectra of subjects whose clusters are known, spectral components in descending order of Fisher ratio, and identifies a subject from the reflection spectrum of a subject whose cluster is unknown using the spectral components.
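 As an illustration of the Fisher-ratio ranking, the following sketch assumes two known clusters of reflection spectra stored as NumPy arrays; the two-class Fisher ratio used here is one common definition, and the names are illustrative.

```python
import numpy as np

def fisher_ranked_components(spectra_a, spectra_b):
    """Rank spectral components by two-class Fisher ratio.

    spectra_a, spectra_b : (n_samples, n_wavelengths) reflection
    spectra of two clusters whose identity is known. Returns the
    component indices in descending order of Fisher ratio, i.e.
    between-class separation over within-class variance.
    """
    mu_a, mu_b = spectra_a.mean(axis=0), spectra_b.mean(axis=0)
    var_a, var_b = spectra_a.var(axis=0), spectra_b.var(axis=0)
    fisher = (mu_a - mu_b) ** 2 / (var_a + var_b + 1e-12)
    return np.argsort(fisher)[::-1]

# An unknown spectrum can then be classified from its values at the
# top-ranked components with any standard classifier.
```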
  7. The three-dimensional imaging device according to claim 6, wherein the identification means uses an AI that performs deep learning.
  8. The three-dimensional imaging device according to any one of claims 1 to 7, comprising, in place of the light source, a low-coherence light source and a spectroscope, wherein three-dimensional resolution is performed by the detection unit and the two-dimensional filter processing unit.
  9. The three-dimensional imaging device according to any one of claims 1 to 8, further comprising: a data format creation unit that adds, to the interference fringe signals detected by the two-dimensional detection mechanism, information necessary for three-dimensional resolution and spectrum analysis; and a storage unit that stores, as RAW data, the interference fringe signals to which the information necessary for the three-dimensional resolution and the spectrum analysis has been added.
  10. The three-dimensional imaging device according to claim 9, wherein the information includes the degree of coherence of the light source, the band characteristics (including distortion) and directivity of the frequency sweep, the coordinates of the detection positions of the two-dimensional detection mechanism and the directivity of the light receiving elements, the three-dimensional coordinates of the emission positions of the illumination light and the reference light with respect to the detection positions of the two-dimensional detection mechanism, and information about the subject.
  11. A three-dimensional imaging device comprising:
     a light source unit including a point light source that emits broadband light or broadband wavelength-swept light, an optical system that converts the light emitted from the point light source into linear light, and a first slit that has a first aperture and shapes the linear light;
     a splitting unit that splits the linear light emitted from the light source unit into illumination light and reference light;
     an imaging optical system that forms an image of reflected light from a subject;
     a reflecting unit including a second slit that is placed on the image plane of the imaging optical system and has a second aperture, and a reflector that is arranged conjugate with the second slit and generates the reference light;
     a combining unit that causes the reflected light and the reference light to interfere with each other to generate interference light;
     a spectroscopic unit that disperses the interference light in a crossing direction intersecting the longitudinal direction of the second aperture; and
     a light receiving sensor that is arranged conjugate with the second slit and the reflector and receives the two-dimensional interference fringes generated by the spectroscopic unit,
     wherein the reflector has a length corresponding to the longitudinal length of the second aperture.
  12. The three-dimensional imaging device according to claim 11, further comprising a scanning mechanism for moving at least one of the subject and the imaging range of the imaging device so as to scan the imaging range in the crossing direction.
  13. The imaging device according to claim 11, wherein the light receiving sensor has a shutter function.
  14. The three-dimensional imaging device according to any one of claims 11 to 13, comprising an absorption band that can be substituted for the reflector.
  15. The three-dimensional imaging device according to any one of claims 11 to 14, further comprising a signal processing unit that extracts a predetermined wavelength band component from the interference light, performs a Fourier transform, and generates an image in the predetermined wavelength band component.
  16. The three-dimensional imaging device according to claim 15, wherein the signal processing unit extracts wavelength band components corresponding to the three primary colors from the interference light, performs Fourier transforms to generate image signals of the three primary colors, and generates RGB image signals based on the image signals of the three primary colors.
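 One way to read the processing chain of claim 16 in code: band-limit three regions of the fringe spectrum, Fourier transform each, and map the resulting magnitudes to RGB. The sketch below makes simplifying assumptions (a single fringe signal per pixel, rectangular band masks, illustrative band edges, and the band maximum standing in for the image signal); none of these choices are taken from the embodiment.

```python
import numpy as np

def rgb_from_fringes(fringes, wavelengths,
                     bands=((0.60, 0.70), (0.50, 0.60), (0.43, 0.50))):
    """Build an RGB value from one pixel's interference signal.

    fringes     : 1-D fringe signal sampled across the swept band
    wavelengths : wavelength (um) of each sample of `fringes`
    bands       : (lo, hi) wavelength edges for R, G, B; illustrative
                  values, not the patent's.
    """
    rgb = []
    for lo, hi in bands:
        mask = (wavelengths >= lo) & (wavelengths < hi)
        band_signal = np.where(mask, fringes, 0.0)  # one primary-color band
        depth_profile = np.fft.fft(band_signal)     # Fourier transform of the band
        rgb.append(np.abs(depth_profile).max())     # band image signal (simplified)
    return np.array(rgb) / (max(rgb) + 1e-12)       # normalized RGB triple
```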
  17. The three-dimensional imaging device according to any one of claims 11 to 16, further comprising identification means that calculates a plurality of spectral components from the reflection spectra of subjects whose clusters are known, in descending order of Fisher ratio, and performs identification from the reflection spectrum of a subject whose cluster is unknown using the spectral components.
  18. The three-dimensional imaging device according to any one of claims 11 to 17, wherein the reflecting unit has a collimating lens that collimates the reference light.
  19. The three-dimensional imaging device according to any one of claims 11 to 18, wherein
     the imaging optical system forms an image of the illumination light on a linear irradiation area on the subject,
     the spectroscopic unit divides the interference light according to wavelength components, and
     the light receiving sensor has first pixels, arrayed in a first direction, on which a first interference fringe corresponding to one reflection point constituting the irradiation area and having a first wavelength component is imaged, and second pixels, arrayed in a second direction intersecting the first direction, on which a second interference fringe corresponding to the one reflection point and having a second wavelength component different from the first wavelength component is imaged.

PCT/JP2022/017973 (priority date 2021-04-19, filing date 2022-04-16): WO2022224917A1 (en)

Applications Claiming Priority (4)

- JP2021070725A / JP6918395B1 (en), priority and filing date 2021-04-19: Imaging device
- JP2021-070725, priority date 2021-04-19
- JP2021-211632, priority date 2021-12-24
- JP2021211632A / JP7058901B1 (en), priority and filing date 2021-12-24: 3D imager

Publications (1)

Publication Number: WO2022224917A1 (en)

Family ID: 83723317


Country Status (1): WO (WO2022224917A1, en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024198525A1 (en) * 2023-03-29 2024-10-03 五邑大学 Compression ultrafast three-dimensional imaging method and system, electronic device, and storage medium
CN120181841A * 2025-05-22 2025-06-20 厦门城建市政建设管理有限公司 High-efficiency intelligent sorting system for low-value recyclables from household sources

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0634525A (en) * 1992-07-21 1994-02-08 Olympus Optical Co Ltd High-speed spectrophotometer
JPH0886745A (en) * 1994-09-14 1996-04-02 Naohiro Tanno Spatial interference light wave reflectometer and light wave echography apparatus using the same
JPH08313344A (en) * 1995-05-23 1996-11-29 Shimadzu Corp Spectrometer
JP2002139422A (en) * 2001-09-10 2002-05-17 Fuji Photo Film Co Ltd Light absorption measuring instrument for light scattering medium
JP2005180931A (en) * 2003-12-16 2005-07-07 Nippon Roper:Kk Spectroscopic processing apparatus
JP2011179950A (en) * 2010-03-01 2011-09-15 Nikon Corp Measuring system
WO2012005315A1 (en) * 2010-07-07 2012-01-12 兵庫県 Holographic microscope, microscopic subject hologram image recording method, method of creation of hologram for reproduction of high-resolution image, and method for reproduction of image
US20150168125A1 (en) * 2012-07-30 2015-06-18 Adom, Advanced Optical Technologies Ltd. System for Performing Dual Path, Two-Dimensional Optical Coherence Tomography (OCT)
JP2016180733A (en) * 2015-03-25 2016-10-13 日本分光株式会社 Microspectroscopic device
JP2018017670A (en) * 2016-07-29 2018-02-01 株式会社リコー Spectral characteristic acquisition device, image evaluation device and image formation device
CN108732133A * 2018-04-12 2018-11-02 杭州电子科技大学 In-vivo nondestructive detection system for plant diseases based on optical imaging technology
JP2020182604A (en) * 2019-04-30 2020-11-12 のりこ 安間 Endoscope device capable of high-definition imaging and spectrum analysis
JP6918395B1 (en) * 2021-04-19 2021-08-11 のりこ 安間 Imaging device




Legal Events

- 121: EP: The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 22791690; Country of ref document: EP; Kind code of ref document: A1
- NENP: Non-entry into the national phase. Ref country code: DE
- 122: EP: PCT application non-entry in European phase. Ref document number: 22791690; Country of ref document: EP; Kind code of ref document: A1