WO2021044670A1 - Super-resolution measurement device and super-resolution measurement device operation method
- Publication number
- WO2021044670A1 (PCT/JP2020/018465)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- complex amplitude
- super-resolution
- optical
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/36—Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Definitions
- the technology of the present disclosure relates to a super-resolution measuring device and a method of operating the super-resolution measuring device.
- a technique is known for measuring an object to be measured with a sensor such as a CCD (Charge-Coupled Device) image sensor and obtaining an output image having a resolution exceeding the resolution of the sensor (hereinafter, super-resolution).
- Masao Hamada, "Adaptive Filter Type Super-Resolution", Panasonic Technical Journal Vol. 56 No. 4, Jan. 2011, Internet <URL: https://www.panasonic.com/jp/corporate/technology-design/ptj/pdf/v5604/p0208.pdf> describes a super-resolution technique in which a super-resolution component is estimated by applying magnifying filter processing to an image obtained by measuring the object to be measured with a sensor.
- That is, in Hamada's "Adaptive Filter Type Super-Resolution", the super-resolution component is estimated by magnifying filter processing from an image that does not contain the super-resolution component.
- the super-resolution component is acquired by moving the sensor in the horizontal direction at a pitch smaller than the pixel size.
- the super-resolution component is estimated by magnifying filter processing.
- a super-resolution component is acquired by diffusing the input light wave of the object to be measured. Therefore, according to the super-resolution technology studied by the present inventors, a super-resolution output image can be obtained without moving the sensor and without relying on estimation by magnifying filter processing.
- the image quality of the super-resolution output image may be deteriorated.
- the present inventors have found that one of the causes of this deterioration in image quality is that the amount of information contained in the diffused input light wave is large.
- An object of the technique of the present disclosure is to provide a super-resolution measuring device and a method of operating a super-resolution measuring device capable of reducing the image quality deterioration that occurs in a super-resolution output image.
- the super-resolution measuring apparatus of the present disclosure includes a diffuser member that diffuses the input light wave of the object to be measured, and a partial transmission mask that partially transmits the diffused input light wave.
- a sensor that measures the intensity and phase of the input light wave transmitted through the partial transmission mask and outputs an optical complex amplitude image, and
- a virtual optical system that generates an output image by performing, on a computer, a calculation process that reproduces from the optical complex amplitude image the input light wave of the object to be measured containing a super-resolution component, which is a component with a resolution exceeding the resolution of the sensor, the calculation process including a phase conjugation function of the transmission function of the diffuser member.
- it is preferable that the virtual optical system generates a first intermediate image from the optical complex amplitude image, generates a second intermediate image from a reference image of the object to be measured, and generates the output image based on the image obtained by subtracting the second intermediate image from the first intermediate image.
- it is preferable that the real optical system has an optical complex amplitude image acquisition optical path for obtaining the optical complex amplitude image, which reaches the sensor via the diffuser member, and a reference image acquisition optical path for obtaining the reference image, which reaches the sensor without passing through the diffuser member.
- it is preferable that the reference image acquisition optical path branches off on the object-to-be-measured side of the diffusion member of the optical complex amplitude image acquisition optical path. In this case, it is preferable that the reference image acquisition optical path merges on the sensor side of the diffusion member of the optical complex amplitude image acquisition optical path. Further, in the reference image acquisition optical path, it is preferable that the input light wave is focused to a size that fits in the transmission portion of the partial transmission mask.
- the optical complex amplitude image acquisition optical path may also serve as the reference image acquisition optical path: the diffusion member and the partial transmission mask are moved between an approach position in which they are inserted into the optical complex amplitude image acquisition optical path and a retracted position in which they are withdrawn from it, and the optical complex amplitude image acquisition optical path preferably functions as the reference image acquisition optical path when the diffusion member and the partial transmission mask are in the retracted position.
- the diffusing member is preferably a random diffusing plate that randomly diffuses the input light wave.
- the diffuser member may be a spatial light modulator whose diffusion characteristics for the input light wave can be changed. In this case, it is preferable that the sensor measures the intensity and phase of a plurality of types of input light waves whose diffusion characteristics are changed by the spatial light modulator and outputs a plurality of optical complex amplitude images having different diffusion characteristics, and that the virtual optical system generates an intermediate image from each of the plurality of optical complex amplitude images and obtains, as the output image, an image obtained by adding and averaging the plurality of intermediate images.
- the actual optical system preferably includes a pair of Fourier transform lenses arranged on the object to be measured side of the diffusing member and on the sensor side of the diffusing member.
- it is preferable that the diffusing member is a random diffusing plate that randomly diffuses the input light wave and that is provided, in its central portion, with a light-shielding portion that cuts the low spatial frequency components formed by the Fourier transform lens arranged on the object-to-be-measured side.
- the diffuser member may be a random diffuser plate that randomly diffuses the input light wave and that is provided, in its central portion, with an opening that transmits the low spatial frequency components formed by the Fourier transform lens arranged on the object-to-be-measured side. In this case, it is preferable that the real optical system is provided with a condensing lens that collects the low spatial frequency components transmitted through the opening onto the transmission portion of the partial transmission mask.
- in the method of operating the super-resolution measuring device of the present disclosure, a real optical system including a diffuser member that diffuses the input light wave of the object to be measured and a partial transmission mask that partially transmits the diffused input light wave is used; a sensor measures the intensity and phase of the input light wave transmitted through the partial transmission mask and outputs an optical complex amplitude image; and a virtual optical system generates, by calculation processing on a computer, an output image reproducing the input light wave of the object to be measured containing a super-resolution component, which is a component with a resolution exceeding the resolution of the sensor.
- FIG. 4A shows the case where there is no random diffuser
- FIG. 4B shows the case where there is a random diffuser.
- A figure showing the state in which the optical complex amplitude image is obtained using the optical complex amplitude image acquisition optical path. A figure showing the state in which the reference image is obtained using the reference image acquisition optical path.
- FIG. 20A is a diagram showing an input image of the first embodiment
- FIG. 20A shows an intensity image of the input image
- FIG. 20B shows a phase image of the input image.
- FIG. 21A shows the intensity image of the optical complex amplitude image
- FIG. 21B shows the phase image of the optical complex amplitude image.
- FIG. 22A is a diagram showing a first intermediate image of the first embodiment
- FIG. 22A shows an intensity image of the first intermediate image
- FIG. 22B shows a phase image of the first intermediate image
- FIG. 23A is a diagram showing a reference image of the first embodiment
- FIG. 23A shows an intensity image of the reference image
- FIG. 23B shows a phase image of the reference image.
- FIG. 24A shows the intensity image of the 2nd intermediate image
- FIG. 24B shows the phase image of the 2nd intermediate image.
- FIG. 25A shows the intensity image of the subtraction image
- FIG. 25B shows the phase image of the subtraction image.
- FIG. 26A is a diagram showing an output image of the first embodiment
- FIG. 26A shows an intensity image of the output image
- FIG. 26B shows a phase image of the output image
- FIG. 27A is a diagram showing another example of the input image of the first embodiment
- FIG. 27A shows an intensity image of the input image
- FIG. 27B shows a phase image of the input image.
- A figure showing the reference image in the case of the input image of FIG.
- FIG. 29A shows the intensity image of the optical complex amplitude image
- FIG. 29B shows the phase image of the optical complex amplitude image.
- FIG. 30A shows the intensity image of the output image
- FIG. 30B shows the phase image of the output image.
- A figure showing another example of a real optical system.
- A figure showing another example of a real optical system.
- A figure showing the real optical system of the second embodiment.
- A figure showing the generation unit of the second embodiment.
- A figure collectively showing the processing of the second embodiment.
- A figure showing the phase image of the output image of Example 2.
- A figure showing the real optical system of the third embodiment.
- A figure showing the random diffusion plate of the third embodiment.
- A figure showing the input image of Example 3; FIG. 39A shows the intensity image of the input image, and FIG. 39B shows the phase image of the input image.
- A figure showing the high spatial frequency component of the input image shown in FIG. 39; FIG. 40A shows the intensity image of the high spatial frequency component, and FIG. 40B shows the phase image of the high spatial frequency component.
- FIG. 41 is a diagram showing an output image of the third embodiment; FIG. 41A shows the intensity image of the output image, and FIG. 41B shows the phase image of the output image.
- A figure showing the real optical system of the fourth embodiment.
- A figure showing the random diffusion plate of the fourth embodiment.
- A figure showing the generation unit of the fourth embodiment.
- A figure showing another example of a partial transmission mask.
- A figure showing another example of a partial transmission mask.
- the super-resolution measuring device 10 includes a stage 11, an actual optical system 12, a sensor 13, and a processing device 14.
- the object to be measured 15 is set on the stage 11.
- the real optical system 12 takes in the input light wave of the object to be measured 15 set on the stage 11 and guides it to the sensor 13.
- the sensor 13 measures the intensity and phase of the input light wave guided by the real optical system 12 and outputs an optical complex amplitude image 54 (see FIG. 6).
- the sensor 13 transmits the optical complex amplitude image 54 to the processing device 14.
- the processing device 14 is, for example, a desktop personal computer.
- the processing device 14 generates an output image 76 (see FIG. 9) based on the optical complex amplitude image 54 from the sensor 13.
- the real optical system 12 has an optical complex amplitude image acquisition optical path 20 and a reference image acquisition optical path 21.
- the optical complex amplitude image acquisition optical path 20 is an optical path for obtaining an optical complex amplitude image 54.
- the reference image acquisition optical path 21 is an optical path for obtaining a reference image 61 (see FIG. 7) of the object to be measured 15.
- a random diffusion plate 22 which is an example of the “diffusion member” according to the technique of the present disclosure is arranged in the optical complex amplitude image acquisition optical path 20.
- the random diffuser plate 22 is not arranged in the reference image acquisition optical path 21. That is, the optical complex amplitude image acquisition optical path 20 is an optical path that reaches the sensor 13 via the random diffuser plate 22.
- the reference image acquisition optical path 21 is an optical path that reaches the sensor 13 without passing through the random diffuser plate 22.
- the random diffusion plate 22 has an uneven surface 23 on which random irregularities are formed.
- the random diffusion plate 22 is arranged with the uneven surface 23 facing the sensor 13 side.
- the random diffuser plate 22 diffuses the input light wave of the object to be measured 15.
- In addition to the random diffuser plate 22, a beam splitter 25, a Fourier transform lens 26, a Fourier transform lens 27, a beam splitter 28, and a partial transmission mask 29 are arranged in the optical complex amplitude image acquisition optical path 20.
- the beam splitter 25 transmits the input light wave of the object to be measured 15 toward the Fourier transform lens 26. Further, the beam splitter 25 reflects the input light wave of the object to be measured 15 toward the mirror 30 of the reference image acquisition optical path 21 by 90 °. The beam splitter 25 branches the reference image acquisition optical path 21 from the object to be measured 15 side of the random diffuser plate 22 of the optical complex amplitude image acquisition optical path 20.
- the Fourier transform lenses 26 and 27 are arranged at positions sandwiching the random diffuser plate 22. More specifically, the Fourier transform lens 26 is arranged closer to the object to be measured 15 than the random diffuser 22. The Fourier transform lens 27 is arranged closer to the sensor 13 than the random diffuser 22.
- the Fourier transform lenses 26 and 27 are defined as lenses having the property that, when a transmitting object is placed on the front focal plane of the lens and illuminated from behind the transparent object with a plane wave having a uniform intensity (amplitude) distribution, the intensity distribution of the diffraction image obtained on the rear focal plane of the lens is represented by the Fourier transform of the intensity distribution of the object.
- the beam splitter 28 transmits the input light wave of the object to be measured 15 diffused by the random diffuser plate 22 toward the partial transmission mask 29. Further, the beam splitter 28 reflects the input light wave of the object to be measured 15 from the reference image acquisition optical path 21 toward the partial transmission mask 29 by 90 °. By this beam splitter 28, the reference image acquisition optical path 21 merges on the sensor 13 side of the random diffuser plate 22 of the optical complex amplitude image acquisition optical path 20.
- a mirror 30, a lens 31, a lens 32, and a mirror 33 are arranged in the reference image acquisition optical path 21.
- the mirror 30 reflects the input light wave of the object to be measured 15 from the beam splitter 25 toward the lens 31 by 90 °.
- the lenses 31 and 32 collect the input light wave of the object to be measured 15 into a size that fits in the transmission portion 35 (see also FIG. 5) of the partial transmission mask 29.
- the mirror 33 reflects the input light wave of the object to be measured 15 focused by the lenses 31 and 32 toward the beam splitter 28 of the optical complex amplitude image acquisition optical path 20 by 90 °.
- a shutter 36 is provided between the Fourier transform lens 27 of the optical complex amplitude image acquisition optical path 20 and the beam splitter 28.
- a shutter 37 is also provided between the mirror 33 of the reference image acquisition optical path 21 and the beam splitter 28 of the optical complex amplitude image acquisition optical path 20.
- the shutter 36 is movable between an approach position that has entered the optical complex amplitude image acquisition optical path 20 and a retracted position that has been retracted from the optical complex amplitude image acquisition optical path 20.
- the shutter 37 is movable between the approach position that has entered the reference image acquisition optical path 21 and the retracted position that has been retracted from the reference image acquisition optical path 21.
- When the optical complex amplitude image 54 is obtained using the optical complex amplitude image acquisition optical path 20, the shutter 36 is moved to the retracted position and the shutter 37 to the approach position, as shown in FIG. On the other hand, when the reference image 61 is obtained using the reference image acquisition optical path 21, the shutter 36 is moved to the approach position and the shutter 37 to the retracted position, as shown in FIG. In this way, the optical complex amplitude image acquisition optical path 20 and the reference image acquisition optical path 21 are used selectively.
- the positions where the shutters 36 and 37 are provided are not limited to the positions exemplified above.
- a shutter 36 may be provided between the beam splitter 25 and the Fourier transform lens 26, or a shutter 37 may be provided between the beam splitter 25 and the mirror 30.
- FIG. 4 conceptually shows the action of the random diffusion plate 22.
- FIG. 4A shows the case where the random diffuser plate 22 is not present
- FIG. 4B shows the case where the random diffuser plate 22 is present.
- When the random diffuser plate 22 is not present, the input light wave (indicated by the dashed arrow) from the region 41 for one pixel of the object to be measured 15 is incident on one of the plurality of pixels 40 of the sensor 13. That is, there is a one-to-one relationship between the region 41 for one pixel of the object to be measured 15 and the pixel 40 of the sensor 13.
- When the random diffuser plate 22 is present, on the other hand, the input light wave from the region 41 for one pixel of the object to be measured 15 is diffused by the random diffuser plate 22 and is incident on a large number of pixels 40 of the sensor 13. That is, the region 41 for one pixel of the object to be measured 15 and the pixels 40 of the sensor 13 have a one-to-many relationship. In this way, since the input light wave from the region 41 for one pixel of the object to be measured is incident on a large number of pixels 40 of the sensor 13, the final output image 76 contains super-resolution image components exceeding the resolution of the sensor 13.
- the partial transmission mask 29 has a square shape in which a transmission portion 35, which is a square hole, is formed in the central portion.
- the portion of the partial transmission mask 29 other than the transmission portion 35 blocks the input light wave of the object to be measured 15. Therefore, when the optical complex amplitude image acquisition optical path 20 is used, the input light wave of the object to be measured 15 incident on the sensor 13 is limited to 1/2 or less, more preferably 1/4 or less, of the input light wave of the object to be measured 15 diffused by the random diffuser plate 22.
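- As a rough numerical illustration of this limit (a sketch assuming the diffused input light wave is approximately uniform over the mask, so the transmitted fraction is simply the area ratio of the square transmission portion 35):

```python
# Rough check of the "1/2 or less, more preferably 1/4 or less" limit, assuming
# the diffused input light wave is approximately uniform over the mask, so the
# transmitted fraction equals the area ratio of the square transmission portion 35.
def transmitted_fraction(side_ratio: float) -> float:
    """side_ratio = (side of transmission portion 35) / (side of mask 29)."""
    return side_ratio ** 2

print(transmitted_fraction(0.5))    # 0.25  -> the "1/4 or less" case
print(transmitted_fraction(0.707))  # ~0.5  -> upper end of the "1/2 or less" case
```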
- In the reference image acquisition optical path 21, the input light wave of the object to be measured 15 is focused by the lenses 31 and 32 to a size that fits in the transmission portion 35, as described above.
- the input light wave of the object to be measured 15 incident on the sensor 13 is not limited by the partial transmission mask 29.
- the partial transmission mask 29 is not limited to a square shape.
- it may be, for example, circular.
- the transmission portion 35 is not limited to a square shape, and may be, for example, a circular shape.
- As the sensor 13, for example, a general sensor for phase-shift digital holography is used.
- the sensor 13 is a CCD image sensor, a CMOS (Complementary Metal Oxide Semiconductor) image sensor, or the like.
- the type of the sensor 13 is not particularly limited as long as it can measure the spatial distribution of the optical complex amplitude of the input light wave.
- FIG. 6 shows how the optical complex amplitude image 54 is obtained by using the optical complex amplitude image acquisition optical path 20.
- When the optical complex amplitude of the input image 50 represented by the input light wave of the object to be measured 15 is a_R(x, y) and the complex Fourier transform function of the Fourier transform lens 26 is F_R{*}, the optical complex amplitude after the complex Fourier transform can be expressed as A_R(X, Y) = F_R{a_R(x, y)}.
- the transmission function of the random diffuser plate 22 is φ_R(X, Y).
- the optical complex amplitude of the image 52 transmitted through the random diffuser plate 22 is A_R(X, Y) × φ_R(X, Y).
- the method is not necessarily limited to the one using the random diffuser plate 22; light whose phase is spatially randomly modulated, such as transmitted light or scattered light obtained by irradiating the object to be measured 15, may be used.
- the optical complex amplitude of the image 53 after the complex Fourier transform can be expressed as F_R{A_R(X, Y) × φ_R(X, Y)}.
- the optical complex amplitude of the optical complex amplitude image 54 measured by the sensor 13 through the partial transmission mask 29 can be expressed as Rect_R{F_R{A_R(X, Y) × φ_R(X, Y)}}.
- Rect_R{*} takes the value of * in the transmission portion 35 of the partial transmission mask 29, and takes the value 0 in the peripheral light-shielding portion other than the transmission portion 35.
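- The chain of expressions above can be mimicked numerically. Below is a minimal sketch, not the embodiment itself: a centered FFT stands in for the complex Fourier transform F_R{*}, a random phase screen for the transmission function φ_R(X, Y), and a centered square window for Rect_R{*}; the grid size, the input field, and the window fraction are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256  # illustrative grid size

def F(u):
    """Centered 2-D complex Fourier transform (stand-in for F_R{*})."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u), norm="ortho"))

def rect_mask(shape, frac=0.25):
    """Rect_R{*}: 1 inside a centered square covering `frac` of each axis, 0 elsewhere."""
    m = np.zeros(shape)
    h, w = shape
    hh, hw = int(h * frac) // 2, int(w * frac) // 2
    m[h // 2 - hh:h // 2 + hh, w // 2 - hw:w // 2 + hw] = 1.0
    return m

# Hypothetical input image 50 and phase-only diffuser transmission function
a_R = rng.random((N, N)) * np.exp(1j * 2 * np.pi * rng.random((N, N)))
phi_R = np.exp(1j * 2 * np.pi * rng.random((N, N)))

A_R = F(a_R)                            # complex Fourier transform by lens 26
img52 = A_R * phi_R                     # image 52 transmitted through the diffuser 22
img53 = F(img52)                        # image 53 after the Fourier transform lens 27
img54 = rect_mask(img52.shape) * img53  # optical complex amplitude image 54 behind mask 29
```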
- FIG. 7 shows how the reference image 61 is obtained by using the reference image acquisition optical path 21.
- the Fourier transform lenses 26 and 27 and the random diffuser 22 are not arranged in the reference image acquisition optical path 21.
- the input light wave of the object to be measured 15 is focused to a size that fits in the transmission portion 35. Therefore, the optical complex amplitude of the reference image 61 measured by the sensor 13 can be represented by b_R(x, y), which is the same as the optical complex amplitude of the input image 60 represented by the input light wave of the object to be measured 15.
- the resolution of the measured reference image 61 is limited to the resolution of the sensor 13. That is, the reference image 61 does not have a super-resolution component exceeding the resolution of the sensor 13.
- the computer constituting the processing device 14 includes a storage device 65, a memory 66, a CPU (Central Processing Unit) 67, a communication unit 68, a display 69, and an input device 70. These are interconnected via a bus line 71.
- the storage device 65 is a hard disk drive built in the computer constituting the processing device 14 or connected via a cable or a network. Alternatively, the storage device 65 is a disk array in which a plurality of hard disk drives are connected in series. The storage device 65 stores control programs such as an operating system, various application programs, and various data associated with these programs. A solid state drive may be used instead of the hard disk drive.
- the memory 66 is a work memory for the CPU 67 to execute a process.
- the CPU 67 comprehensively controls each part of the computer by loading the program stored in the storage device 65 into the memory 66 and executing the processing according to the program.
- the communication unit 68 is a network interface that controls the transmission of various information via various networks.
- the display 69 displays various screens.
- the computer constituting the processing device 14 receives input of an operation instruction from the input device 70 through various screens.
- the input device 70 is a keyboard, a mouse, a touch panel, or the like.
- the operation program 75 is stored in the storage device 65.
- the operation program 75 is an application program for operating the computer as the processing device 14.
- the storage device 65 also stores an optical complex amplitude image 54, a reference image 61, an output image 76, and transfer function information 77.
- the CPU 67 of the computer constituting the processing device 14 cooperates with the memory 66 and the like to function as an image acquisition unit 80, a read/write (hereinafter abbreviated as RW (Read Write)) control unit 81, and a generation unit 82.
- the image acquisition unit 80 acquires the optical complex amplitude image 54 and the reference image 61 from the sensor.
- the image acquisition unit 80 outputs the acquired optical complex amplitude image 54 and the reference image 61 to the RW control unit 81.
- the RW control unit 81 controls the storage of various data in the storage device 65 and the reading of various data from the storage device 65.
- the RW control unit 81 stores the optical complex amplitude image 54 and the reference image 61 from the image acquisition unit 80 in the storage device 65. Further, the RW control unit 81 reads out the optical complex amplitude image 54 and the reference image 61 from the storage device 65 and outputs them to the generation unit 82.
- the RW control unit 81 reads the transfer function information 77 from the storage device 65 and outputs it to the generation unit 82. Further, the RW control unit 81 stores the output image 76 from the generation unit 82 in the storage device 65.
- the generation unit 82 generates the output image 76 by performing calculation processing on the optical complex amplitude image 54.
- the calculation process is a process of reproducing, from the optical complex amplitude image 54, the input light wave of the object to be measured 15 including a super-resolution component, which is a component having a resolution exceeding the resolution of the sensor 13, and is performed on the computer constituting the processing device 14.
- This calculation process includes the transmission function φ_V1(X, Y) of the virtual random diffusion plate 107, which is an example of the "phase conjugation function of the transmission function of the diffusion member" described later.
- the output image 76 is generated as an image showing the input light wave including the super-resolution component. Therefore, the generation unit 82 is an example of the “virtual optical system” according to the technique of the present disclosure.
- the generation unit 82 includes a first intermediate image generation unit 85, a second intermediate image generation unit 86, and a subtraction unit 87.
- An optical complex amplitude image 54 is input to the first intermediate image generation unit 85.
- the first intermediate image generation unit 85 generates the first intermediate image 88 (see also FIG. 11) from the optical complex amplitude image 54.
- the first intermediate image generation unit 85 outputs the first intermediate image 88 to the subtraction unit 87.
- the reference image 61 is input to the second intermediate image generation unit 86.
- the second intermediate image generation unit 86 generates a second intermediate image 89 (see also FIG. 14) from the reference image 61.
- the second intermediate image generation unit 86 outputs the second intermediate image 89 to the subtraction unit 87.
- the subtraction unit 87 subtracts the second intermediate image 89 from the first intermediate image 88.
- the subtraction unit 87 generates the output image 76 based on the image obtained by subtracting the second intermediate image 89 from the first intermediate image 88.
- a plurality of transfer functions are registered in the transfer function information 77. These transfer functions are used when the first intermediate image generation unit 85 generates the first intermediate image 88 and when the second intermediate image generation unit 86 generates the second intermediate image 89.
- F_V1⁻¹{*} is the complex inverse Fourier transform function of the virtual Fourier transform lenses 106 and 108 (see FIG. 12).
- F_V1⁻¹{*} is the inverse function of the complex Fourier transform function F_R{*} of the Fourier transform lenses 26 and 27 of the real optical system 12.
- φ_V1(X, Y) is the transmission function of the virtual random diffuser plate 107 (see FIG. 12).
- φ_V1(X, Y) is a function realized virtually on the computer that, in contrast to the action of the random diffuser plate 22 diffusing the input light wave of the object to be measured 15 in the real optical system 12, returns the diffused input light wave to the input light wave before diffusion.
- F_V2{*} is the complex Fourier transform function of the virtual Fourier transform lenses 131 and 133 (see FIG. 15).
- F_V2{*} is a function equivalent to the complex Fourier transform function F_R{*} of the Fourier transform lenses 26 and 27 of the real optical system 12.
- φ_V2(X, Y) is the transmission function of the virtual random diffuser plate 132 (see FIG. 15).
- φ_V2(X, Y) is a function equivalent to the transmission function φ_R(X, Y) of the random diffuser plate 22 of the real optical system 12.
- Rect_V{*} is the transmission function of the virtual partial transmission mask 134 (see FIG. 15).
- Rect_V{*} is a function equivalent to the transmission function Rect_R{*} of the partial transmission mask 29 of the real optical system 12.
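- On a computer, the transfer functions listed above can be represented roughly as follows. This is a sketch under the assumptions that F_V2{*} is a centered FFT, F_V1⁻¹{*} its inverse, the diffuser is phase-only so that the phase conjugation function φ_V1(X, Y) is the complex conjugate of φ_R(X, Y) (with φ_V2(X, Y) being φ_R(X, Y) itself), and the window fraction of Rect_V{*} is illustrative.

```python
import numpy as np

def F_V2(u):
    """Complex Fourier transform, equivalent to F_R{*} of the lenses 26 and 27."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u), norm="ortho"))

def F_V1_inv(u):
    """Complex inverse Fourier transform of the virtual lenses 106 and 108."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(u), norm="ortho"))

def phi_V1(phi_R):
    """Phase conjugation function of the diffuser transmission function phi_R (phase-only assumed)."""
    return np.conj(phi_R)

def Rect_V(u, frac=0.25):
    """Transmission function of the virtual partial transmission mask 134 (window fraction illustrative)."""
    m = np.zeros(u.shape)
    h, w = u.shape
    hh, hw = int(h * frac) // 2, int(w * frac) // 2
    m[h // 2 - hh:h // 2 + hh, w // 2 - hw:w // 2 + hw] = 1.0
    return m * u
```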
- the first intermediate image generation unit 85 includes a fast Fourier inverse transform unit 95, a calculation unit 96, and a fast Fourier inverse transform unit 97.
- the inverse fast Fourier transform unit 95 applies an inverse fast Fourier transform to the optical complex amplitude image 54 using the complex inverse Fourier transform function F_V1⁻¹{*} of the virtual Fourier transform lens 106.
- the optical complex amplitude of the image 98 after the inverse fast Fourier transform can be expressed approximately as F_V1⁻¹{Rect_R{F_R{A_R(X, Y) × φ_R(X, Y)}}} ≈ A_SR(X, Y) × φ_R(X, Y).
- A_SR (X, Y) contains a super-resolution component.
- the calculation unit 96 multiplies the optical complex amplitude A_SR(X, Y) × φ_R(X, Y) of the image 98 after the inverse fast Fourier transform by the transmission function φ_V1(X, Y) of the virtual random diffusion plate 107.
- the optical complex amplitude of the calculated image 99 consequently becomes A_SR(X, Y).
- the calculation process performed by the generation unit 82 thus includes the transmission function φ_V1(X, Y) of the virtual random diffusion plate 107, which is an example of the "phase conjugation function of the transmission function of the diffusion member".
- the inverse fast Fourier transform unit 97 applies an inverse fast Fourier transform to the calculated image 99 using the complex inverse Fourier transform function F_V1⁻¹{*} of the virtual Fourier transform lens 108. As a result, the first intermediate image 88 is generated.
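- A minimal numerical sketch of this chain (units 95, 96, and 97). The FFT convention and the assumption that φ_V1 is the complex conjugate of a phase-only φ_R are carried over from the earlier sketches, and the function and argument names are hypothetical.

```python
import numpy as np

def F_inv(u):
    """Centered inverse FFT, standing in for F_V1⁻¹{*}."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(u), norm="ortho"))

def first_intermediate(img54, phi_R):
    """Optical complex amplitude image 54 -> first intermediate image 88 (a_SR)."""
    img98 = F_inv(img54)            # approx. A_SR(X, Y) * phi_R(X, Y)   (unit 95)
    img99 = img98 * np.conj(phi_R)  # multiply by phi_V1(X, Y)           (unit 96)
    return F_inv(img99)             # a_SR(x, y)                         (unit 97)
```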
- FIG. 12 shows a first virtual optical system 105 equivalent to the processing of the first intermediate image generation unit 85.
- the first virtual optical system 105 includes a virtual Fourier transform lens 106, a virtual random diffuser plate 107, and a virtual Fourier transform lens 108.
- the action of the virtual Fourier transform lens 106 is equivalent to the processing of the fast Fourier inverse transform unit 95.
- the action of the virtual random diffuser 107 is equivalent to the processing of the calculation unit 96.
- the action of the virtual Fourier transform lens 108 is equivalent to the processing of the fast Fourier inverse transform unit 97.
- the second intermediate image generation unit 86 includes a fast Fourier transform unit 110, a calculation unit 111, a fast Fourier transform unit 112, a calculation unit 113, an inverse fast Fourier transform unit 114, a calculation unit 115, and an inverse fast Fourier transform unit 116.
- the fast Fourier transform unit 110 performs a fast Fourier transform on the reference image 61 using the complex Fourier transform function F_V2{*} of the virtual Fourier transform lens 131.
- the calculation unit 111 multiplies the optical complex amplitude B_R(X, Y) of the image 117 after the fast Fourier transform by the transmission function φ_V2(X, Y) of the virtual random diffusion plate 132.
- the optical complex amplitude of the calculated image 118 can be expressed as B_R(X, Y) × φ_V2(X, Y).
- the fast Fourier transform unit 112 performs a fast Fourier transform on the calculated image 118 using the complex Fourier transform function F_V2{*} of the virtual Fourier transform lens 133.
- the optical complex amplitude of the image 119 after the fast Fourier transform can be expressed as F_V2{B_R(X, Y) × φ_V2(X, Y)}.
- the calculation unit 113 uses the transmission function Rect_V{*} of the virtual partial transmission mask 134 to perform a calculation equivalent to that of the partial transmission mask 29 on the image 119 after the fast Fourier transform.
- the optical complex amplitude of the calculated image 120 can be expressed as Rect_V{F_V2{B_R(X, Y) × φ_V2(X, Y)}}.
- the inverse fast Fourier transform unit 114, like the inverse fast Fourier transform unit 95 of the first intermediate image generation unit 85, applies an inverse fast Fourier transform to the calculated image 120 using the complex inverse Fourier transform function F_V1⁻¹{*} of the virtual Fourier transform lens 106.
- the optical complex amplitude of the image 121 after the inverse fast Fourier transform can be expressed approximately as F_V1⁻¹{Rect_V{F_V2{B_R(X, Y) × φ_V2(X, Y)}}} ≈ B_NSR(X, Y) × φ_V2(X, Y).
- B_NSR (X, Y) does not contain a super-resolution component.
- the calculation unit 115 multiplies the optical complex amplitude B_NSR(X, Y) × φ_V2(X, Y) of the image 121 after the inverse fast Fourier transform by the transmission function φ_V1(X, Y) of the virtual random diffusion plate 107.
- the inverse fast Fourier transform unit 116 applies an inverse fast Fourier transform to the calculated image 122 using the complex inverse Fourier transform function F_V1⁻¹{*} of the virtual Fourier transform lens 108. As a result, the second intermediate image 89 is generated.
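- A corresponding sketch of the second intermediate image generation (units 110 to 116), under the same assumptions as the earlier sketches: φ_V2 is the same array as the diffuser transmission function φ_R, the diffuser is phase-only, and the window fraction is illustrative.

```python
import numpy as np

def F(u):
    """Centered FFT, standing in for F_V2{*}."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u), norm="ortho"))

def F_inv(u):
    """Centered inverse FFT, standing in for F_V1⁻¹{*}."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(u), norm="ortho"))

def rect(u, frac=0.25):
    """Rect_V{*}: keep a centered square window, zero the surroundings (fraction illustrative)."""
    m = np.zeros(u.shape)
    h, w = u.shape
    hh, hw = int(h * frac) // 2, int(w * frac) // 2
    m[h // 2 - hh:h // 2 + hh, w // 2 - hw:w // 2 + hw] = 1.0
    return m * u

def second_intermediate(b_R, phi_V2):
    """Reference image 61 -> second intermediate image 89 (b_NSR)."""
    img117 = F(b_R)                    # B_R(X, Y)                                (unit 110)
    img118 = img117 * phi_V2           # B_R * phi_V2                             (unit 111)
    img119 = F(img118)                 # F_V2{B_R * phi_V2}                       (unit 112)
    img120 = rect(img119)              # Rect_V{F_V2{B_R * phi_V2}}               (unit 113)
    img121 = F_inv(img120)             # approx. B_NSR * phi_V2                   (unit 114)
    img122 = img121 * np.conj(phi_V2)  # multiply by phi_V1 = conj(phi_V2)        (unit 115)
    return F_inv(img122)               # b_NSR(x, y)                              (unit 116)
```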
- FIG. 15 shows a second virtual optical system 130 equivalent to the processing of the second intermediate image generation unit 86.
- the second virtual optical system 130 is composed of a virtual Fourier transform lens 131, a virtual random diffuser plate 132, a virtual Fourier transform lens 133, and a virtual partial transmission mask 134.
- the action of the virtual Fourier transform lens 131 is equivalent to the processing of the fast Fourier transform unit 110.
- the action of the virtual random diffuser 132 is equivalent to the processing of the calculation unit 111.
- the action of the virtual Fourier transform lens 133 is equivalent to the processing of the fast Fourier transform unit 112.
- the action of the virtual partial transparency mask 134 is equivalent to the processing of the calculation unit 113.
- the virtual Fourier transform lens 131, the virtual random diffuser 132, the virtual Fourier transform lens 133, and the virtual partial transmission mask 134 are equivalent to the Fourier transform lens 26, the random diffuser 22, the Fourier transform lens 27, and the partial transmission mask 29 of the real optical system 12, respectively. That is, the processing of the fast Fourier transform unit 110, the calculation unit 111, the fast Fourier transform unit 112, and the calculation unit 113 of the second intermediate image generation unit 86 is equivalent to a virtualization of the action of the optical complex amplitude image acquisition optical path 20 of the real optical system 12.
- the subtraction unit 87 subtracts the optical complex amplitude b_NSR(x, y) of the second intermediate image 89 from the optical complex amplitude a_SR(x, y) of the first intermediate image 88. More specifically, the real parts and the imaginary parts of a_SR(x, y) and b_NSR(x, y) are subtracted. As a result, an image that approximates the result of subtracting the real and imaginary parts of the optical complex amplitudes a_R(x, y) and b_R(x, y) is obtained, from which the output image 76 (O(x, y)) approximating the original input image 50 can be obtained.
- Re{*} represents the real part and Im{*} represents the imaginary part.
- the normalization constant C can be obtained by examining in advance how much the intensity of the optical complex amplitude a_SR(x0, y0) of the first intermediate image 88 decreases when a point light source a_R(x0, y0) is placed at the center (x0, y0).
- C² = |a_R(x0, y0)|² / |a_SR(x0, y0)|²
- the normalization constant C is stored in the storage device 65 by the RW control unit 81. Further, the normalization constant C is read from the storage device 65 by the RW control unit 81 and output to the subtraction unit 87.
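- Combining the subtraction described here with the generation step described later (multiplying the difference by C and adding the reference image 61), the subtraction unit 87 can be sketched as follows; the function and argument names are hypothetical.

```python
import numpy as np

def output_image(a_SR, b_NSR, b_R, C):
    """Subtraction unit 87: O(x, y) = C * (a_SR - b_NSR) + b_R.
    Subtracting the complex arrays subtracts their real and imaginary parts."""
    return C * (a_SR - b_NSR) + b_R

def normalization_constant(a_R_point, a_SR_point):
    """C determined in advance from a point light source at the center (x0, y0):
    C**2 = |a_R(x0, y0)|**2 / |a_SR(x0, y0)|**2."""
    return np.sqrt(np.abs(a_R_point) ** 2 / np.abs(a_SR_point) ** 2)
```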
- the process of generating the second intermediate image 89 from the reference image 61, and the process of generating, in the subtraction unit 87, the output image 76 based on the image obtained by subtracting the second intermediate image 89 from the first intermediate image 88, correspond to steps of the method of operating the super-resolution measuring device.
- the object to be measured 15 is set on the stage 11 (step ST100). Then, as shown in FIG. 2, the shutter 36 is set to the retracted position and the shutter 37 is set to the approach position, and the input light wave of the object to be measured 15 is guided to the optical complex amplitude image acquisition optical path 20 (step ST110).
- the input light wave of the object to be measured 15 passes through the Fourier transform lens 26 and is subjected to complex Fourier transform by the Fourier transform lens 26.
- the input light wave after the complex Fourier transform passes through the random diffuser plate 22 and is diffused by the random diffuser plate 22.
- the diffused input light wave passes through the Fourier transform lens 27 and is subjected to complex Fourier transform by the Fourier transform lens 27.
- the input light wave after the complex Fourier transform passes through the transmission portion 35 of the partial transmission mask 29 and reaches the sensor 13.
- the optical complex amplitude image 54 is output from the sensor 13 (step ST120).
- the optical complex amplitude image 54 is sent from the sensor 13 to the image acquisition unit 80 and acquired by the image acquisition unit 80. Then, it is stored in the storage device 65 by the RW control unit 81 (step ST130).
- the shutter 36 is set to the approach position and the shutter 37 is set to the retracted position, and the input light wave of the object to be measured 15 is guided to the reference image acquisition optical path 21 (step ST140).
- the input light wave of the object to be measured 15 passes through the lenses 31 and 32, and is focused by the lenses 31 and 32 to a size that fits in the transmission portion 35 of the partial transmission mask 29.
- the collected input light wave of the object to be measured 15 passes through the transmission portion 35 of the partial transmission mask 29 and reaches the sensor 13.
- the reference image 61 is output from the sensor 13 (step ST150).
- the reference image 61 is sent from the sensor 13 to the image acquisition unit 80 and acquired by the image acquisition unit 80. Then, it is stored in the storage device 65 by the RW control unit 81 (step ST160).
- the RW control unit 81 reads out the optical complex amplitude image 54 and the reference image 61 from the storage device 65 and outputs them to the generation unit 82.
- the first intermediate image generation unit 85 generates the first intermediate image 88 (step ST170).
- the second intermediate image generation unit 86 generates the second intermediate image 89 (step ST180).
- the first intermediate image 88 and the second intermediate image 89 are output to the subtraction unit 87.
- the subtraction unit 87 subtracts the second intermediate image 89 from the first intermediate image 88, and the output image 76 is generated (step ST190). More specifically, the output image 76 is generated by multiplying the first intermediate image 88 minus the second intermediate image 89 by the normalization constant C and further adding the reference image 61.
- the output image 76 is sent from the subtraction unit 87 to the RW control unit 81, and is stored in the storage device 65 by the RW control unit 81 (step ST200).
- the output image 76 stored in the storage device 65 is transmitted to another device via the communication unit 68 or displayed on the display 69.
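- As a usage note, the processing-device side of this flow (steps ST170 to ST190) can be tied together as below, reusing the `first_intermediate` and `second_intermediate` sketch functions given earlier; the diffuser transmission function is assumed to be known to the processing device, and the names are hypothetical.

```python
def generate_output(img54, b_R, phi_R, C):
    """Sketch of steps ST170-ST190 of the flow above."""
    a_SR = first_intermediate(img54, phi_R)   # step ST170: first intermediate image 88
    b_NSR = second_intermediate(b_R, phi_R)   # step ST180: second intermediate image 89 (phi_V2 = phi_R)
    return C * (a_SR - b_NSR) + b_R           # step ST190: output image 76
```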
- the super-resolution measuring device 10 includes an actual optical system 12, a sensor 13, and a generation unit 82.
- the real optical system 12 includes a random diffuser 22 that diffuses the input light wave of the object 15 to be measured, and a partial transmission mask 29 that partially transmits the diffused input light wave.
- the sensor 13 measures the intensity and phase of the input light wave transmitted through the partial transmission mask 29, and outputs an optical complex amplitude image 54.
- the generation unit 82 generates the output image 76 by performing, on a computer, a calculation process that reproduces from the optical complex amplitude image 54 the input light wave of the object to be measured 15 including the super-resolution component, which is a component having a resolution exceeding the resolution of the sensor 13, the calculation process including the phase conjugation function φ_V1(X, Y) of the transmission function of the random diffuser plate 22.
- Since the partial transmission mask 29 limits the input light wave of the object to be measured 15 incident on the sensor 13, it is possible to reduce the image quality deterioration that occurs in the super-resolution output image 76.
- the input light wave of the object to be measured 15 diffused by the random diffuser plate 22 is limited. Then, the limited input light wave is measured by the sensor 13 at the same resolution as when the reference image 61 is obtained. Therefore, finer changes in the wavefront of the input light wave can be captured. In other words, compared to the case where the input image 50 is photographed at the same magnification, the resolution is equivalently two or four times higher.
- the super-resolution component is contained in these finer changes in the wavefront of the input light wave. Therefore, by limiting the input light wave of the object to be measured 15 with the partial transmission mask 29, the super-resolution component can be obtained efficiently.
- the generation unit 82 generates the first intermediate image 88 from the optical complex amplitude image 54, and generates the second intermediate image 89 from the reference image 61. Then, the output image 76 is generated based on the image obtained by subtracting the second intermediate image 89 from the first intermediate image 88.
- the first intermediate image 88 contains a noise component in addition to the super-resolution component.
- the noise component is mainly due to a measurement error due to the finite size and resolution of the sensor 13. The measurement error is noticeable because the input image 50 is diffused by the random diffuser 22.
- the noise component is also caused by processing errors of the inverse fast Fourier transform unit 95, the calculation unit 96, and the like, and by distortion applied to the input light wave transmitted through the transmission portion 35 of the partial transmission mask 29.
- the second intermediate image 89 does not contain a super-resolution component, but contains a noise component. Therefore, by subtracting the second intermediate image 89 from the first intermediate image 88, only the noise component can be removed from the first intermediate image 88. Therefore, the deterioration of the image quality of the output image 76 can be further reduced.
- the actual optical system 12 has an optical complex amplitude image acquisition optical path 20 that reaches the sensor 13 via the random diffuser plate 22 and a reference image acquisition optical path 21 that reaches the sensor 13 without passing through the random diffuser plate 22. Therefore, the optical complex amplitude image 54 and the reference image 61 can be easily obtained in a short time.
- the reference image acquisition optical path 21 is branched on the object-to-be-measured 15 side of the random diffusion plate 22 of the optical complex amplitude image acquisition optical path 20. Further, the reference image acquisition optical path 21 merges on the sensor 13 side of the random diffuser plate 22 of the optical complex amplitude image acquisition optical path 20. Therefore, it is not necessary to prepare two sensors 13 for the optical complex amplitude image acquisition optical path 20 and the reference image acquisition optical path 21; a single sensor 13 suffices.
- the lenses 31 and 32 focus the input light wave of the object to be measured 15 to a size that fits in the transmission portion 35 of the partial transmission mask 29. Therefore, it is not necessary to provide a mechanism for retracting the partial transmission mask 29 from the optical path when the reference image 61 is obtained using the reference image acquisition optical path 21 and returning it to the optical path when the optical complex amplitude image 54 is obtained using the optical complex amplitude image acquisition optical path 20. An increase in component cost and in the size of the real optical system 12 can thus be avoided.
- the actual optical system 12 uses a random diffuser plate 22 that randomly diffuses an input light wave as a diffuser member. Therefore, the input light wave of the object to be measured 15 can be incident on a larger number of pixels 40 of the sensor 13, and the super-resolution component included in the output image 76 can be increased.
- In the following, Example 1, based on a numerical simulation of the super-resolution measuring device 10 of the first embodiment, will be shown.
- FIG. 19 is Table 140 showing various parameters in the first embodiment.
- the light wavelength used is the wavelength of light emitted from the light source of the sensor 13, and is 532 nm.
- the number of pixels of the input light wave is 512 ⁇ 512, and the pixel size is 20 ⁇ m.
- the calculated oversampling rate for the input light wave is 4, and the zero padding rate is 2.
- the number of pixels of the random diffuser 22 is 4096 ⁇ 4096, the pixel size is 5 ⁇ m, and the phase gradation is 256.
- the number of pixels of the sensor 13 is 256 ⁇ 256, and the pixel size is 5 ⁇ m.
- the input image 50 has four times the resolution of the sensor 13. It should be noted that these various parameters are also followed in the following Examples 2 and 3.
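- For reference, the parameters of Table 140 can be collected into a single configuration record; the field names below are hypothetical and only echo the values listed above.

```python
# Illustrative record of the Example 1 simulation parameters (values from Table 140;
# the field names are hypothetical).
example1_params = {
    "wavelength_nm": 532,
    "input_pixels": (512, 512),
    "input_pixel_size_um": 20,
    "oversampling_rate": 4,
    "zero_padding_rate": 2,
    "diffuser_pixels": (4096, 4096),
    "diffuser_pixel_size_um": 5,
    "diffuser_phase_levels": 256,
    "sensor_pixels": (256, 256),
    "sensor_pixel_size_um": 5,
}
```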
- FIG. 20 shows an example of the input image 50 (a_R (x, y)) used in the first embodiment.
- FIG. 20A shows the intensity image 50AM of the input image 50
- FIG. 20B shows the phase image 50PH of the input image 50.
- the input image 50 is an image in which minute marks 145 are attached to the central portion and the left corner portion of the white background, as shown by the intensity image 50AM.
- the mark 145 is a black dot surrounded by a black square frame.
- Reference numeral 146 in FIG. 20A indicates an intensity scale
- reference numeral 147 in FIG. 20B indicates a phase scale.
- FIG. 21 shows the optical complex amplitude image 54 (Rect_R{F_R{A_R(X, Y) × φ_R(X, Y)}}) obtained by measuring the input image 50 shown in FIG. 20 with the sensor 13 using the optical complex amplitude image acquisition optical path 20 as shown in FIG.
- FIG. 21A shows the intensity image 54AM of the optical complex amplitude image 54
- FIG. 21B shows the phase image 54PH of the optical complex amplitude image 54.
- the square-shaped region in the center indicates a portion that has passed through the transmission portion 35 of the partial transmission mask 29.
- the transmission portion 35 has a size of (1/8) × (1/8) of the input image 50.
- FIG. 22 shows a first intermediate image 88 (a_SR (x, y)) generated by subjecting the optical complex amplitude image 54 shown in FIG. 21 to the processing shown in FIGS. 11 and 12.
- FIG. 22A shows the intensity image 88AM of the first intermediate image 88
- FIG. 22B shows the phase image 88PH of the first intermediate image 88.
- the first intermediate image 88 no longer retains the original form of the input image 50, but contains a super-resolution component.
- the first intermediate image 88 also includes a noise component.
- FIG. 23 is a reference image 61 (b_R (x, y)) obtained by measuring with the sensor 13 using the reference image acquisition optical path 21 as shown in FIG.
- FIG. 23A shows the intensity image 61AM of the reference image 61
- FIG. 23B shows the phase image 61PH of the reference image 61.
- the reference image 61 is an image that embodies the resolution of the sensor 13 as it is, and is an image in which the super-resolution component included in the input image 50 is missing. Therefore, the reference image 61 is an image in which the outline of the mark 145 is unclear or the mark 145 itself is not reflected as compared with the input image 50 shown in FIG.
- FIG. 24 is a second intermediate image 89 (b_NSR (x, y)) generated by performing the processes shown in FIGS. 13 to 15 with respect to the reference image 61 shown in FIG. 23.
- FIG. 24A shows the intensity image 89AM of the second intermediate image 89
- FIG. 24B shows the phase image 89PH of the second intermediate image 89.
- the second intermediate image 89 no longer retains the original shape of the reference image 61, but contains a noise component.
- FIG. 25 is an image (hereinafter referred to as a subtracted image) 148 obtained by subtracting the second intermediate image 89 shown in FIG. 24 from the first intermediate image 88 shown in FIG. 22.
- FIG. 25A shows the intensity image 148AM of the subtraction image 148
- FIG. 25B shows the phase image 148PH of the subtraction image 148.
- FIG. 26 is an output image 76 (O (x, y)) obtained by performing the processing shown in FIG.
- FIG. 26A shows the intensity image 76AM of the output image 76
- FIG. 26B shows the phase image 76PH of the output image 76. According to FIG. 26, it can be seen that the mark 145 is accurately reproduced in both intensity and phase.
- FIG. 27 shows another example of the input image 50 (a_R (x, y)).
- FIG. 27A shows the intensity image 50AM of the input image 50
- FIG. 27B shows the phase image 50PH of the input image 50.
- the input image 50 is an image in which minute marks 150 are added to the central portion and the left corner portion of the black background, as shown by the intensity image 50AM.
- the mark 150 is a square horizontal line drawn in the center of a white frame as shown by the intensity image 50AM.
- in the phase image 50PH, the mark 150 is a black dot surrounded by a black square frame.
- the input image 50 shown in FIG. 27 has an extremely small proportion of the signal light wave (the portion where the pixel value is not 0) in the input light wave as compared with the input image 50 shown in FIG.
- a solid black image shown in FIG. 28 is used as the reference image 61 in the case of such an input image 50.
- the optical complex amplitude b_R (x, y) of the reference image 61 of such a solid black image is 0.
- FIG. 29 shows the optical complex amplitude image 54 (Rect_R{F_R{A_R(X, Y) × φ_R(X, Y)}}) obtained by measuring the input image 50 shown in FIG. 27 with the sensor 13 using the optical complex amplitude image acquisition optical path 20 as shown in FIG.
- FIG. 29A shows the intensity image 54AM of the optical complex amplitude image 54
- FIG. 29B shows the phase image 54PH of the optical complex amplitude image 54. Similar to FIG. 21, the central square region indicates a portion that has passed through the transmission portion 35 of the partial transmission mask 29.
- FIG. 30 is an output image 76 (O (x, y)) in the case of the input image 50 of FIG. 27.
- FIG. 30A shows the intensity image 76AM of the output image 76
- FIG. 30B shows the phase image 76PH of the output image 76. According to FIG. 30, it can be seen that the mark 150 of the input image 50 is accurately reproduced in terms of both intensity and phase, as in the case of FIG. 26.
- it was thus confirmed that the super-resolution measuring device 10 of the first embodiment has the effect of reducing the image quality deterioration that occurs in the super-resolution output image 76.
- the reference image acquisition optical path 162 is branched from the object to be measured 15 side of the random diffuser plate 22 of the optical complex amplitude image acquisition optical path 161 as in the real optical system 12. However, the reference image acquisition optical path 162 does not merge on the sensor 13 side of the random diffuser plate 22 of the optical complex amplitude image acquisition optical path 161.
- Two sensors 13 are provided, a sensor 13A for the optical complex amplitude image acquisition optical path 161 and a sensor 13B for the reference image acquisition optical path 162.
- the number of measurement pixels (resolution) of the sensor 13A is equal to the number of measurement pixels (resolution) of the sensor 13B.
- the optical complex amplitude image acquisition optical path 161 has the same configuration as the optical complex amplitude image acquisition optical path 20 of the actual optical system 12 except that the beam splitter 28 and the shutter 36 are not arranged.
- the reference image acquisition optical path 162 has a configuration fundamentally different from that of the reference image acquisition optical path 21 of the actual optical system 12. More specifically, the reference image acquisition optical path 162 does not have the mirror 33 and the shutter 37 arranged.
- unlike the lenses 31 and 32 of the reference image acquisition optical path 21 of the real optical system 12, the lenses 163 and 164 are not lenses that act to condense the input light wave of the object to be measured 15 to a size that fits in the transmission portion 35 of the partial transmission mask 29.
- in the real optical system 160, unlike the real optical system 12, it is not necessary to provide the shutters 36 and 37 or a mechanism for moving them to the approach position and the retracted position. It is also not necessary to use lenses 31 and 32 having the special condensing action. Further, the optical complex amplitude image 54 and the reference image 61 can be obtained at the same time.
- the optical complex amplitude image acquisition optical path 171 and the reference image acquisition optical path 172 are used in combination.
- the random diffuser plate 22 and the partial transmission mask 29 are moved between an approach position, indicated by the solid line, in which they are inserted into the optical complex amplitude image acquisition optical path 171 and the reference image acquisition optical path 172, and a retracted position, indicated by the broken line, in which they are withdrawn from those optical paths.
- when the random diffuser plate 22 and the partial transmission mask 29 shown in FIG. 32 are in the approach position, the path functions as the optical complex amplitude image acquisition optical path 171.
- when the random diffuser plate 22 and the partial transmission mask 29 are in the retracted position, the path functions as the reference image acquisition optical path 172.
- in the real optical system 170, optical members such as the beam splitters 25 and 28 and the mirrors 30 and 33 of the real optical system 12 are not required. Further, unlike the real optical system 12, it is not necessary to provide the shutters 36 and 37 or a mechanism for moving them to the approach position and the retracted position. Further, as with the real optical system 12, a single sensor 13 suffices.
- the step of obtaining the reference image 61 using the reference image acquisition optical path 21 is unnecessary.
- the solid black image shown in FIG. 28 may be used as the reference image 61.
- consider, for example, using the super-resolution measuring device 10 for product defect inspection to detect minute scratches, which cannot be discriminated at the resolution of the sensor 13, on the surface of a mirror-finished product. In this case, since product design data for the scratch-free state exists, the design data can be diverted to the reference image 61.
- when a real object capable of generating the reference image 61 is available, that real object may be measured as the object to be measured 15 using, for example, the optical complex amplitude image acquisition optical path 20 of the real optical system 12.
- in that case, an image represented by Rect_R{F_R{B_R(X, Y) × Φ_R(X, Y)}} can be obtained.
- this image is equivalent to the image 120 represented by Rect_V{F_V2{B_R(X, Y) × Φ_V2(X, Y)}} output from the calculation unit 113 of the second intermediate image generation unit 86.
- the step of obtaining the reference image 61 using the reference image acquisition optical path 21 is unnecessary.
- the sensor 13 can obtain an image represented by Rect_R{F_R{B_R(X, Y) × Φ_R(X, Y)}}; therefore, among the processes of the second intermediate image generation unit 86 shown in FIGS. 13 to 15, the processes of the fast Fourier transform unit 110, the calculation unit 111, the fast Fourier transform unit 112, and the calculation unit 113 are unnecessary.
- the object to be measured 15 may be an actual image as illustrated in FIG. 20 or the like, or may be an image displayed on a display.
- the random diffusion plate 22 is not limited to the one on which the uneven surface 23 is formed.
- a resin containing fine particles may be applied to the surface. In short, it suffices as long as it can sufficiently diffuse the input light wave of the object to be measured 15. Further, the Fourier transform lenses 26 and 27 may be omitted.
- the partial transmission mask 29 may be configured to move between an approach position, in which it is inserted into the optical complex amplitude image acquisition optical path 20, and a retracted position, in which it is withdrawn from the optical complex amplitude image acquisition optical path 20.
- when the optical complex amplitude image 54 is obtained using the optical complex amplitude image acquisition optical path 20, the partial transmission mask 29 is moved to the approach position; when the reference image 61 is obtained using the reference image acquisition optical path 21, the partial transmission mask 29 is moved to the retracted position. In this way, the lenses 31 and 32 need not condense the input light wave of the object to be measured 15 to a size that fits in the transmission portion 35 of the partial transmission mask 29.
- the second to fourth embodiments shown below are configured to eliminate the need for the reference image 61 itself.
- a spatial light modulator 182 capable of changing the diffusion characteristics of the input light wave is used as the diffusion member.
- the real optical system 180 of the second embodiment has a Fourier transform lens 26, a beam splitter 181 and a spatial light modulator 182, a Fourier transform lens 27, and a partial transmission mask 29.
- the beam splitter 181 transmits the input light wave of the object to be measured 15 that has passed through the Fourier transform lens 26 toward the spatial light modulator 182. Further, the beam splitter 181 reflects the input light wave phase-modulated by the phase pattern displayed on the spatial light modulator 182 toward the Fourier transform lens 27 by 90 °.
- the spatial light modulator 182 is also called an SLM (Spatial Light Modulator), and is composed of, for example, an LCD (Liquid Crystal Display), an LCOS (Liquid Crystal on Silicon), a DMD (Digital Mirror Device), or the like.
- the spatial light modulator 182 can change the displayed phase pattern in various ways.
- like the generation unit 82 of the first embodiment, the generation unit 185 of the second embodiment generates the output image 76 by performing, on a computer, calculation processing that reproduces, from the optical complex amplitude image 54, the input light wave of the object to be measured 15 including the super-resolution component, which is a component with a resolution exceeding the resolution of the sensor 13. That is, the generation unit 185 is an example of the "virtual optical system" according to the technique of the present disclosure.
- the generation unit 185 has an intermediate image generation unit 186 and an addition averaging unit 187.
- the intermediate image generation unit 186 uses the phase conjugate of the transmission function of the spatial light modulator 182, instead of the transmission function Φ_V1(X, Y) of the virtual random diffuser plate 107 used in the first intermediate image generation unit 85 of the first embodiment, and generates from the optical complex amplitude image 54 an intermediate image 188 (a_SR(x, y)) corresponding to the first intermediate image 88 of the first embodiment.
- the intermediate image generation unit 186 outputs the intermediate image 188 to the addition averaging unit 187.
- the addition averaging unit 187 averages the intermediate images 188 and outputs the averaged image as the output image 76.
- the phase pattern displayed in the spatial light modulator 182 is changed to phase patterns 1, 2, 3, ..., N.
- the sensor 13 outputs optical complex amplitude images 54_1, 54_2, 54_3, ..., 54_N each time the phase pattern displayed by the spatial light modulator 182 is changed.
- the intermediate image generation unit 186 generates intermediate images 188_1, 188_2, 188_3, ..., 188_N each time the phase pattern displayed in the spatial light modulator 182 is changed.
- the addition averaging unit 187 adds the intermediate images 188_1, 188_2, 188_3, ..., 188_N.
- the addition averaging unit 187 divides the added image by N as shown by reference numeral 190.
- in the second embodiment, the calculation process for generating the output image 76 performed by the generation unit 185 corresponds to the process of generating the intermediate images 188 in the intermediate image generation unit 186 and the process of computing their addition average in the addition averaging unit 187.
- the calculation process performed by the generation unit 185 includes the phase conjugate of the transmission function of the spatial light modulator 182, which is an example of the "phase conjugation function of the transmission function of the diffusion member".
- the spatial light modulator 182 capable of changing the diffusion characteristics of the input light wave is used as the diffusion member.
- the sensor 13 measures the optical complex amplitudes of a plurality of types of input light waves whose diffusion characteristics are changed by the spatial light modulator 182, and outputs a plurality of optical complex amplitude images 54_1 to 54_N having different diffusion characteristics of the input light waves.
- the generation unit 185 generates intermediate images 188_1 to 188_N from each of the plurality of optical complex amplitude images 54_1 to 54_N, and adds and averages the plurality of intermediate images 188_1 to 188_N to obtain the output image 76.
- the optical complex amplitude images 54_1 to 54_N, and thus the intermediate images 188_1 to 188_N, contain super-resolution components having slightly different contents.
- the output image 76, which is the addition average of the intermediate images 188_1 to 188_N, therefore contains a clearer super-resolution component than the output image 76 obtained when a single random diffuser plate 22 is used. Therefore, even if the reference image 61 is not used, the image quality deterioration that occurs in the output image 76 can be sufficiently reduced.
- a random diffusion plate 201 in which the light-shielding portion 202 is provided in the central portion is used as the diffusion member.
- the real optical system 200 of the third embodiment has a Fourier transform lens 26, a random diffuser 201, a Fourier transform lens 27, and a partial transmission mask 29.
- a light-shielding portion 202 is provided in the central portion of the random diffusion plate 201.
- the light-shielding portion 202 cuts the low spatial frequency components formed by the Fourier transform lens 26, so that only the high spatial frequency components formed by the Fourier transform lens 26 are diffused.
- the optical complex amplitude image 54 output from the sensor 13 lacks the low spatial frequency component of the input light wave of the object to be measured 15.
- Reference numeral 203 is an uneven surface.
- the generation unit of the third embodiment is the same as the first intermediate image generation unit 85, except that the phase conjugate of the transmission function of the random diffuser plate 201 is used instead of the transmission function Φ_V1(X, Y) of the virtual random diffuser plate 107. The illustration of the generation unit of the third embodiment is therefore omitted.
- like the generation unit 82 of the first embodiment and the generation unit 185 of the second embodiment, the generation unit of the third embodiment generates the output image 76 by performing, on a computer, calculation processing that reproduces, from the optical complex amplitude image 54, the input light wave of the object to be measured 15 including the super-resolution component, which is a component with a resolution exceeding the resolution of the sensor 13. That is, the generation unit of the third embodiment is an example of the "virtual optical system" according to the technique of the present disclosure.
- the generation unit of the third embodiment performs substantially the same processing as the first intermediate image generation unit 85 on the optical complex amplitude image 54 output from the sensor 13. Then, the image obtained by this is output as the output image 76.
- the calculation process for generating the output image 76 performed by the generation unit corresponds to substantially the same processing as the first intermediate image generation unit 85 performed on the optical complex amplitude image 54.
- the calculation process performed by the generation unit of the third embodiment includes the phase conjugation function of the transmission function of the random diffusion plate 201, which is an example of the “phase conjugation function of the transmission function of the diffusion member”.
- in the third embodiment, a random diffuser plate 201 provided, in its central portion, with a light-shielding portion 202 that cuts the low spatial frequency components formed by the Fourier transform lens 26 arranged on the object-to-be-measured-15 side is used as the diffusion member.
- the super-resolution component is hardly contained in the low-spatial frequency component, and most of it is contained in the high-spatial frequency component.
- in the input image 50 described earlier, for example, the white background portion without the mark 145 becomes a low spatial frequency component through the Fourier transform lens 26, and the portion with the mark 145 becomes a high spatial frequency component through the Fourier transform lens 26. Therefore, if the low spatial frequency components, which are considered to contribute little to the super-resolution component, are cut by the light-shielding portion 202, most of the resolution of the sensor 13 can be allocated to the high spatial frequency components. Therefore, even if the reference image 61 is not used, the image quality deterioration that occurs in the output image 76 can be sufficiently reduced.
- FIG. 39 shows an input image 50 used in the numerical simulation of the third embodiment.
- FIG. 39A shows the intensity image 50AM of the input image 50
- FIG. 39B shows the phase image 50PH of the input image 50.
- the input image 50 is an image in which the mark 210 is attached to the central portion and the left corner portion of the white background, similarly to the input image 50 described earlier.
- FIG. 40 shows the high spatial frequency component 205 of the input image 50 shown in FIG. 39.
- FIG. 40A shows the intensity image 205AM of the high spatial frequency component 205
- FIG. 40B shows the phase image 205PH of the high spatial frequency component 205.
- the mark 210 can also be confirmed in the high spatial frequency component 205.
- FIG. 41 is an output image 76 with respect to the input image 50 shown in FIG. 39.
- FIG. 41A shows the intensity image 76AM of the output image 76
- FIG. 41B shows the phase image 76PH of the output image 76. According to the output image 76, it can be seen that the mark 210 is accurately reproduced.
- a random diffusion plate 221 having an opening 223 provided in the central portion is used as the diffusion member.
- the actual optical system 220 of the fourth embodiment has a Fourier transform lens 26, a random diffuser plate 221 and a condenser lens 222, a Fourier transform lens 27, and a partial transmission mask 29.
- an opening 223 is provided in the central portion of the random diffuser plate 221. Through this aperture 223, the low spatial frequency component by the Fourier transform lens 26 is transmitted without being diffused, and only the high spatial frequency component by the Fourier transform lens 26 is diffused.
- the condenser lens 222 concentrates the low spatial frequency component transmitted through the aperture 223 on the transmission portion 35 of the partial transmission mask 29.
- the optical complex amplitude image 54 output from the sensor 13 includes a diffused high spatial frequency component and an undiffused low spatial frequency component of the input light wave of the object to be measured 15.
- Reference numeral 224 is an uneven surface.
- like the generation unit 82 of the first embodiment, the generation unit 185 of the second embodiment, and the generation unit of the third embodiment, the generation unit 230 of the fourth embodiment generates the output image 76 by performing, on a computer, calculation processing that reproduces, from the optical complex amplitude image 54, the input light wave of the object to be measured 15 including the super-resolution component, which is a component with a resolution exceeding the resolution of the sensor 13. That is, the generation unit 230 is an example of the "virtual optical system" according to the technique of the present disclosure.
- the generation unit 230 has a component separation unit 231, a first processing unit 232, a second processing unit 233, and a synthesis unit 234.
- the component separation unit 231 separates the high spatial frequency component 205 and the low spatial frequency component 235 of the optical complex amplitude image 54 output from the sensor 13.
- the component separation unit 231 outputs the high spatial frequency component 205 to the first processing unit 232 and the low spatial frequency component 235 to the second processing unit 233, respectively.
- the first processing unit 232 performs substantially the same processing as the first intermediate image generation unit 85 on the high spatial frequency component 205 of the optical complex amplitude image 54. Then, the processed high spatial frequency component 205PR obtained thereby is output to the synthesis unit 234.
- the second processing unit 233 performs substantially the same processing as the first intermediate image generation unit 85 on the low spatial frequency component 235 of the optical complex amplitude image 54. Then, the processed low spatial frequency component 235PR obtained thereby is output to the synthesis unit 234.
- the first processing unit 232 and the second processing unit 233 use the phase conjugation function of the transmission function of the random diffusion plate 221 instead of the transmission function ⁇ _V1 (X, Y) of the virtual random diffusion plate 107.
- the synthesis unit 234 synthesizes the processed high spatial frequency component 205PR and the processed low spatial frequency component 235PR, and outputs the combined image as an output image 76.
- in the fourth embodiment, the calculation process for generating the output image 76 performed by the generation unit 230 corresponds to the separation of the high spatial frequency component 205 and the low spatial frequency component 235 of the optical complex amplitude image 54 in the component separation unit 231, the processing of the high spatial frequency component 205 in the first processing unit 232 (substantially the same as that of the first intermediate image generation unit 85), the processing of the low spatial frequency component 235 in the second processing unit 233 (likewise substantially the same as that of the first intermediate image generation unit 85), and the synthesis of the processed high spatial frequency component 205PR and the processed low spatial frequency component 235PR in the synthesis unit 234.
- the calculation process performed by the generation unit 230 includes the phase conjugation function of the transmission function of the random diffusion plate 221 which is an example of the "phase conjugation function of the transmission function of the diffusion member".
- in the fourth embodiment, a random diffuser plate 221 provided, in its central portion, with an opening 223 that transmits the low spatial frequency components formed by the Fourier transform lens 26 arranged on the object-to-be-measured-15 side is used as the diffusion member. A condenser lens 222 that concentrates the low spatial frequency components transmitted through the opening 223 onto the transmission portion 35 of the partial transmission mask 29 is arranged in the real optical system 220.
- although the low spatial frequency components formed by the Fourier transform lens 26 contain almost no super-resolution component, they are still part of the input light wave of the object to be measured 15. Therefore, in the fourth embodiment, the low spatial frequency components that were cut in the third embodiment are taken into the sensor 13 through the opening 223 and the condenser lens 222. However, if the low spatial frequency components were diffused by the random diffuser plate 221 in the same manner as the high spatial frequency components, the limited resolution of the sensor 13 would be wasted on them. Therefore, in the fourth embodiment, the opening 223 that transmits the low spatial frequency components is formed in the random diffuser plate 221, and the low spatial frequency components are guided to the sensor 13 without being diffused. Therefore, even if the reference image 61 is not used, the image quality deterioration that occurs in the output image 76 can be sufficiently reduced.
- the partial transmission mask 29 shown in each of the above embodiments, in which the transmission portion 35 is formed as a hole, is an example, and the present disclosure is not limited thereto.
- for example, the peripheral portion may be coated with a light-shielding material while the central portion is left uncoated to form a transmission portion 241.
- the transmissive portion does not necessarily have to be formed in the central portion of the partial transmissive mask.
- the transmission portion 251 may be displaced from the central portion.
- in the present specification, "A and/or B" is synonymous with "at least one of A and B". That is, "A and/or B" may mean A alone, B alone, or a combination of A and B. Further, in the present specification, when three or more matters are connected by "and/or", the same concept as "A and/or B" is applied.
Abstract
This super-resolution measurement device comprises: a real optical system comprising a diffusion member for diffusing input light waves from an object under measurement and a partial transmission mask for partially transmitting the diffused input light waves; a sensor for measuring the intensity and phase of the input light waves that have passed through the partial transmission mask and outputting an optical complex-amplitude image; and a virtual optical system for generating an output image by using a computer to carry out computation processing that is for reproducing, from the optical complex-amplitude image, input light waves from the object under measurement that include a super resolution component, which is a component of a resolution exceeding the resolution of the sensor, and includes a phase conjugation function of the transmission function of the diffusion member.
Description
The technology of the present disclosure relates to a super-resolution measuring device and a method of operating the super-resolution measuring device.

A technique (hereinafter, super-resolution technology) for measuring an object to be measured with a sensor such as a CCD (Charge-Coupled Device) image sensor and obtaining an output image with a resolution exceeding the resolution of the sensor (hereinafter, super-resolution) is known.
Ideshita et al., "Development of super-resolution measurement algorithm by digital holography", Proceedings of the 2009 Autumn Meeting of the Japan Society for Precision Engineering, Internet <URL: https://www.jstage.jst.go.jp/article/pscjspe/2009A/0/2009A_0_687/_article/-char/ja/>, describes a super-resolution technique using phase-shift digital holography, which is one of the optical complex amplitude measurement techniques. In "Development of super-resolution measurement algorithm by digital holography" by Ideshita et al., in order to improve the horizontal resolution, the sensor is moved horizontally to several positions at a pitch smaller than the pixel size, thereby acquiring components with a resolution exceeding the resolution of the sensor (hereinafter, super-resolution components) that cannot be acquired with the sensor fixed. A super-resolution output image is then obtained by synthesizing the holograms measured at each position to which the sensor is moved. Note that the optical complex amplitude expresses the vibration of a light wave as a complex number.

Masao Hamada, "Adaptive Filter Type Super-Resolution", Panasonic Technical Journal Vol. 56 No. 4, Jan. 2011, Internet <URL: https://www.panasonic.com/jp/corporate/technology-design/ptj/pdf/v5604/p0208.pdf>, describes a super-resolution technique that estimates a super-resolution component by applying magnifying filter processing to an image obtained by measuring the object to be measured with a sensor. That is, in Hamada's "Adaptive Filter Type Super-Resolution", the super-resolution component is estimated by magnifying filter processing from an image that does not contain the super-resolution component.
In the technique described in "Development of super-resolution measurement algorithm by digital holography" by Ideshita et al., it is necessary to move the sensor in the horizontal direction. In the technique described in Hamada's "Adaptive Filter Type Super-Resolution", the super-resolution component is merely an estimate produced by magnifying filter processing, and there is no guarantee that it accurately reflects the actual super-resolution component. The present inventors have therefore been studying an approach in which the input light wave of the object to be measured is diffused and made incident on the sensor, the optical complex amplitude of the diffused input light wave is measured with the sensor so that the super-resolution component is actually measured by the sensor, and a super-resolution output image is generated based on the image obtained in this way.

That is, in "Development of super-resolution measurement algorithm by digital holography" by Ideshita et al., the super-resolution component is acquired by moving the sensor in the horizontal direction at a pitch smaller than the pixel size, and in Hamada's "Adaptive Filter Type Super-Resolution", the super-resolution component is estimated by magnifying filter processing. In contrast, in the super-resolution technique studied by the present inventors, the super-resolution component is acquired by diffusing the input light wave of the object to be measured. Therefore, according to this technique, a super-resolution output image can be obtained without moving the sensor and without relying on estimation by magnifying filter processing.

However, in the super-resolution technique studied by the present inventors, in which the input light wave of the object to be measured is diffused, image quality deterioration sometimes occurred in the super-resolution output image. The present inventors have found that one cause of this deterioration is the large amount of information contained in the diffused input light wave.
An object of the technology of the present disclosure is to provide a super-resolution measuring device and a method of operating a super-resolution measuring device capable of reducing the image quality deterioration that occurs in a super-resolution output image.

In order to achieve the above object, the super-resolution measuring device of the present disclosure comprises: a real optical system including a diffusion member that diffuses the input light wave of an object to be measured and a partial transmission mask that partially transmits the diffused input light wave; a sensor that measures the intensity and phase of the input light wave transmitted through the partial transmission mask and outputs an optical complex amplitude image; and a virtual optical system that generates an output image by performing, on a computer, calculation processing that reproduces, from the optical complex amplitude image, the input light wave of the object to be measured including a super-resolution component, which is a component with a resolution exceeding the resolution of the sensor, the calculation processing including a phase conjugation function of the transmission function of the diffusion member.
Preferably, the virtual optical system generates a first intermediate image from the optical complex amplitude image, generates a second intermediate image from a reference image of the object to be measured, and generates the output image based on an image obtained by subtracting the second intermediate image from the first intermediate image.
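As an illustration only (not taken from the patent), the following Python/NumPy sketch mimics this preferred flow with crude stand-ins: centered FFTs in place of the Fourier transform lenses, a random phase screen in place of the diffusion member, and a centered window in place of the partial transmission mask. Every name, the grid size, and the mask fraction are assumptions, and measure() applied to the reference image merely stands in for the second intermediate image generation described later.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128

# Stand-ins for the real optical system: diffusion member and partial transmission mask
Phi = np.exp(1j * 2 * np.pi * rng.random((N, N)))      # transmission function of the diffuser
Rect = np.zeros((N, N))
Rect[N // 4: 3 * N // 4, N // 4: 3 * N // 4] = 1.0      # transparent portion (1/4 of the area, assumed)

ft = lambda u: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u), norm="ortho"))
ift = lambda U: np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(U), norm="ortho"))

def measure(field):
    """Simplified real optical system: Fourier lens -> diffuser -> Fourier lens -> mask -> sensor."""
    return Rect * ft(ft(field) * Phi)

def intermediate(image):
    """Simplified virtual optical system: back-propagate using the phase conjugate of the diffuser."""
    return ift(ift(image) * np.conj(Phi))

obj = np.zeros((N, N), dtype=complex)
obj[60:68, 60:68] = 1.0                                 # object with fine structure
ref = np.zeros((N, N), dtype=complex)                   # reference image (a solid black image here)

first_intermediate = intermediate(measure(obj))         # from the optical complex amplitude image
second_intermediate = intermediate(measure(ref))        # from the reference image
difference = first_intermediate - second_intermediate   # the output image is generated from this difference
```

The exact step that turns this difference into the final output image is not reproduced in the sketch; the text only states that the output image is generated based on the subtraction result.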
Preferably, the real optical system has an optical complex amplitude image acquisition optical path for obtaining the optical complex amplitude image, which reaches the sensor via the diffusion member, and a reference image acquisition optical path for obtaining the reference image, which reaches the sensor without passing through the diffusion member.

Preferably, the reference image acquisition optical path branches off on the object-to-be-measured side of the diffusion member of the optical complex amplitude image acquisition optical path. In this case, it is preferable that the reference image acquisition optical path merges on the sensor side of the diffusion member of the optical complex amplitude image acquisition optical path. Further, in the reference image acquisition optical path, the input light wave is preferably condensed to a size that fits in the transmission portion of the partial transmission mask.
The optical complex amplitude image acquisition optical path and the reference image acquisition optical path may be shared: the diffusion member and the partial transmission mask are moved between an approach position, in which they are inserted into the optical complex amplitude image acquisition optical path, and a retracted position, in which they are withdrawn from it, and the optical complex amplitude image acquisition optical path preferably functions as the reference image acquisition optical path when the diffusion member and the partial transmission mask are in the retracted position.

The diffusion member is preferably a random diffuser plate that randomly diffuses the input light wave.
The diffusion member may be a spatial light modulator capable of changing the diffusion characteristics of the input light wave. In that case, the sensor measures the intensity and phase of a plurality of types of input light waves whose diffusion characteristics have been changed by the spatial light modulator and outputs a plurality of optical complex amplitude images with different diffusion characteristics of the input light wave, and the virtual optical system preferably generates an intermediate image from each of the plurality of optical complex amplitude images and uses an image obtained by averaging the plurality of intermediate images as the output image.
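A minimal sketch of this addition averaging, under the same simplified Fourier-optics stand-ins as above; the number of phase patterns, the grid size, and all names are assumptions rather than the patent's actual processing, and the phase-conjugate back-propagation is a placeholder for the intermediate image generation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, N_pat = 128, 8                                        # grid size and number of SLM phase patterns (assumed)

ft = lambda u: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u), norm="ortho"))
ift = lambda U: np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(U), norm="ortho"))

Rect = np.zeros((N, N))
Rect[N // 4: 3 * N // 4, N // 4: 3 * N // 4] = 1.0       # partial transmission mask

obj = np.zeros((N, N), dtype=complex)
obj[60:68, 60:68] = 1.0                                   # object with fine structure

intermediates = []
for k in range(N_pat):
    Phi_k = np.exp(1j * 2 * np.pi * rng.random((N, N)))   # k-th phase pattern displayed on the modulator
    measured_k = Rect * ft(ft(obj) * Phi_k)               # optical complex amplitude image for pattern k
    intermediates.append(ift(ift(measured_k) * np.conj(Phi_k)))  # intermediate image for pattern k

output = sum(intermediates) / N_pat                       # addition average used as the output image
```

Each measurement sees a different scrambling of the object spectrum, so the residual errors partly cancel in the average, which is the intuition behind averaging over many diffusion patterns.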
The real optical system preferably includes a pair of Fourier transform lenses arranged on the object-to-be-measured side of the diffusion member and on the sensor side of the diffusion member.
The diffusion member may be a random diffuser plate that randomly diffuses the input light wave and is provided, in its central portion, with a light-shielding portion that cuts the low spatial frequency components formed by the Fourier transform lens arranged on the object-to-be-measured side.
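For illustration, the sketch below builds such a diffuser as a random phase screen whose central region has zero transmission, so that the low spatial frequency components are cut and only the high spatial frequency components are diffused toward the mask and the sensor. The sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128

ft = lambda u: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u), norm="ortho"))

Y, X = np.mgrid[-N // 2: N // 2, -N // 2: N // 2]
shield = (np.abs(X) > 8) | (np.abs(Y) > 8)                # central light-shielding portion (size assumed)
Phi = np.exp(1j * 2 * np.pi * rng.random((N, N))) * shield  # diffuser with an opaque centre

obj = np.ones((N, N), dtype=complex)                      # bright background
obj[60:68, 60:68] = 0.2                                   # fine mark on the background

spectrum = ft(obj)                                        # spectrum formed by the first Fourier transform lens
diffused = spectrum * Phi                                 # low frequencies cut, only high frequencies diffused

Rect = np.zeros((N, N))
Rect[N // 4: 3 * N // 4, N // 4: 3 * N // 4] = 1.0        # partial transmission mask
measured = Rect * ft(diffused)                            # measured image lacking the low frequency content
```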
The diffusion member may be a random diffuser plate that randomly diffuses the input light wave and is provided, in its central portion, with an opening that transmits the low spatial frequency components formed by the Fourier transform lens arranged on the object-to-be-measured side; in this case, the real optical system is preferably provided with a condenser lens that concentrates the low spatial frequency components transmitted through the opening onto the transmission portion of the partial transmission mask.
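A hedged sketch of the corresponding processing: the central opening passes the low spatial frequency components undiffused, only the high spatial frequency components are multiplied by the diffuser phase, and the two components are processed separately and then synthesized. The condenser lens and the partial transmission mask are not modeled here, and all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 128

ft = lambda u: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u), norm="ortho"))
ift = lambda U: np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(U), norm="ortho"))

Y, X = np.mgrid[-N // 2: N // 2, -N // 2: N // 2]
opening = (np.abs(X) <= 8) & (np.abs(Y) <= 8)             # central opening in the diffuser (size assumed)
Phi = np.exp(1j * 2 * np.pi * rng.random((N, N)))         # diffusing region of the plate

obj = np.ones((N, N), dtype=complex)                      # bright background
obj[60:68, 60:68] = 0.2                                   # fine mark

spectrum = ft(obj)
low = spectrum * opening                                  # transmitted through the opening, not diffused
high = spectrum * (~opening) * Phi                        # only the high frequencies are diffused

# Stand-in for the generation unit: process the two components separately, then synthesize.
processed_high = ift(high * np.conj(Phi))                 # phase-conjugate descrambling of the diffused part
processed_low = ift(low)                                  # the undiffused part needs no descrambling
output = processed_high + processed_low                   # synthesized result (equals ift(spectrum) here)
```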
In the method of operating the super-resolution measuring device of the present disclosure, a real optical system including a diffusion member that diffuses the input light wave of an object to be measured and a partial transmission mask that partially transmits the diffused input light wave is used; a sensor measures the intensity and phase of the input light wave transmitted through the partial transmission mask and outputs an optical complex amplitude image; and, in a virtual optical system, calculation processing that reproduces, from the optical complex amplitude image, the input light wave of the object to be measured including a super-resolution component, which is a component with a resolution exceeding the resolution of the sensor, and that includes a phase conjugation function of the transmission function of the diffusion member, is performed on a computer to generate an output image.

According to the technology of the present disclosure, it is possible to provide a super-resolution measuring device and a method of operating a super-resolution measuring device capable of reducing the image quality deterioration that occurs in a super-resolution output image.
[First Embodiment]
In FIG. 1, the super-resolution measuring device 10 includes a stage 11, a real optical system 12, a sensor 13, and a processing device 14. The object to be measured 15 is set on the stage 11. The real optical system 12 takes in the input light wave of the object to be measured 15 set on the stage 11 and guides it to the sensor 13. The sensor 13 measures the intensity and phase of the input light wave guided by the real optical system 12 and outputs an optical complex amplitude image 54 (see FIG. 6). The sensor 13 transmits the optical complex amplitude image 54 to the processing device 14. The processing device 14 is, for example, a desktop personal computer. The processing device 14 generates an output image 76 (see FIG. 9) based on the optical complex amplitude image 54 from the sensor 13.
In FIGS. 2 and 3, the real optical system 12 has an optical complex amplitude image acquisition optical path 20 and a reference image acquisition optical path 21. The optical complex amplitude image acquisition optical path 20 is an optical path for obtaining the optical complex amplitude image 54. The reference image acquisition optical path 21 is an optical path for obtaining a reference image 61 (see FIG. 7) of the object to be measured 15. A random diffuser plate 22, which is an example of the "diffusion member" according to the technique of the present disclosure, is arranged in the optical complex amplitude image acquisition optical path 20, whereas no random diffuser plate 22 is arranged in the reference image acquisition optical path 21. That is, the optical complex amplitude image acquisition optical path 20 reaches the sensor 13 via the random diffuser plate 22, while the reference image acquisition optical path 21 reaches the sensor 13 without passing through the random diffuser plate 22.

The random diffuser plate 22 has an uneven surface 23 on which random irregularities are formed. The random diffuser plate 22 is arranged with the uneven surface 23 facing the sensor 13 side. The random diffuser plate 22 diffuses the input light wave of the object to be measured 15.

In addition to the random diffuser plate 22, a beam splitter 25, a Fourier transform lens 26, a Fourier transform lens 27, a beam splitter 28, and a partial transmission mask 29 are arranged in the optical complex amplitude image acquisition optical path 20.
The beam splitter 25 transmits the input light wave of the object to be measured 15 toward the Fourier transform lens 26 and also reflects the input light wave of the object to be measured 15 at 90° toward the mirror 30 of the reference image acquisition optical path 21. By this beam splitter 25, the reference image acquisition optical path 21 branches off on the object-to-be-measured-15 side of the random diffuser plate 22 of the optical complex amplitude image acquisition optical path 20.

The Fourier transform lenses 26 and 27 are arranged at positions sandwiching the random diffuser plate 22. More specifically, the Fourier transform lens 26 is arranged on the object-to-be-measured-15 side of the random diffuser plate 22, and the Fourier transform lens 27 is arranged on the sensor-13 side of the random diffuser plate 22. A Fourier transform lens is defined as a lens with the property that, when a transmitting object is placed on the front focal plane of the lens and illuminated from behind with a plane wave having a uniform intensity (amplitude) distribution, the intensity distribution of the diffraction image obtained on the rear focal plane of the lens is represented by the Fourier transform of the intensity distribution of the object.

The beam splitter 28 transmits the input light wave of the object to be measured 15 diffused by the random diffuser plate 22 toward the partial transmission mask 29 and also reflects the input light wave of the object to be measured 15 arriving from the reference image acquisition optical path 21 at 90° toward the partial transmission mask 29. By this beam splitter 28, the reference image acquisition optical path 21 merges on the sensor-13 side of the random diffuser plate 22 of the optical complex amplitude image acquisition optical path 20.
A mirror 30, a lens 31, a lens 32, and a mirror 33 are arranged in the reference image acquisition optical path 21. The mirror 30 reflects the input light wave of the object to be measured 15 from the beam splitter 25 at 90° toward the lens 31. The lenses 31 and 32 condense the input light wave of the object to be measured 15 to a size that fits in the transmission portion 35 (see also FIG. 5) of the partial transmission mask 29. The mirror 33 reflects the input light wave of the object to be measured 15 condensed by the lenses 31 and 32 at 90° toward the beam splitter 28 of the optical complex amplitude image acquisition optical path 20.

A shutter 36 is provided between the Fourier transform lens 27 of the optical complex amplitude image acquisition optical path 20 and the beam splitter 28. Similarly, a shutter 37 is provided between the mirror 33 of the reference image acquisition optical path 21 and the beam splitter 28 of the optical complex amplitude image acquisition optical path 20.

The shutter 36 is movable between an approach position, in which it is inserted into the optical complex amplitude image acquisition optical path 20, and a retracted position, in which it is withdrawn from the optical complex amplitude image acquisition optical path 20. Similarly, the shutter 37 is movable between an approach position, in which it is inserted into the reference image acquisition optical path 21, and a retracted position, in which it is withdrawn from the reference image acquisition optical path 21. When the shutters 36 and 37 are in the approach position, the input light wave of the object to be measured 15 is blocked; when they are in the retracted position, the input light wave of the object to be measured 15 is allowed to pass.
When the optical complex amplitude image 54 is obtained using the optical complex amplitude image acquisition optical path 20, the shutter 36 is moved to the retracted position and the shutter 37 is moved to the approach position, as shown in FIG. 2. Conversely, when the reference image 61 is obtained using the reference image acquisition optical path 21, the shutter 36 is moved to the approach position and the shutter 37 is moved to the retracted position, as shown in FIG. 3. In this way, the optical complex amplitude image acquisition optical path 20 and the reference image acquisition optical path 21 are used selectively. The positions of the shutters 36 and 37 are not limited to the positions exemplified above: the shutter 36 may be provided between the beam splitter 25 and the Fourier transform lens 26, and the shutter 37 may be provided between the beam splitter 25 and the mirror 30.

FIG. 4 conceptually shows the action of the random diffuser plate 22. FIG. 4A shows the case without the random diffuser plate 22, and FIG. 4B shows the case with the random diffuser plate 22. As shown in FIG. 4A, without the random diffuser plate 22, the input light wave (indicated by the broken-line arrows) from a region 41 corresponding to one pixel of the object to be measured 15 is incident on one of the plurality of pixels 40 of the sensor 13; that is, the one-pixel region 41 of the object to be measured 15 and the pixel 40 of the sensor 13 are in a one-to-one relationship. In contrast, as shown in FIG. 4B, with the random diffuser plate 22, the input light wave from the one-pixel region 41 of the object to be measured 15 is diffused by the random diffuser plate 22 and is incident on many pixels 40 of the sensor 13; that is, the one-pixel region 41 of the object to be measured 15 and the pixels 40 of the sensor 13 are in a one-to-many relationship. Because the input light wave from the one-pixel region 41 of the object to be measured 15 is thus incident on many pixels 40 of the sensor 13, the output image 76 that is finally output contains super-resolution components exceeding the resolution of the sensor 13.

In FIG. 5, the partial transmission mask 29 has a square shape in which a transmission portion 35, a square hole, is formed in the central portion. The portion of the partial transmission mask 29 other than the transmission portion 35 blocks the input light wave of the object to be measured 15. For this reason, when the optical complex amplitude image acquisition optical path 20 is used, the input light wave of the object to be measured 15 incident on the sensor 13 is limited to 1/2 or less, more preferably 1/4 or less, of the input light wave of the object to be measured 15 diffused by the random diffuser plate 22. When the reference image acquisition optical path 21 is used, the input light wave of the object to be measured 15 is condensed by the lenses 31 and 32 to a size that fits in the transmission portion 35, as described above, so the input light wave of the object to be measured 15 incident on the sensor 13 is not limited by the partial transmission mask 29. The partial transmission mask 29 is not limited to a square shape and may be, for example, circular; likewise, the transmission portion 35 is not limited to a square shape and may be, for example, circular.

For the sensor 13, for example, a general sensor for phase-shift digital holography is used. Specifically, the sensor 13 is a CCD image sensor, a CMOS (Complementary Metal Oxide Semiconductor) image sensor, or the like. However, the type of the sensor 13 is not particularly limited as long as it can measure the spatial distribution of the optical complex amplitude of the input light wave.
FIG. 6 shows how the optical complex amplitude image 54 is obtained using the optical complex amplitude image acquisition optical path 20. First, let the optical complex amplitude of the input image 50 represented by the input light wave of the object to be measured 15 be a_R(x, y), and let the complex Fourier transform function of the Fourier transform lens 26 be F_R{*}. Then the optical complex amplitude of the image 51 after the complex Fourier transform can be expressed as

F_R{a_R(x, y)} = A_R(X, Y).

When the transmission function of the random diffuser plate 22 is Φ_R(X, Y), the optical complex amplitude of the image 52 transmitted through the random diffuser plate 22 can be expressed as

A_R(X, Y) × Φ_R(X, Y).

Here, the way of acquiring A_R(X, Y) × Φ_R(X, Y) is not necessarily limited to the method using the random diffuser plate 22; transmitted light or scattered light obtained by irradiating the object to be measured 15 with light whose phase is spatially randomly modulated may also be used.

When the complex Fourier transform function of the Fourier transform lens 27 is F_R{*}, the same as that of the Fourier transform lens 26, the optical complex amplitude of the image 53 after the complex Fourier transform can be expressed as

F_R{A_R(X, Y) × Φ_R(X, Y)}.

When the transmission function of the partial transmission mask 29 is Rect_R{*}, the optical complex amplitude of the optical complex amplitude image 54 that passes through the partial transmission mask 29 and is measured by the sensor 13 can be expressed as

Rect_R{F_R{A_R(X, Y) × Φ_R(X, Y)}}.

Here, Rect_R{*} takes the value * in the transmission portion 35 of the partial transmission mask 29 and the value 0 in the surrounding light-shielding portion other than the transmission portion 35.
FIG. 7 shows how the reference image 61 is obtained using the reference image acquisition optical path 21. The Fourier transform lenses 26 and 27 and the random diffuser plate 22 are not arranged in the reference image acquisition optical path 21, and, as described above, the input light wave of the object to be measured 15 is condensed to a size that fits in the transmission portion 35. Therefore, the optical complex amplitude of the reference image 61 measured by the sensor 13 can be expressed as b_R(x, y), the same as the optical complex amplitude of the input image 60 represented by the input light wave of the object to be measured 15. At this time, even if the object to be measured 15 has fine information exceeding the resolution of the sensor 13, the resolution of the measured reference image 61 is limited to the resolution of the sensor 13; that is, the reference image 61 has no super-resolution component exceeding the resolution of the sensor 13.
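The expressions derived with FIG. 6 above can be transcribed almost directly into NumPy. The sketch below is illustrative only: centered FFTs stand in for the Fourier transform lenses 26 and 27, and the grid size, the test object, and the size of the transparent portion are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 256

a_R = np.zeros((N, N), dtype=complex)                          # input image a_R(x, y)
a_R[120:136, 120:136] = 1.0                                    # simple bright square as the object

F_R = lambda u: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u), norm="ortho"))  # Fourier transform lens

A_R = F_R(a_R)                                                 # F_R{a_R(x, y)} = A_R(X, Y)
Phi_R = np.exp(1j * 2 * np.pi * rng.random((N, N)))            # transmission function of the random diffuser plate 22
image_52 = A_R * Phi_R                                         # A_R(X, Y) x Phi_R(X, Y)
image_53 = F_R(image_52)                                       # F_R{A_R x Phi_R}

Rect_R = np.zeros((N, N))                                      # partial transmission mask 29
Rect_R[3 * N // 8: 5 * N // 8, 3 * N // 8: 5 * N // 8] = 1.0   # transparent portion 35 (1/16 of the area, assumed)
image_54 = Rect_R * image_53                                   # Rect_R{F_R{A_R x Phi_R}}: field at the sensor 13

intensity, phase = np.abs(image_54) ** 2, np.angle(image_54)   # what a phase-shift holography sensor records
```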
In FIG. 8, the computer constituting the processing device 14 includes a storage device 65, a memory 66, a CPU (Central Processing Unit) 67, a communication unit 68, a display 69, and an input device 70, which are interconnected via a bus line 71.

The storage device 65 is a hard disk drive built into the computer constituting the processing device 14 or connected to it via a cable or a network, or a disk array in which a plurality of hard disk drives are connected. The storage device 65 stores a control program such as an operating system, various application programs, and various data associated with these programs. A solid state drive may be used instead of the hard disk drive.

The memory 66 is a work memory for the CPU 67 to execute processing. The CPU 67 comprehensively controls each part of the computer by loading the program stored in the storage device 65 into the memory 66 and executing processing according to the program.

The communication unit 68 is a network interface that controls the transmission of various information via various networks. The display 69 displays various screens. The computer constituting the processing device 14 receives the input of operation instructions from the input device 70 through the various screens. The input device 70 is a keyboard, a mouse, a touch panel, or the like.
In FIG. 9, an operation program 75 is stored in the storage device 65. The operation program 75 is an application program for causing the computer to function as the processing device 14. The storage device 65 also stores the optical complex amplitude image 54, the reference image 61, an output image 76, and transfer function information 77.

When the operation program 75 is started, the CPU 67 of the computer constituting the processing device 14 cooperates with the memory 66 and the like to function as an image acquisition unit 80, a read/write (hereinafter abbreviated as RW (Read Write)) control unit 81, and a generation unit 82.

The image acquisition unit 80 acquires the optical complex amplitude image 54 and the reference image 61 from the sensor, and outputs the acquired optical complex amplitude image 54 and reference image 61 to the RW control unit 81.

The RW control unit 81 controls storage of various data in the storage device 65 and reading of various data from the storage device 65. The RW control unit 81 stores the optical complex amplitude image 54 and the reference image 61 from the image acquisition unit 80 in the storage device 65. The RW control unit 81 also reads the optical complex amplitude image 54 and the reference image 61 from the storage device 65 and outputs them to the generation unit 82.

The RW control unit 81 reads the transfer function information 77 from the storage device 65 and outputs it to the generation unit 82. The RW control unit 81 also stores the output image 76 from the generation unit 82 in the storage device 65.
The generation unit 82 generates the output image 76 by performing calculation processing on the optical complex amplitude image 54. The calculation processing is processing, performed on the computer constituting the processing device 14, that reproduces from the optical complex amplitude image 54 the input light wave of the object to be measured 15 including a super-resolution component, which is a component having a resolution exceeding the resolution of the sensor 13. This calculation processing includes the transmission function Φ_V1(X, Y) of a virtual random diffuser plate 107, described later, which is an example of the "phase conjugate function of the transmission function of the diffusion member". Through this calculation processing, the output image 76 is generated as an image representing the input light wave including the super-resolution component. The generation unit 82 is therefore an example of the "virtual optical system" according to the technique of the present disclosure.

The generation unit 82 includes a first intermediate image generation unit 85, a second intermediate image generation unit 86, and a subtraction unit 87. The optical complex amplitude image 54 is input to the first intermediate image generation unit 85. The first intermediate image generation unit 85 generates a first intermediate image 88 (see also FIG. 11) from the optical complex amplitude image 54 and outputs it to the subtraction unit 87.

Meanwhile, the reference image 61 is input to the second intermediate image generation unit 86. The second intermediate image generation unit 86 generates a second intermediate image 89 (see also FIG. 14) from the reference image 61 and outputs it to the subtraction unit 87.

The subtraction unit 87 subtracts the second intermediate image 89 from the first intermediate image 88, and generates the output image 76 based on the resulting image.
As shown in FIG. 10, a plurality of transfer functions are registered in the transfer function information 77. These transfer functions are used when the first intermediate image generation unit 85 generates the first intermediate image 88 and when the second intermediate image generation unit 86 generates the second intermediate image 89.
F_V1⁻¹{*} is the complex inverse Fourier transform function of the virtual Fourier transform lenses 106 and 108 (see FIG. 12). F_V1⁻¹{*} is the inverse of the complex Fourier transform function F_R{*} of the Fourier transform lenses 26 and 27 of the real optical system 12. Φ_V1(X, Y) is the transmission function of the virtual random diffuser plate 107 (see FIG. 12). Φ_V1(X, Y) is a function that virtually realizes on the computer the action of returning the diffused input light wave to the input light wave before diffusion, opposite to the action of the random diffuser plate 22 diffusing the input light wave of the object to be measured 15 in the real optical system 12. For this reason, Φ_V1(X, Y) has a phase conjugate relationship with the transmission function Φ_R(X, Y) of the random diffuser plate 22 of the real optical system 12. That is,

Φ_R(X, Y) × Φ_V1(X, Y) = 1

Φ_V1(X, Y) is an example of the "phase conjugate function of the transmission function of the diffusion member" according to the technique of the present disclosure.
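As a concrete illustration, if the random diffuser plate 22 is modeled as a pure phase screen Φ_R(X, Y) = exp(iθ(X, Y)), its phase conjugate Φ_V1(X, Y) = exp(-iθ(X, Y)) satisfies Φ_R × Φ_V1 = 1 at every point. The following NumPy snippet is a minimal sketch of this relation; the array shape and the assumption of a unit-amplitude, phase-only diffuser are illustrative rather than taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(256, 256))  # random phase pattern

phi_r = np.exp(1j * theta)   # transmission function of the random diffuser, Phi_R
phi_v1 = np.conj(phi_r)      # phase conjugate, Phi_V1 = exp(-1j * theta)

# For a pure phase diffuser the pointwise product is identically 1.
assert np.allclose(phi_r * phi_v1, 1.0)
```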
F_V2{*} is the complex Fourier transform function of the virtual Fourier transform lenses 131 and 133 (see FIG. 15). F_V2{*} is equivalent to the complex Fourier transform function F_R{*} of the Fourier transform lenses 26 and 27 of the real optical system 12. Φ_V2(X, Y) is the transmission function of the virtual random diffuser plate 132 (see FIG. 15). Φ_V2(X, Y) is equivalent to the transmission function Φ_R(X, Y) of the random diffuser plate 22 of the real optical system 12. Rect_V{*} is the transmission function of the virtual partial transmission mask 134 (see FIG. 15). Rect_V{*} is equivalent to the transmission function Rect_R{*} of the partial transmission mask 29 of the real optical system 12.
As shown in FIG. 11, the first intermediate image generation unit 85 includes an inverse fast Fourier transform unit 95, a calculation unit 96, and an inverse fast Fourier transform unit 97.

The inverse fast Fourier transform unit 95 applies an inverse fast Fourier transform to the optical complex amplitude image 54 using the complex inverse Fourier transform function F_V1⁻¹{*} of the virtual Fourier transform lens 106.
In general, F⁻¹{F{G(X, Y)}} = G(X, Y) holds. However, because the optical complex amplitude Rect_R{F_R{A_R(X, Y) × Φ_R(X, Y)}} of the optical complex amplitude image 54 is multiplied by the transmission function Rect_R of the partial transmission mask 29, this identity does not hold here. Nevertheless, the optical complex amplitude of the image 98 after the inverse fast Fourier transform can be expressed approximately as

F_V1⁻¹{Rect_R{F_R{A_R(X, Y) × Φ_R(X, Y)}}} ≈ A_SR(X, Y) × Φ_R(X, Y)

A_SR(X, Y) contains a super-resolution component.
The calculation unit 96 multiplies the optical complex amplitude A_SR(X, Y) × Φ_R(X, Y) of the image 98 after the inverse fast Fourier transform by the transmission function Φ_V1(X, Y) of the virtual random diffuser plate 107, giving A_SR(X, Y) × Φ_R(X, Y) × Φ_V1(X, Y). As described above, Φ_V1(X, Y) has a phase conjugate relationship with Φ_R(X, Y), and Φ_R(X, Y) × Φ_V1(X, Y) = 1. The optical complex amplitude of the image 99 after this calculation is therefore A_SR(X, Y). In this way, the calculation processing performed by the generation unit 82 includes the transmission function Φ_V1(X, Y) of the virtual random diffuser plate 107, which is an example of the "phase conjugate function of the transmission function of the diffusion member".
The inverse fast Fourier transform unit 97 applies an inverse fast Fourier transform to the calculated image 99 using the complex inverse Fourier transform function F_V1⁻¹{*} of the virtual Fourier transform lens 108. This generates the first intermediate image 88. The optical complex amplitude of the first intermediate image 88 can be expressed as

F_V1⁻¹{A_SR(X, Y)} = a_SR(x, y)
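Taken together, the pipeline of FIGS. 11 and 12 amounts to an inverse FFT, a multiplication by the phase conjugate of the diffuser transmission function, and another inverse FFT. Below is a minimal NumPy sketch under the same FFT-based modeling assumptions as the earlier forward-model sketch; the function name is illustrative.

```python
import numpy as np

def first_intermediate_image(measured, phi_r):
    """Sketch of the first intermediate image generation unit 85.

    measured : optical complex amplitude image 54, Rect_R{F_R{A_R x Phi_R}},
               with zeros outside the transmission portion of the mask
    phi_r    : transmission function Phi_R(X, Y) of the random diffuser plate 22
    """
    img98 = np.fft.ifft2(measured)  # inverse FFT unit 95: approx. A_SR x Phi_R
    img99 = img98 * np.conj(phi_r)  # calculation unit 96: multiply by Phi_V1
    a_sr = np.fft.ifft2(img99)      # inverse FFT unit 97: first intermediate image 88
    return a_sr
```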
FIG. 12 shows a first virtual optical system 105 equivalent to the processing of the first intermediate image generation unit 85. The first virtual optical system 105 is composed of a virtual Fourier transform lens 106, a virtual random diffuser plate 107, and a virtual Fourier transform lens 108. The action of the virtual Fourier transform lens 106 is equivalent to the processing of the inverse fast Fourier transform unit 95. The action of the virtual random diffuser plate 107 is equivalent to the processing of the calculation unit 96. The action of the virtual Fourier transform lens 108 is equivalent to the processing of the inverse fast Fourier transform unit 97.

As shown in FIGS. 13 and 14, the second intermediate image generation unit 86 includes a fast Fourier transform unit 110, a calculation unit 111, a fast Fourier transform unit 112, a calculation unit 113, an inverse fast Fourier transform unit 114, a calculation unit 115, and an inverse fast Fourier transform unit 116.
As shown in FIG. 13, the fast Fourier transform unit 110 applies a fast Fourier transform to the reference image 61 using the complex Fourier transform function F_V2{*} of the virtual Fourier transform lens 131. The optical complex amplitude of the image 117 after the fast Fourier transform can be expressed as

F_V2{b_R(x, y)} = B_R(X, Y)

Since the reference image 61 is a measurement of the input light wave of the object to be measured 15 focused so as to fit within the transmission portion 35 of the partial transmission mask 29, it is resized to the same size as the optical complex amplitude image 54 before being input to the fast Fourier transform unit 110.
The calculation unit 111 multiplies the optical complex amplitude B_R(X, Y) of the image 117 after the fast Fourier transform by the transmission function Φ_V2(X, Y) of the virtual random diffuser plate 132. The optical complex amplitude of the image 118 after this calculation can be expressed as

B_R(X, Y) × Φ_V2(X, Y)
The fast Fourier transform unit 112 applies a fast Fourier transform to the calculated image 118 using the complex Fourier transform function F_V2{*} of the virtual Fourier transform lens 133. The optical complex amplitude of the image 119 after the fast Fourier transform can be expressed as

F_V2{B_R(X, Y) × Φ_V2(X, Y)}
In FIG. 14, the calculation unit 113 uses the transmission function Rect_V{*} of the virtual partial transmission mask 134 to apply a calculation equivalent to the partial transmission mask 29 to the image 119 after the fast Fourier transform. The optical complex amplitude of the image 120 after this calculation can be expressed as

Rect_V{F_V2{B_R(X, Y) × Φ_V2(X, Y)}}
The inverse fast Fourier transform unit 114, like the inverse fast Fourier transform unit 95 of the first intermediate image generation unit 85, applies an inverse fast Fourier transform to the calculated image 120 using the complex inverse Fourier transform function F_V1⁻¹{*} of the virtual Fourier transform lens 106.

As in the case of the first intermediate image generation unit 85, the optical complex amplitude of the image 121 after the inverse fast Fourier transform can be expressed approximately as

F_V1⁻¹{Rect_V{F_V2{B_R(X, Y) × Φ_V2(X, Y)}}} ≈ B_NSR(X, Y) × Φ_V2(X, Y)

B_NSR(X, Y) does not contain a super-resolution component.
The calculation unit 115 multiplies the optical complex amplitude B_NSR(X, Y) × Φ_V2(X, Y) of the image 121 after the inverse fast Fourier transform by the transmission function Φ_V1(X, Y) of the virtual random diffuser plate 107, giving B_NSR(X, Y) × Φ_V2(X, Y) × Φ_V1(X, Y). As described above, Φ_V1(X, Y) has a phase conjugate relationship with Φ_R(X, Y), and Φ_V2(X, Y) is equivalent to Φ_R(X, Y). Therefore Φ_V2(X, Y) × Φ_V1(X, Y) = 1, and the optical complex amplitude of the image 122 after this calculation is B_NSR(X, Y).
The inverse fast Fourier transform unit 116 applies an inverse fast Fourier transform to the calculated image 122 using the complex inverse Fourier transform function F_V1⁻¹{*} of the virtual Fourier transform lens 108. This generates the second intermediate image 89. The optical complex amplitude of the second intermediate image 89 can be expressed as

F_V1⁻¹{B_NSR(X, Y)} = b_NSR(x, y)
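The second-intermediate-image pipeline of FIGS. 13 to 15 therefore first pushes the reference image through a virtual copy of the real measurement path (FFT, diffuser, FFT, mask) and then through the same two reconstruction steps as the first intermediate image. A minimal NumPy sketch under the same modeling assumptions follows; the function name and intermediate variable names are illustrative.

```python
import numpy as np

def second_intermediate_image(b_r, phi_r, mask):
    """Sketch of the second intermediate image generation unit 86.

    b_r   : reference image b_R(x, y), already resized to the size of the
            optical complex amplitude image 54
    phi_r : transmission function of the random diffuser (Phi_V2 is equivalent
            to Phi_R, so the same array is reused here)
    mask  : boolean array of the virtual partial transmission mask (Rect_V)
    """
    img117 = np.fft.fft2(b_r)             # FFT unit 110: B_R = F_V2{b_R}
    img118 = img117 * phi_r               # calculation unit 111: x Phi_V2
    img119 = np.fft.fft2(img118)          # FFT unit 112: F_V2{...}
    img120 = np.where(mask, img119, 0.0)  # calculation unit 113: Rect_V{...}
    img121 = np.fft.ifft2(img120)         # inverse FFT unit 114: approx. B_NSR x Phi_V2
    img122 = img121 * np.conj(phi_r)      # calculation unit 115: x Phi_V1
    b_nsr = np.fft.ifft2(img122)          # inverse FFT unit 116: second intermediate image 89
    return b_nsr
```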
FIG. 15 shows a second virtual optical system 130 equivalent to the processing of the second intermediate image generation unit 86. The second virtual optical system 130 is composed of the virtual Fourier transform lens 106, the virtual random diffuser plate 107, and the virtual Fourier transform lens 108 of the first virtual optical system 105, plus a virtual Fourier transform lens 131, a virtual random diffuser plate 132, a virtual Fourier transform lens 133, and a virtual partial transmission mask 134. The action of the virtual Fourier transform lens 131 is equivalent to the processing of the fast Fourier transform unit 110. The action of the virtual random diffuser plate 132 is equivalent to the processing of the calculation unit 111. The action of the virtual Fourier transform lens 133 is equivalent to the processing of the fast Fourier transform unit 112. The action of the virtual partial transmission mask 134 is equivalent to the processing of the calculation unit 113.

The virtual Fourier transform lens 131, the virtual random diffuser plate 132, the virtual Fourier transform lens 133, and the virtual partial transmission mask 134 are equivalent to the Fourier transform lens 26, the random diffuser plate 22, the Fourier transform lens 27, and the partial transmission mask 29 of the real optical system 12, respectively. That is, the processing of the fast Fourier transform unit 110, the calculation unit 111, the fast Fourier transform unit 112, and the calculation unit 113 of the second intermediate image generation unit 86 is equivalent to virtualizing the action of the optical complex amplitude image acquisition optical path 20 of the real optical system 12.
As shown in FIG. 16, the subtraction unit 87 subtracts the optical complex amplitude b_NSR(x, y) of the second intermediate image 89 from the optical complex amplitude a_SR(x, y) of the first intermediate image 88. More specifically, the real parts of a_SR(x, y) and b_NSR(x, y) are subtracted from each other, as are the imaginary parts. This yields an output image 76 (O(x, y)) that approximates the result of subtracting, in the same way, the real parts and the imaginary parts of the original optical complex amplitudes a_R(x, y) and b_R(x, y). That is,

Re{a_R(x, y) - b_R(x, y)} ≈ Re{a_SR(x, y) - b_NSR(x, y)}
Im{a_R(x, y) - b_R(x, y)} ≈ Im{a_SR(x, y) - b_NSR(x, y)}

where Re{*} denotes the real part and Im{*} denotes the imaginary part.
More specifically, the output image 76 (O(x, y)) is obtained by

O(x, y) = C × {a_SR(x, y) - b_NSR(x, y)} + b_R(x, y)
Here, C is a normalization constant. Because only part of the input light wave diffused by the random diffuser plate 22 is taken in through the partial transmission mask 29 and measured by the sensor 13, the intensity differs greatly from the case where the reference image 61 is obtained without diffusion by the random diffuser plate 22; the normalization constant C corrects for this difference. The normalization constant C is obtained by examining in advance how much the intensity of the optical complex amplitude a_SR(x0, y0) of the first intermediate image 88 decreases when a point light source a_R(x0, y0) is placed at the center (x0, y0). That is,

C² = {a_R(x0, y0)}² / {a_SR(x0, y0)}²

The normalization constant C is stored in the storage device 65 by the RW control unit 81. The normalization constant C is also read from the storage device 65 by the RW control unit 81 and output to the subtraction unit 87.
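A minimal sketch of how the point-source calibration of C and the final combination O(x, y) = C × {a_SR - b_NSR} + b_R might be implemented under the same FFT-based model as the earlier sketches; the calibration routine and all names are illustrative assumptions, not code from the patent.

```python
import numpy as np

def calibrate_normalization_constant(phi_r, mask, shape=(256, 256)):
    """Estimate C from C^2 = |a_R(x0, y0)|^2 / |a_SR(x0, y0)|^2.

    A centered point source is pushed through a simple FFT model of the real
    path (diffuser + mask) and then through the first-intermediate-image
    reconstruction; the intensity loss at the center pixel gives C.
    """
    x0, y0 = shape[0] // 2, shape[1] // 2
    a_r = np.zeros(shape, dtype=complex)
    a_r[x0, y0] = 1.0                                   # point light source a_R(x0, y0)

    measured = np.where(mask, np.fft.fft2(np.fft.fft2(a_r) * phi_r), 0.0)
    a_sr = np.fft.ifft2(np.fft.ifft2(measured) * np.conj(phi_r))

    return np.abs(a_r[x0, y0]) / np.abs(a_sr[x0, y0])   # C

def output_image(a_sr, b_nsr, b_r, c):
    """O(x, y) = C x {a_SR(x, y) - b_NSR(x, y)} + b_R(x, y)."""
    return c * (a_sr - b_nsr) + b_r
```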
In the first embodiment, the calculation processing for generating the output image 76 corresponds to: the processing in the first intermediate image generation unit 85 of generating the first intermediate image 88 using the complex inverse Fourier transform function F_V1⁻¹{*} of the virtual Fourier transform lenses 106 and 108 and the transmission function Φ_V1(X, Y) of the virtual random diffuser plate 107 shown in FIG. 10; the processing in the second intermediate image generation unit 86 of generating the second intermediate image 89 using all of the transfer functions shown in FIG. 10; and the processing in the subtraction unit 87 of generating the output image 76 based on the image obtained by subtracting the second intermediate image 89 from the first intermediate image 88.
Next, the operation of the above configuration will be described with reference to the flowchart shown in FIG. 17. First, the object to be measured 15 is set on the stage 11 (step ST100). Then, as shown in FIG. 2, the shutter 36 is moved to the retracted position and the shutter 37 to the entry position, and the input light wave of the object to be measured 15 is guided to the optical complex amplitude image acquisition optical path 20 (step ST110).

The input light wave of the object to be measured 15 passes through the Fourier transform lens 26 and is complex-Fourier-transformed by it. The input light wave after the complex Fourier transform passes through the random diffuser plate 22 and is diffused by it. The diffused input light wave passes through the Fourier transform lens 27 and is complex-Fourier-transformed by it. The input light wave after the complex Fourier transform passes through the transmission portion 35 of the partial transmission mask 29 and reaches the sensor 13. The optical complex amplitude image 54 is thereby output from the sensor 13 (step ST120).

The optical complex amplitude image 54 is sent from the sensor 13 to the image acquisition unit 80 and acquired by it, and is then stored in the storage device 65 by the RW control unit 81 (step ST130).
Subsequently, as shown in FIG. 3, the shutter 36 is moved to the entry position and the shutter 37 to the retracted position, and the input light wave of the object to be measured 15 is guided to the reference image acquisition optical path 21 (step ST140).

The input light wave of the object to be measured 15 passes through the lenses 31 and 32 and is focused by them to a size that fits within the transmission portion 35 of the partial transmission mask 29. The focused input light wave of the object to be measured 15 passes through the transmission portion 35 of the partial transmission mask 29 and reaches the sensor 13. The reference image 61 is thereby output from the sensor 13 (step ST150).

The reference image 61 is sent from the sensor 13 to the image acquisition unit 80 and acquired by it, and is then stored in the storage device 65 by the RW control unit 81 (step ST160).
The RW control unit 81 reads the optical complex amplitude image 54 and the reference image 61 from the storage device 65 and outputs them to the generation unit 82. In the generation unit 82, as shown in FIGS. 11 and 12, the first intermediate image generation unit 85 generates the first intermediate image 88 (step ST170). Further, as shown in FIGS. 13 to 15, the second intermediate image generation unit 86 generates the second intermediate image 89 (step ST180). The first intermediate image 88 and the second intermediate image 89 are output to the subtraction unit 87.

As shown in FIG. 16, the subtraction unit 87 subtracts the second intermediate image 89 from the first intermediate image 88 to generate the output image 76 (step ST190). More specifically, the output image 76 is generated by multiplying the difference between the first intermediate image 88 and the second intermediate image 89 by the normalization constant C and then adding the reference image 61. The output image 76 is sent from the subtraction unit 87 to the RW control unit 81 and stored in the storage device 65 by the RW control unit 81 (step ST200). The output image 76 stored in the storage device 65 is transmitted to another device via the communication unit 68 or displayed on the display 69.
As described above, the super-resolution measurement device 10 includes the real optical system 12, the sensor 13, and the generation unit 82. The real optical system 12 includes the random diffuser plate 22, which diffuses the input light wave of the object to be measured 15, and the partial transmission mask 29, which partially transmits the diffused input light wave. The sensor 13 measures the intensity and phase of the input light wave transmitted through the partial transmission mask 29 and outputs the optical complex amplitude image 54. The generation unit 82 performs, on the computer, calculation processing that reproduces from the optical complex amplitude image 54 the input light wave of the object to be measured 15 including a super-resolution component, which is a component having a resolution exceeding the resolution of the sensor 13, the calculation processing including the phase conjugate function Φ_V1(X, Y) of the transmission function of the random diffuser plate 22, and thereby generates the output image 76. Since the partial transmission mask 29 limits the input light wave of the object to be measured 15 incident on the sensor 13, the image quality deterioration that occurs in the super-resolution output image 76 can be reduced.

With the partial transmission mask 29, the input light wave of the object to be measured 15 diffused by the random diffuser plate 22 is limited. This limited input light wave is then measured with the sensor 13 at the same resolution as when the reference image 61 is obtained. Finer changes in the wavefront of the input light wave can therefore be acquired. In other words, compared with capturing the input image 50 at unit magnification, the measurement is equivalently performed at two or four times higher resolution. The super-resolution component is woven into these finer wavefront changes of the input light wave. Therefore, limiting the input light wave of the object to be measured 15 with the partial transmission mask 29 makes it possible to obtain the super-resolution component efficiently.
The generation unit 82 generates the first intermediate image 88 from the optical complex amplitude image 54 and the second intermediate image 89 from the reference image 61, and generates the output image 76 based on the image obtained by subtracting the second intermediate image 89 from the first intermediate image 88.

As conceptually shown in FIG. 18, the first intermediate image 88 contains a noise component in addition to the super-resolution component. The noise component is mainly due to measurement errors caused by the finite size and resolution of the sensor 13. These measurement errors appear prominently because the input image 50 is diffused by the random diffuser plate 22. In addition to the measurement errors, the noise component also stems from processing errors in the inverse fast Fourier transform unit 95, the calculation unit 96, and the like, and from distortion added to the input light wave transmitted through the transmission portion 35 of the partial transmission mask 29. In contrast, the second intermediate image 89 contains no super-resolution component but does contain the noise component. Therefore, by subtracting the second intermediate image 89 from the first intermediate image 88, only the noise component can be removed from the first intermediate image 88, and the image quality deterioration of the output image 76 can be further reduced.
The real optical system 12 has the optical complex amplitude image acquisition optical path 20, which reaches the sensor 13 via the random diffuser plate 22, and the reference image acquisition optical path 21, which reaches the sensor 13 without passing through the random diffuser plate 22. The optical complex amplitude image 54 and the reference image 61 can therefore be obtained easily and in a short time.

The reference image acquisition optical path 21 branches off from the optical complex amplitude image acquisition optical path 20 on the object-to-be-measured 15 side of the random diffuser plate 22, and merges with it again on the sensor 13 side of the random diffuser plate 22. Therefore, there is no need to prepare two sensors 13, one for the optical complex amplitude image acquisition optical path 20 and one for the reference image acquisition optical path 21; a single sensor 13 suffices.

In the reference image acquisition optical path 21, the lenses 31 and 32 focus the input light wave of the object to be measured 15 to a size that fits within the transmission portion 35 of the partial transmission mask 29. Therefore, there is no need to provide a mechanism that retracts the partial transmission mask 29 from the optical path when the reference image 61 is obtained using the reference image acquisition optical path 21 and returns it to the optical path when the optical complex amplitude image 54 is obtained using the optical complex amplitude image acquisition optical path 20. An increase in the component cost of the real optical system 12 and an increase in its size can be avoided.
The real optical system 12 uses, as its diffusion member, the random diffuser plate 22, which randomly diffuses the input light wave. Therefore, the input light wave of the object to be measured 15 can be made incident on a larger number of pixels 40 of the sensor 13, and the super-resolution component included in the output image 76 can be increased.
[Example 1]
In the following, Example 1, based on a numerical simulation of the super-resolution measurement device 10 of the first embodiment, is shown.
FIG. 19 is a table 140 showing various parameters in Example 1. The light wavelength used is the wavelength of the light emitted from the light source of the sensor 13 and is 532 nm. The number of pixels of the input light wave is 512 × 512 and the pixel size is 20 μm. The computational oversampling rate for the input light wave is 4 and the zero padding rate is 2. The number of pixels of the random diffuser plate 22 is 4096 × 4096, its pixel size is 5 μm, and its phase gradation is 256. The number of pixels of the sensor 13 is 256 × 256 and its pixel size is 5 μm. Since the number of pixels of the input light wave is 512 × 512 and the number of pixels of the sensor is 256 × 256, the input image 50 has four times the resolution of the sensor 13. These parameters are also used in Examples 2 and 3 below.
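For convenience, the parameters of Table 140 can be collected as follows; this is merely a transcription of the table into an illustrative Python dictionary, and the key names are hypothetical.

```python
# Simulation parameters from Table 140 (Example 1); key names are illustrative.
example1_params = {
    "wavelength_nm": 532,
    "input_wave_pixels": (512, 512),
    "input_wave_pixel_size_um": 20,
    "oversampling_rate": 4,
    "zero_padding_rate": 2,
    "diffuser_pixels": (4096, 4096),
    "diffuser_pixel_size_um": 5,
    "diffuser_phase_levels": 256,
    "sensor_pixels": (256, 256),
    "sensor_pixel_size_um": 5,
}
```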
FIG. 20 shows an example of the input image 50 (a_R(x, y)) used in Example 1. FIG. 20A shows the intensity image 50AM of the input image 50, and FIG. 20B shows the phase image 50PH of the input image 50. As shown by the intensity image 50AM, the input image 50 is an image in which minute marks 145 are placed at the central portion and the left corner portion of a white background. Each mark 145 is a black dot surrounded by a black square frame. Reference numeral 146 in FIG. 20A denotes an intensity scale, and reference numeral 147 in FIG. 20B denotes a phase scale.

FIG. 21 shows the optical complex amplitude image 54 (Rect_R{F_R{A_R(X, Y) × Φ_R(X, Y)}}) obtained by measuring the input image 50 shown in FIG. 20 with the sensor 13 using the optical complex amplitude image acquisition optical path 20 as shown in FIG. 2. FIG. 21A shows the intensity image 54AM of the optical complex amplitude image 54, and FIG. 21B shows the phase image 54PH of the optical complex amplitude image 54. The square region in the center indicates the portion that has passed through the transmission portion 35 of the partial transmission mask 29. The transmission portion 35 has a size of (1/8) × (1/8) of the input image 50.
FIG. 22 shows the first intermediate image 88 (a_SR(x, y)) generated by applying the processing shown in FIGS. 11 and 12 to the optical complex amplitude image 54 shown in FIG. 21. FIG. 22A shows the intensity image 88AM of the first intermediate image 88, and FIG. 22B shows the phase image 88PH of the first intermediate image 88. The first intermediate image 88 no longer retains the original form of the input image 50, but it contains the super-resolution component. The first intermediate image 88 also contains a noise component.

FIG. 23 shows the reference image 61 (b_R(x, y)) obtained by measurement with the sensor 13 using the reference image acquisition optical path 21 as shown in FIG. 3. FIG. 23A shows the intensity image 61AM of the reference image 61, and FIG. 23B shows the phase image 61PH of the reference image 61. The reference image 61 is an image that directly embodies the resolution of the sensor 13 and lacks the super-resolution component included in the input image 50. For this reason, compared with the input image 50 shown in FIG. 20, the reference image 61 is an image in which the outlines of the marks 145 are blurred or in which the marks 145 themselves do not appear.
FIG. 24 shows the second intermediate image 89 (b_NSR(x, y)) generated by applying the processing shown in FIGS. 13 to 15 to the reference image 61 shown in FIG. 23. FIG. 24A shows the intensity image 89AM of the second intermediate image 89, and FIG. 24B shows the phase image 89PH of the second intermediate image 89. Like the first intermediate image 88, the second intermediate image 89 no longer retains the original form of the reference image 61, but it contains the noise component.

FIG. 25 shows an image 148 (hereinafter referred to as the subtracted image) obtained by subtracting the second intermediate image 89 shown in FIG. 24 from the first intermediate image 88 shown in FIG. 22. FIG. 25A shows the intensity image 148AM of the subtracted image 148, and FIG. 25B shows the phase image 148PH of the subtracted image 148.

FIG. 26 shows the output image 76 (O(x, y)) obtained by applying the processing shown in FIG. 16. FIG. 26A shows the intensity image 76AM of the output image 76, and FIG. 26B shows the phase image 76PH of the output image 76. FIG. 26 shows that the marks 145 are accurately reproduced in both intensity and phase.
FIG. 27 shows another example of the input image 50 (a_R(x, y)). FIG. 27A shows the intensity image 50AM of the input image 50, and FIG. 27B shows the phase image 50PH of the input image 50. As shown by the intensity image 50AM, the input image 50 is an image in which minute marks 150 are placed at the central portion and the left corner portion of a black background. In the intensity image 50AM, each mark 150 is a square white frame with a black horizontal line drawn through its center. In the phase image 50PH, each mark 150 is a black dot surrounded by a black square frame.

Compared with the input image 50 shown in FIG. 20, the input image 50 shown in FIG. 27 has an extremely small proportion of signal light wave (the portion whose pixel value is not 0) in the input light wave. For such an input image 50, the solid black image shown in FIG. 28 is used as the reference image 61. The optical complex amplitude b_R(x, y) of such a solid black reference image 61 is 0.

FIG. 29 shows the optical complex amplitude image 54 (Rect_R{F_R{A_R(X, Y) × Φ_R(X, Y)}}) obtained by measuring the input image 50 shown in FIG. 27 with the sensor 13 using the optical complex amplitude image acquisition optical path 20 as shown in FIG. 2. FIG. 29A shows the intensity image 54AM of the optical complex amplitude image 54, and FIG. 29B shows the phase image 54PH of the optical complex amplitude image 54. As in FIG. 21, the square region in the center indicates the portion that has passed through the transmission portion 35 of the partial transmission mask 29.
In this case, it is mathematically obvious that the optical complex amplitude b_NSR(x, y) of the second intermediate image 89 is 0, because the reference image 61 is the solid black image (b_R(x, y) = 0) shown in FIG. 28. Therefore, in this case, the processing by the second intermediate image generation unit 86 shown in FIGS. 13 to 15 and the processing by the subtraction unit 87 shown in FIG. 16 are not performed.

FIG. 30 shows the output image 76 (O(x, y)) for the input image 50 of FIG. 27. FIG. 30A shows the intensity image 76AM of the output image 76, and FIG. 30B shows the phase image 76PH of the output image 76. FIG. 30 shows that, as in FIG. 26, the marks 150 of the input image 50 are accurately reproduced in both intensity and phase.

As shown above, it was demonstrated that using the super-resolution measurement device 10 of the first embodiment can reduce the image quality deterioration that occurs in the super-resolution output image 76.
FIGS. 31 and 32 show other examples of the real optical system.

In the real optical system 160 shown in FIG. 31, as in the real optical system 12, a reference image acquisition optical path 162 branches off from an optical complex amplitude image acquisition optical path 161 on the object-to-be-measured 15 side of the random diffuser plate 22. However, the reference image acquisition optical path 162 does not merge with the optical complex amplitude image acquisition optical path 161 on the sensor 13 side of the random diffuser plate 22. Two sensors 13 are provided: a sensor 13A for the optical complex amplitude image acquisition optical path 161 and a sensor 13B for the reference image acquisition optical path 162. The number of measurement pixels (resolution) of the sensor 13A is equal to that of the sensor 13B.

The optical complex amplitude image acquisition optical path 161 has the same configuration as the optical complex amplitude image acquisition optical path 20 of the real optical system 12, except that the beam splitter 28 and the shutter 36 are not arranged. In contrast, the reference image acquisition optical path 162 has a fundamentally different configuration from the reference image acquisition optical path 21 of the real optical system 12. More specifically, the mirror 33 and the shutter 37 are not arranged in the reference image acquisition optical path 162. Further, lenses 163 and 164 are not lenses that, like the lenses 31 and 32 of the reference image acquisition optical path 21 of the real optical system 12, focus the input light wave of the object to be measured 15 to a size that fits within the transmission portion 35 of the partial transmission mask 29.
According to the real optical system 160, unlike the real optical system 12, there is no need to provide the shutters 36 and 37 or a mechanism for moving them between the entry position and the retracted position. Lenses with the special function of the lenses 31 and 32 are also unnecessary. Furthermore, the optical complex amplitude image 54 and the reference image 61 can be obtained simultaneously.

In the real optical system 170 shown in FIG. 32, the optical complex amplitude image acquisition optical path 171 and the reference image acquisition optical path 172 share the same path. The random diffuser plate 22 and the partial transmission mask 29 are moved between an entry position, indicated by solid lines, in which they are inserted into the optical complex amplitude image acquisition optical path 171 and the reference image acquisition optical path 172, and a retracted position, indicated by broken lines, in which they are withdrawn from these optical paths. When the random diffuser plate 22 and the partial transmission mask 29 are in the entry position as shown in FIG. 32, the path functions as the optical complex amplitude image acquisition optical path 171. When the random diffuser plate 22 and the partial transmission mask 29 are in the retracted position, the path functions as the reference image acquisition optical path 172.

According to the real optical system 170, optical members such as the beam splitters 25 and 28 and the mirrors 30 and 33 of the real optical system 12 are not needed at all. In addition, unlike the real optical system 12, there is no need to provide the shutters 36 and 37 or a mechanism for moving them between the entry position and the retracted position. Furthermore, as with the real optical system 12, a single sensor 13 suffices.
If the reference image 61 can be prepared in advance, the step of obtaining the reference image 61 using the reference image acquisition optical path 21 is unnecessary. One case in which the reference image 61 can be prepared in advance is when the solid black image shown in FIG. 28 is used as the reference image 61. Another conceivable case is using the super-resolution measurement device 10 for product defect inspection that detects fine scratches on the surface of a mirror-finished product which cannot be discerned at the resolution of the sensor 13. In this case, design data of the scratch-free product exist, so the design data can be reused as the reference image 61.

Further, when a real object that can produce the reference image 61 is available as the object to be measured 15, such a real object may be measured as the object to be measured 15 using, for example, the optical complex amplitude image acquisition optical path 20 of the real optical system 12. The sensor 13 can thereby obtain an image represented by Rect_R{F_R{B_R(X, Y) × Φ_R(X, Y)}}. This image is equivalent to the image 120 represented by Rect_V{F_V2{B_R(X, Y) × Φ_V2(X, Y)}} output from the calculation unit 113 of the second intermediate image generation unit 86.

In this case as well, the step of obtaining the reference image 61 using the reference image acquisition optical path 21 is unnecessary. Moreover, since the sensor 13 can obtain the image represented by Rect_R{F_R{B_R(X, Y) × Φ_R(X, Y)}} as described above, the processing of the fast Fourier transform unit 110, the calculation unit 111, the fast Fourier transform unit 112, and the calculation unit 113 among the processing of the second intermediate image generation unit 86 shown in FIGS. 13 to 15 is unnecessary.
The object to be measured 15 may be a real image as illustrated in FIG. 20 and the like, or may be an image displayed on a display.

The random diffuser plate 22 is not limited to one on which the uneven surface 23 is formed. It may be one whose surface is coated with a resin containing fine particles; any member that can sufficiently diffuse the input light wave of the object to be measured 15 may be used. The Fourier transform lenses 26 and 27 may also be omitted.

The partial transmission mask 29 may be moved between an entry position in which it is inserted into the optical complex amplitude image acquisition optical path 20 and a retracted position in which it is withdrawn from the optical complex amplitude image acquisition optical path 20. In this case, the partial transmission mask 29 is moved to the entry position when the optical complex amplitude image 54 is obtained using the optical complex amplitude image acquisition optical path 20, and to the retracted position when the reference image 61 is obtained using the reference image acquisition optical path 21. This eliminates the need for the lenses 31 and 32 to focus the input light wave of the object to be measured 15 to a size that fits within the transmission portion 35 of the partial transmission mask 29.
The second to fourth embodiments described below are configurations that eliminate the need for the reference image 61 itself.
[Second Embodiment]
In the second embodiment shown in FIGS. 33 to 35, a spatial light modulator 182 whose diffusion characteristics for the input light wave can be changed is used as the diffusion member.
In FIG. 33, the real optical system 180 of the second embodiment has a Fourier transform lens 26, a beam splitter 181, a spatial light modulator 182, a Fourier transform lens 27, and a partial transmission mask 29. The beam splitter 181 transmits the input light wave of the object to be measured 15 that has passed through the Fourier transform lens 26 toward the spatial light modulator 182. The beam splitter 181 also reflects the input light wave, phase-modulated by the phase pattern displayed on the spatial light modulator 182, through 90° toward the Fourier transform lens 27.
The spatial light modulator 182, also called an SLM (Spatial Light Modulator), is composed of, for example, an LCD (Liquid Crystal Display), an LCOS (Liquid Crystal on Silicon) device, a DMD (Digital Mirror Device), or the like. The phase pattern displayed on the spatial light modulator 182 can be changed in various ways.
In FIG. 34, the generation unit 185 of the second embodiment, like the generation unit 82 of the first embodiment, performs on a computer a calculation process that reproduces, from the optical complex amplitude image 54, the input light wave of the object to be measured 15 including the super-resolution component, that is, the component whose resolution exceeds the resolution of the sensor 13, and thereby generates the output image 76. In other words, the generation unit 185 is an example of the "virtual optical system" according to the technique of the present disclosure.
The generation unit 185 has an intermediate image generation unit 186 and an addition averaging unit 187. The intermediate image generation unit 186 generates, from the optical complex amplitude image 54, an intermediate image 188 (a_SR(x, y)) corresponding to the first intermediate image 88 of the first embodiment, using the phase conjugate function of the transmission function of the spatial light modulator 182 in place of the transmission function Φ_V1(X, Y) of the virtual random diffusion plate 107 used in the first intermediate image generation unit 85 of the first embodiment. The intermediate image generation unit 186 outputs the intermediate image 188 to the addition averaging unit 187. The addition averaging unit 187 adds and averages the intermediate images 188, and outputs the averaged image as the output image 76.
As shown in FIG. 35, in the second embodiment, the phase pattern displayed on the spatial light modulator 182 is switched through phase patterns 1, 2, 3, ..., N. Each time the displayed phase pattern is changed, the sensor 13 outputs an optical complex amplitude image 54_1, 54_2, 54_3, ..., 54_N, and the intermediate image generation unit 186 generates an intermediate image 188_1, 188_2, 188_3, ..., 188_N. As indicated by reference numeral 189, the addition averaging unit 187 adds the intermediate images 188_1, 188_2, 188_3, ..., 188_N, and then, as indicated by reference numeral 190, divides the sum by N. The optical complex amplitude of the output image 76 in this case can be expressed as
O(x, y) = (1/N) × Σ(a_SR(x, y)).
In the second embodiment, the calculation process for generating the output image 76 performed by the generation unit 185 corresponds to the process of generating the intermediate images 188 in the intermediate image generation unit 186 and the process of calculating their addition average in the addition averaging unit 187. The calculation process performed by the generation unit 185 includes the phase conjugate function of the transmission function of the spatial light modulator 182, which is an example of the "phase conjugate function of the transmission function of the diffusion member".
As described above, in the second embodiment, the spatial light modulator 182, whose diffusion characteristics for the input light wave can be changed, is used as the diffusion member. The sensor 13 measures the optical complex amplitudes of a plurality of input light waves whose diffusion characteristics have been changed by the spatial light modulator 182, and outputs a plurality of optical complex amplitude images 54_1 to 54_N with different diffusion characteristics of the input light wave. The generation unit 185 generates the intermediate images 188_1 to 188_N from the respective optical complex amplitude images 54_1 to 54_N, and uses the image obtained by adding and averaging the intermediate images 188_1 to 188_N as the output image 76.
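For illustration only, the following is a minimal NumPy sketch of this measure-and-average flow. It reduces the virtual optical system to a single forward Fourier transform, multiplication by the phase conjugate of the displayed phase pattern, and an inverse Fourier transform, and it omits the partial transmission mask and the other processing stages described for the first embodiment; all function names, array sizes, and the stand-in data are hypothetical.

```python
import numpy as np

def intermediate_image(complex_amplitude, slm_phase):
    # Simplified virtual propagation: forward Fourier transform (virtual Fourier
    # transform lens), multiplication by the phase conjugate of the SLM transmission
    # function exp(i*phi), then an inverse Fourier transform back to the image plane.
    spectrum = np.fft.fftshift(np.fft.fft2(complex_amplitude))
    undiffused = spectrum * np.exp(-1j * slm_phase)   # conj(exp(i*phi)) = exp(-i*phi)
    return np.fft.ifft2(np.fft.ifftshift(undiffused))

def output_image(measured_images, slm_phases):
    # O(x, y) = (1/N) * sum over k of a_SR_k(x, y)
    stack = [intermediate_image(m, p) for m, p in zip(measured_images, slm_phases)]
    return np.mean(stack, axis=0)

# Example with random stand-in data (N = 4 phase patterns, 256 x 256 pixels).
rng = np.random.default_rng(0)
phases = [rng.uniform(0.0, 2.0 * np.pi, (256, 256)) for _ in range(4)]
measurements = [np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (256, 256))) for _ in range(4)]
result = output_image(measurements, phases)
print(result.shape, result.dtype)   # (256, 256) complex128
```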
Every time the phase pattern displayed on the spatial light modulator 182 is changed, the number and/or position of the pixels 40 of the sensor 13 on which the input light wave of the object to be measured 15 is incident can be changed. As a result, the optical complex amplitude images 54_1 to 54_N, and hence the intermediate images 188_1 to 188_N, contain super-resolution components with slightly different contents. The output image 76 obtained by adding and averaging the intermediate images 188_1 to 188_N therefore contains a clearer super-resolution component than the output image 76 obtained with a single random diffusion plate 22. Accordingly, the image quality deterioration in the output image 76 can be sufficiently reduced without using the reference image 61.
[Example 2]
FIG. 36 shows an example of the phase image 76PH of the output image 76 obtained by numerical simulation of the second embodiment (number of phase pattern changes N = 200). As can be seen from this phase image 76PH, the image quality deterioration in the output image 76 is sufficiently reduced even without using the reference image 61.
[Third Embodiment]
In the third embodiment shown in FIGS. 37 and 38, a random diffusion plate 201 provided with a light-shielding portion 202 in its central portion is used as the diffusion member.
In FIG. 37, the real optical system 200 of the third embodiment has a Fourier transform lens 26, a random diffusion plate 201, a Fourier transform lens 27, and a partial transmission mask 29. As also shown in FIG. 38, a light-shielding portion 202 is provided in the central portion of the random diffusion plate 201. The light-shielding portion 202 cuts the low spatial frequency components produced by the Fourier transform lens 26, so that only the high spatial frequency components produced by the Fourier transform lens 26 are diffused. In this case, the optical complex amplitude image 54 output from the sensor 13 lacks the low spatial frequency components of the input light wave of the object to be measured 15. Reference numeral 203 denotes an uneven surface.
The generation unit of the third embodiment is the same as the first intermediate image generation unit 85 of the first embodiment, except that it uses the phase conjugate function of the transmission function of the random diffusion plate 201 in place of the transmission function Φ_V1(X, Y) of the virtual random diffusion plate 107; its illustration is therefore omitted. Like the generation unit 82 of the first embodiment and the generation unit 185 of the second embodiment, the generation unit of the third embodiment performs on a computer a calculation process that reproduces, from the optical complex amplitude image 54, the input light wave of the object to be measured 15 including the super-resolution component, that is, the component whose resolution exceeds the resolution of the sensor 13, and thereby generates the output image 76. In other words, the generation unit of the third embodiment is an example of the "virtual optical system" according to the technique of the present disclosure.
The generation unit of the third embodiment applies substantially the same processing as the first intermediate image generation unit 85 to the optical complex amplitude image 54 output from the sensor 13, and outputs the resulting image as the output image 76. In the third embodiment, the calculation process for generating the output image 76 performed by the generation unit corresponds to this processing, which is substantially the same as that of the first intermediate image generation unit 85, applied to the optical complex amplitude image 54. The calculation process performed by the generation unit of the third embodiment includes the phase conjugate function of the transmission function of the random diffusion plate 201, which is an example of the "phase conjugate function of the transmission function of the diffusion member".
As described above, the third embodiment uses as the diffusion member a random diffusion plate 201 whose central portion is provided with a light-shielding portion 202 that cuts the low spatial frequency components produced by the Fourier transform lens 26 arranged on the object to be measured 15 side.
Here, the super-resolution component is considered to be contained hardly at all in the low spatial frequency components and mostly in the high spatial frequency components. For example, in the input image 50 shown in FIG. 20, the white background without the mark 145 is considered to become low spatial frequency components through the Fourier transform lens 26, and the portion with the mark 145 is considered to become high spatial frequency components through the Fourier transform lens 26. Therefore, if the low spatial frequency components, which are considered to contribute little to the super-resolution component, are cut by the light-shielding portion 202, most of the resolving power of the sensor 13 can be devoted to the high spatial frequency components. Accordingly, the image quality deterioration in the output image 76 can be sufficiently reduced without using the reference image 61.
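Purely as a numerical illustration of this point, the light-shielding portion 202 can be mimicked by zeroing a central disc in the Fourier plane, as in the sketch below; the stop radius, field size, and array contents are hypothetical, and the diffusion and partial transmission mask stages are omitted.

```python
import numpy as np

def block_low_frequencies(field, stop_radius):
    # Model of the light-shielding portion 202: zero out a central disc in the
    # Fourier plane so that only high spatial frequency components remain.
    ny, nx = field.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    FX, FY = np.meshgrid(fx, fy)
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    spectrum[np.hypot(FX, FY) < stop_radius] = 0.0   # central stop
    return np.fft.ifft2(np.fft.ifftshift(spectrum))

# Example: a flat white field with one small bright mark.
field = np.ones((256, 256), dtype=complex)
field[120:136, 120:136] = 2.0
high_pass = block_low_frequencies(field, stop_radius=0.05)
print(np.abs(high_pass).max())   # the mark's edges survive; the flat background is suppressed
```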
[Example 3]
FIG. 39 shows the input image 50 used in the numerical simulation of the third embodiment. FIG. 39A shows the intensity image 50AM of the input image 50, and FIG. 39B shows its phase image 50PH. Like the input image 50 shown in FIG. 20, the input image 50 is an image in which marks 210 are placed at the center and in the left corner of a white background.
FIG. 40 shows the high spatial frequency component 205 of the input image 50 shown in FIG. 39. FIG. 40A shows the intensity image 205AM of the high spatial frequency component 205, and FIG. 40B shows its phase image 205PH. The marks 210 can also be identified in the high spatial frequency component 205.
FIG. 41 shows the output image 76 for the input image 50 shown in FIG. 39. FIG. 41A shows the intensity image 76AM of the output image 76, and FIG. 41B shows its phase image 76PH. The output image 76 shows that the marks 210 are reproduced accurately.
[Fourth Embodiment]
In the fourth embodiment shown in FIGS. 42 to 44, a random diffusion plate 221 provided with an aperture 223 in its central portion is used as the diffusion member.
In FIG. 42, the real optical system 220 of the fourth embodiment has a Fourier transform lens 26, a random diffusion plate 221, a condenser lens 222, a Fourier transform lens 27, and a partial transmission mask 29. As also shown in FIG. 43, an aperture 223 is provided in the central portion of the random diffusion plate 221. Through this aperture 223, the low spatial frequency components produced by the Fourier transform lens 26 pass without being diffused, while only the high spatial frequency components produced by the Fourier transform lens 26 are diffused. The condenser lens 222 focuses the low spatial frequency components that have passed through the aperture 223 onto the transmission portion 35 of the partial transmission mask 29. In this case, the optical complex amplitude image 54 output from the sensor 13 contains both the diffused high spatial frequency components and the undiffused low spatial frequency components of the input light wave of the object to be measured 15. Reference numeral 224 denotes an uneven surface.
In FIG. 44, the generation unit 230 of the fourth embodiment, like the generation unit 82 of the first embodiment, the generation unit 185 of the second embodiment, and the generation unit of the third embodiment, performs on a computer a calculation process that reproduces, from the optical complex amplitude image 54, the input light wave of the object to be measured 15 including the super-resolution component, that is, the component whose resolution exceeds the resolution of the sensor 13, and thereby generates the output image 76. In other words, the generation unit 230 is an example of the "virtual optical system" according to the technique of the present disclosure.
The generation unit 230 has a component separation unit 231, a first processing unit 232, a second processing unit 233, and a synthesis unit 234. The component separation unit 231 separates the high spatial frequency component 205 and the low spatial frequency component 235 of the optical complex amplitude image 54 output from the sensor 13, and outputs the high spatial frequency component 205 to the first processing unit 232 and the low spatial frequency component 235 to the second processing unit 233.
The first processing unit 232 applies substantially the same processing as the first intermediate image generation unit 85 to the high spatial frequency component 205 of the optical complex amplitude image 54, and outputs the resulting processed high spatial frequency component 205PR to the synthesis unit 234. Similarly, the second processing unit 233 applies substantially the same processing as the first intermediate image generation unit 85 to the low spatial frequency component 235 of the optical complex amplitude image 54, and outputs the resulting processed low spatial frequency component 235PR to the synthesis unit 234. The first processing unit 232 and the second processing unit 233 use the phase conjugate function of the transmission function of the random diffusion plate 221 in place of the transmission function Φ_V1(X, Y) of the virtual random diffusion plate 107.
The synthesis unit 234 synthesizes the processed high spatial frequency component 205PR and the processed low spatial frequency component 235PR, and outputs the synthesized image as the output image 76.
In the fourth embodiment, the calculation process for generating the output image 76 performed by the generation unit 230 corresponds to the process of separating the high spatial frequency component 205 and the low spatial frequency component 235 of the optical complex amplitude image 54 in the component separation unit 231; the processing, substantially the same as that of the first intermediate image generation unit 85, applied to the high spatial frequency component 205 in the first processing unit 232 and to the low spatial frequency component 235 in the second processing unit 233; and the process of synthesizing the processed high spatial frequency component 205PR and the processed low spatial frequency component 235PR in the synthesis unit 234. The calculation process performed by the generation unit 230 includes the phase conjugate function of the transmission function of the random diffusion plate 221, which is an example of the "phase conjugate function of the transmission function of the diffusion member".
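As a rough computational analogue only, the separate-process-synthesize flow of the generation unit 230 might be sketched as follows; the separation is a simple Fourier-domain threshold, the per-component processing of the first and second processing units is collapsed into a single placeholder multiplication by the phase conjugate of a diffuser phase, and all names, cutoff values, and stand-in arrays are hypothetical.

```python
import numpy as np

def split_components(image, cutoff):
    # Component separation unit 231 analogue: split the measured complex amplitude
    # image into low and high spatial frequency parts in the Fourier domain.
    ny, nx = image.shape
    FX, FY = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(nx)),
                         np.fft.fftshift(np.fft.fftfreq(ny)))
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    low_mask = np.hypot(FX, FY) < cutoff
    low = np.fft.ifft2(np.fft.ifftshift(np.where(low_mask, spectrum, 0)))
    high = np.fft.ifft2(np.fft.ifftshift(np.where(low_mask, 0, spectrum)))
    return high, low

def process(component, diffuser_phase):
    # Placeholder for the first/second processing units 232 and 233: multiply by the
    # phase conjugate of a diffuser transmission function in the Fourier plane.
    spectrum = np.fft.fftshift(np.fft.fft2(component))
    return np.fft.ifft2(np.fft.ifftshift(spectrum * np.exp(-1j * diffuser_phase)))

rng = np.random.default_rng(1)
measured = np.exp(1j * rng.uniform(0, 2 * np.pi, (256, 256)))
diffuser_phase = rng.uniform(0, 2 * np.pi, (256, 256))
high, low = split_components(measured, cutoff=0.05)
output = process(high, diffuser_phase) + process(low, diffuser_phase)   # synthesis unit 234 analogue
print(output.shape)
```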
As described above, the fourth embodiment uses as the diffusion member a random diffusion plate 221 whose central portion is provided with an aperture 223 that transmits the low spatial frequency components produced by the Fourier transform lens 26 arranged on the object to be measured 15 side, and the real optical system 220 is provided with a condenser lens 222 that focuses the low spatial frequency components transmitted through the aperture 223 onto the transmission portion 35 of the partial transmission mask 29.
Although the low spatial frequency components produced by the Fourier transform lens 26 contain almost no super-resolution component, they are still part of the input light wave of the object to be measured 15. Therefore, in the fourth embodiment, the low spatial frequency components that were cut in the third embodiment are taken into the sensor 13 through the aperture 223 and the condenser lens 222. However, if the low spatial frequency components were diffused by the random diffusion plate 221 in the same way as the high spatial frequency components, the already limited resolving power of the sensor 13 would be wasted on the low spatial frequency components. In the fourth embodiment, therefore, the aperture 223 that transmits the low spatial frequency components is formed in the random diffusion plate 221, and the low spatial frequency components are guided to the sensor 13 without being diffused. Accordingly, the image quality deterioration in the output image 76 can be sufficiently reduced without using the reference image 61.
The partial transmission mask 29 with the transmission portion 35 formed as a hole, shown in each of the above embodiments, is only an example. As in the partial transmission mask 240 shown in FIG. 45, the periphery may be coated with a light-shielding material while the central portion is left uncoated to serve as a transmission portion 241.
The transmission portion does not necessarily have to be formed in the central portion of the partial transmission mask. For example, as in the partial transmission mask 250 shown in FIG. 46, the transmission portion 251 may be offset from the central portion.
The technique of the present disclosure may also combine the various embodiments and/or the various modification examples described above as appropriate. It also goes without saying that, without being limited to the above embodiments, various configurations can be adopted without departing from the gist.
The description and illustrations given above are a detailed explanation of the portions related to the technique of the present disclosure and are merely an example of the technique of the present disclosure. For example, the above description of the configurations, functions, actions, and effects is a description of an example of the configurations, functions, actions, and effects of the portions related to the technique of the present disclosure. Needless to say, unnecessary portions may be deleted, and new elements may be added or substituted, in the description and illustrations given above without departing from the gist of the technique of the present disclosure. In addition, to avoid complication and to facilitate understanding of the portions related to the technique of the present disclosure, explanations of common technical knowledge and the like that do not require particular explanation to enable implementation of the technique of the present disclosure are omitted from the description and illustrations given above.
In this specification, "A and/or B" is synonymous with "at least one of A and B". That is, "A and/or B" means that it may be only A, only B, or a combination of A and B. The same concept as "A and/or B" also applies in this specification when three or more matters are expressed linked by "and/or".
All documents, patent applications, and technical standards described in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, or technical standard were specifically and individually indicated to be incorporated by reference.
Claims (13)
- 1. A super-resolution measuring device comprising: a real optical system including a diffusion member that diffuses an input light wave of an object to be measured, and a partial transmission mask that partially transmits the diffused input light wave; a sensor that measures the intensity and phase of the input light wave transmitted through the partial transmission mask and outputs an optical complex amplitude image; and a virtual optical system that generates an output image by performing, on a computer, a calculation process that reproduces, from the optical complex amplitude image, the input light wave of the object to be measured including a super-resolution component, which is a component whose resolution exceeds the resolution of the sensor, the calculation process including a phase conjugate function of a transmission function of the diffusion member.
- 2. The super-resolution measuring device according to claim 1, wherein the virtual optical system generates a first intermediate image from the optical complex amplitude image, generates a second intermediate image from a reference image of the object to be measured, and generates the output image based on an image obtained by subtracting the second intermediate image from the first intermediate image.
- 3. The super-resolution measuring device according to claim 2, wherein the real optical system has an optical complex amplitude image acquisition optical path for obtaining the optical complex amplitude image, which reaches the sensor via the diffusion member, and a reference image acquisition optical path for obtaining the reference image, which reaches the sensor without passing through the diffusion member.
- 4. The super-resolution measuring device according to claim 3, wherein the reference image acquisition optical path branches off from the optical complex amplitude image acquisition optical path on the object-to-be-measured side of the diffusion member.
- 5. The super-resolution measuring device according to claim 4, wherein the reference image acquisition optical path merges with the optical complex amplitude image acquisition optical path on the sensor side of the diffusion member.
- 6. The super-resolution measuring device according to claim 5, wherein, in the reference image acquisition optical path, the input light wave is focused to a size that fits within a transmission portion of the partial transmission mask.
- 7. The super-resolution measuring device according to claim 3, wherein the optical complex amplitude image acquisition optical path also serves as the reference image acquisition optical path, the diffusion member and the partial transmission mask are moved between an inserted position, in which they are inserted into the optical complex amplitude image acquisition optical path, and a retracted position, in which they are withdrawn from the optical complex amplitude image acquisition optical path, and the optical complex amplitude image acquisition optical path functions as the reference image acquisition optical path when the diffusion member and the partial transmission mask are in the retracted position.
- 8. The super-resolution measuring device according to any one of claims 1 to 7, wherein the diffusion member is a random diffusion plate that randomly diffuses the input light wave.
- 9. The super-resolution measuring device according to claim 1, wherein the diffusion member is a spatial light modulator capable of changing diffusion characteristics of the input light wave, the sensor measures the intensity and phase of a plurality of input light waves whose diffusion characteristics have been changed by the spatial light modulator and outputs a plurality of optical complex amplitude images with different diffusion characteristics of the input light wave, and the virtual optical system generates an intermediate image from each of the plurality of optical complex amplitude images and uses an image obtained by adding and averaging the plurality of intermediate images as the output image.
- 10. The super-resolution measuring device according to claim 1, wherein the real optical system includes a pair of Fourier transform lenses arranged on the object-to-be-measured side of the diffusion member and on the sensor side of the diffusion member.
- 11. The super-resolution measuring device according to claim 10, wherein the diffusion member is a random diffusion plate that randomly diffuses the input light wave and whose central portion is provided with a light-shielding portion that cuts low spatial frequency components produced by the Fourier transform lens arranged on the object-to-be-measured side.
- 12. The super-resolution measuring device according to claim 10, wherein the diffusion member is a random diffusion plate that randomly diffuses the input light wave and whose central portion is provided with an aperture that transmits low spatial frequency components produced by the Fourier transform lens arranged on the object-to-be-measured side, and the real optical system is provided with a condenser lens that focuses the low spatial frequency components transmitted through the aperture onto a transmission portion of the partial transmission mask.
- 13. A method of operating a super-resolution measuring device, the method comprising: using a real optical system including a diffusion member that diffuses an input light wave of an object to be measured, and a partial transmission mask that partially transmits the diffused input light wave; measuring, with a sensor, the intensity and phase of the input light wave transmitted through the partial transmission mask to output an optical complex amplitude image; and generating, in a virtual optical system, an output image by performing, on a computer, a calculation process that reproduces, from the optical complex amplitude image, the input light wave of the object to be measured including a super-resolution component, which is a component whose resolution exceeds the resolution of the sensor, the calculation process including a phase conjugate function of a transmission function of the diffusion member.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021543950A JP7195556B2 (en) | 2019-09-03 | 2020-05-01 | Super-resolution measurement device and method of operating the super-resolution measurement device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-160696 | 2019-09-03 | | |
JP2019160696 | 2019-09-03 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021044670A1 true WO2021044670A1 (en) | 2021-03-11 |
Family
ID=74853161
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/018465 WO2021044670A1 (en) | 2019-09-03 | 2020-05-01 | Super-resolution measurement device and super-resolution measurement device operation method |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP7195556B2 (en) |
WO (1) | WO2021044670A1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006017488A (en) * | 2004-06-30 | 2006-01-19 | Nikon Corp | Microscope observation method, microscope device, and image processing device |
WO2006137326A1 (en) * | 2005-06-20 | 2006-12-28 | Matsushita Electric Industrial Co., Ltd. | 2-dimensional image display device, illumination light source, and exposure illumination device |
JP2015036799A (en) * | 2013-08-15 | 2015-02-23 | 国立大学法人北海道大学 | Complex amplitude image reproduction device and complex amplitude image reproduction method, and scattered phase image creation device and scattered phase image creation method |
WO2015068834A1 (en) * | 2013-11-11 | 2015-05-14 | 国立大学法人北海道大学 | Apparatus and method for generating complex amplitude image |
US20190049896A1 (en) * | 2017-08-08 | 2019-02-14 | National Taiwan Normal University | Method and Apparatus of Structured Illumination Digital Holography |
Non-Patent Citations (3)
Title |
---|
OKAMOTO, Atsushi et al.: "Control and Optical Information Processing Using Spatial Optical Modulator", Proceedings of the 2019 IEICE General Conference (Electronics 1), 2019 *
GOTO, Yuta et al.: "Experiment on digital image multiplexing/demultiplexing using virtual phase conjugation for high density holographic memory", ITE Technical Report, vol. 42, no. 4, 2018, pages 147-152 *
GOTO, Yuta et al.: "Numerical analysis of optical tomographic imaging using virtual phase conjugation", ITE Technical Report, vol. 39, no. 7, 2015, pages 1-13, 9-14 *
Also Published As
Publication number | Publication date |
---|---|
JP7195556B2 (en) | 2022-12-26 |
JPWO2021044670A1 (en) | 2021-03-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4664031B2 (en) | Optical pattern forming method and apparatus, and optical tweezers | |
JP6179902B2 (en) | Digital holography apparatus and digital holography reproduction method | |
JP6230047B2 (en) | Complex amplitude image display method, scattered phase image generation apparatus, and scattered phase image generation method | |
WO2014050141A1 (en) | Method for measuring optical phase, device for measuring optical phase, and optical communication device | |
Bernet et al. | Lensless digital holography with diffuse illumination through a pseudo-random phase mask | |
Kelly et al. | Digital holographic capture and optoelectronic reconstruction for 3D displays | |
KR20080031126A (en) | Recording device and phase modulation device | |
WO2021044670A1 (en) | Super-resolution measurement device and super-resolution measurement device operation method | |
Hofmann et al. | Extended holographic wave front printer setup employing two spatial light modulators | |
JP2013195802A (en) | Holographic stereogram recording device and method | |
Mastiani et al. | Practical considerations for high-fidelity wavefront shaping experiments | |
CN205483259U (en) | Way is from reference light interferometer altogether | |
JP4826635B2 (en) | Optical regeneration method and optical regeneration apparatus | |
JP6614636B2 (en) | Method for manufacturing hologram screen | |
KR20220101940A (en) | Apparatus for holographic image evaluation and method thereof | |
JP4936959B2 (en) | Hologram information recording / reproducing method, hologram information recording / reproducing apparatus, hologram information recording apparatus, and hologram information reproducing apparatus | |
JP5746606B2 (en) | Hologram reproducing method, apparatus and hologram recording / reproducing apparatus | |
JP6436753B2 (en) | Phase difference interference microscope | |
JP2013195801A (en) | Holographic stereogram recording device and method | |
González Hernández et al. | High sampling rate single-pixel digital holography system employing a DMD and encoder phase-encoded patterns | |
JP6554411B2 (en) | Hologram recording apparatus and method | |
KR20180138514A (en) | Method and apparatus for recording and projecting 3 dimensional holographic image using scattering layer | |
JP6960143B2 (en) | Hologram recording equipment and hologram manufacturing equipment | |
JP2009009629A (en) | Hologram recording device, hologram reproducing device, hologram recording method and hologram reproducing method | |
JP2022112904A (en) | Shape measurement method and interferometer |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20860859; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2021543950; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 20860859; Country of ref document: EP; Kind code of ref document: A1 |