WO2023189793A1 - Medical observation device and information processing device - Google Patents
- Publication number: WO2023189793A1 (PCT application PCT/JP2023/010801)
- Authority: WIPO (PCT)
- Prior art keywords: light, medical observation, observation device, image, image sensor
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/102—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for optical coherence tomography [OCT]
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
- A61B3/1225—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes using coherent radiation
- A61B3/13—Ophthalmic microscopes
Definitions
- The present disclosure relates to a medical observation device and an information processing device.
- OCT: Optical Coherence Tomography
- FFOCT: Full-Field Optical Coherence Tomography
- Conventional OCT typically uses a light source that emits coherent light (hereinafter also referred to as a coherent light source), in which the phase relationship between light waves at any two points within a beam remains constant over time. Therefore, when observing a subject that may introduce aberrations, such as a human eye, the image quality (for example, resolution) of the observed image may deteriorate under the influence of those aberrations.
- Therefore, the present disclosure proposes a medical observation device and an information processing device capable of suppressing deterioration in image quality.
- A medical observation device according to the present disclosure includes: a light source that emits at least spatially incoherent light; an image sensor that acquires an image of the light emitted from the light source and reflected by a subject; and a signal processing unit that corrects, based on the wavefront aberration of the light reflected by the subject, the signal amount determined from the first image data acquired by the image sensor.
- FIG. 1 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to a first embodiment.
- FIG. 2 is a diagram for explaining an example of reflective objects existing in the Z direction (depth direction).
- FIG. 3 is a diagram showing an example of the amplitude of the signal intensity in the Z direction obtained at each pixel.
- FIG. 4 is a block diagram for explaining correction processing according to the first embodiment.
- FIG. 5 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to a second embodiment.
- FIG. 6 is a block diagram for explaining correction processing according to the second embodiment.
- FIG. 7 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to a third embodiment.
- FIG. 8 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to a fourth embodiment.
- FIG. 9 is a schematic diagram showing an example of a schematic configuration of an observation device according to a fifth embodiment.
- FIG. 10 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to a sixth embodiment.
- FIG. 11 is a diagram showing an example of a drive area of an image sensor according to the sixth embodiment.
- FIG. 12 is a diagram showing an example of the volume acquisition time in each case where the number of pixels in the Y direction of the drive area is changed according to the sixth embodiment.
- FIG. 13 is a schematic diagram showing a modification of the medical observation device according to the sixth embodiment.
- FIG. 14 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to a seventh embodiment.
- FIG. 15 is a diagram showing an example of an image rotator according to the seventh embodiment.
- FIG. 16 is a diagram showing another example of the image rotator according to the seventh embodiment.
- FIG. 17 is a schematic diagram showing a modification of the medical observation device according to the seventh embodiment.
- FIG. 18 is a diagram showing an example of a drive area according to an eighth embodiment.
- FIG. 19 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to a ninth embodiment.
- FIG. 20 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to a tenth embodiment.
- FIG. 21 is a partially enlarged view of a pixel array section in a polarization image sensor according to the tenth embodiment.
- FIG. 22 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to an eleventh embodiment.
- FIG. 23 is a block diagram illustrating an example of a hardware configuration according to an embodiment of the present disclosure.
- 1. First embodiment
- 1.1 Schematic configuration example of medical observation device
- 1.2 Regarding generation of tomographic image data
- 1.3 Correction processing
- 1.4 Summary
- 2. Second embodiment
- 2.1 Schematic configuration example of medical observation device
- 2.2 Correction processing
- 3. Third embodiment
- 4. Fourth embodiment
- 5. Fifth embodiment
- 6. Sixth embodiment
- 6.1 Schematic configuration example of medical observation device
- 6.2 Summary
- 6.3 Modifications
- 7. Seventh embodiment
- 7.1 Modification
- 8. Eighth embodiment
- 9. Ninth embodiment
- 10. Tenth embodiment
- 11. Eleventh embodiment
- 12.
- In the following, a case will be described in which each embodiment is applied, as a medical observation device and an information processing device, to a surgical microscope, an ophthalmoscope, or the like used in human eye surgery or diagnosis, such as glaucoma treatment.
- However, the present disclosure is not limited thereto, and can target various observation devices in which the light (reflected light, transmitted light, scattered light, etc.) from the object to be examined (subject) may have aberrations.
- FIG. 1 is a schematic diagram showing a schematic configuration example of a medical observation device according to the present embodiment.
- The medical observation device 1 includes an incoherent light source 101, beam splitters 102 and 103, objective lenses 104 and 105, a vibration mechanism 106, imaging lenses 107 and 109, a wavefront sensor 108, an image sensor 110, and a signal processing unit 120.
- In this embodiment, the subject 130 to be examined is a human eye, and a three-dimensional tomographic image of the human eye is acquired for the purpose of ophthalmic surgery or ophthalmic examination.
- The incoherent light source 101 is a light source that emits at least spatially incoherent light (hereinafter also referred to as spatially incoherent light), and may be any of various light sources capable of emitting spatially incoherent light, such as a halogen lamp.
- The incoherent light source 101 includes, for example, a collimator lens that collimates the light emitted from the light source.
- The spatially incoherent light emitted from the incoherent light source 101 (hereinafter also referred to as emitted light, to distinguish it from other light) enters the beam splitter 102 and is split into two optical paths.
- The beam splitter 102 may be configured using an optical element, such as a half mirror, that transmits part of the light and reflects at least part of the remaining light.
- The emitted light that has passed through the beam splitter 102 enters the light incident surface of the vibration mechanism 106 via, for example, the objective lens 105.
- The vibration mechanism 106 is composed of, for example, a piezo element, and a reflection mirror that moves along the optical axis as the vibration mechanism 106 vibrates is provided on its light incident surface. Therefore, the optical path length of the light transmitted through the beam splitter 102 changes with the vibration of the vibration mechanism 106. At least a portion of the light reflected by the reflection mirror of the vibration mechanism 106 (hereinafter also referred to as reference light) enters the beam splitter 102 again, is reflected, and is imaged on the image sensor 110 via an imaging lens 109, which will be described later.
- The beam splitter 103 may be configured using an optical element, such as a half mirror, that transmits a portion of the light and reflects at least a portion of the remaining light.
- The observation light reflected by the beam splitter 103 enters the wavefront sensor 108 via, for example, the imaging lens 107.
- The wavefront sensor 108 detects the wavefront of the incident light, that is, the observation light reflected by the subject 130, and inputs the detection result (that is, a spatial distribution map of wavefront aberration) to the signal processing unit 120.
- At least a portion of the observation light reflected by the subject 130 and transmitted through the beam splitter 103 is imaged on the image sensor 110 via the beam splitter 102 and the imaging lens 109.
- At this time, the optical axis of the observation light transmitted through the beam splitter 102 substantially coincides with the optical axis of the reference light reflected by the vibration mechanism 106 and then by the beam splitter 102.
- In addition, the optical path length from the beam splitter 102 to the vibration mechanism 106 when the light incident surface of the vibration mechanism 106 is at its reference position (for example, the position where the vibration mechanism 106 is not vibrating) and the optical path length from the beam splitter 102 to the subject 130 are approximately the same. Therefore, the image formed on the light receiving surface of the image sensor 110 is an image obtained by interference between the observation light and the reference light.
- Thus, by changing the optical path length of the reference light with the vibration mechanism 106, a tomographic image of the subject 130 along the optical axis (hereinafter also referred to as the Z axis or Z direction) can be acquired (optical coherence tomography: OCT).
- Note that the Z axis may be an axis parallel to the optical axis of the light incident on each of the incoherent light source 101, the vibration mechanism 106, the wavefront sensor 108, the image sensor 110, and the subject 130.
- The image sensor 110 includes a pixel array section in which a plurality of pixels, each of which photoelectrically converts incident light to generate a brightness value (also referred to as a pixel value), are arranged in a two-dimensional lattice, and outputs two-dimensional image data (hereinafter also simply referred to as image data) of the image in which the incident observation light and reference light interfere. That is, the medical observation device 1 according to the present embodiment uses FFOCT (Full-Field OCT), which can acquire a three-dimensional tomographic image of the subject 130 without requiring scanning in the horizontal direction (X-Y plane direction).
- Since the image sensor 110 outputs image data at a predetermined frame rate while the vibration mechanism 106 is vibrating (that is, while the reflection mirror is moving along the optical axis (Z axis)), it becomes possible to acquire a three-dimensional tomographic image of the subject 130.
- The signal processing unit 120 generates a three-dimensional tomographic image of the subject 130 using the image data input from the image sensor 110 at the predetermined frame rate.
- Specifically, for each Z position, the amplitude of the oscillation of the signal amount at each pixel over several neighboring frames in the Z direction is calculated as a signal amount indicating the reflection intensity (corresponding to the brightness, that is, the light intensity, of the observation light) at the tomographic plane at that Z position.
- In this way, tomographic image data of the tomographic plane at each Z position is generated.
- A three-dimensional tomographic image is then generated by stacking the generated tomographic image data in the Z direction.
- At this time, the signal processing unit 120 corrects the signal amount of the tomographic image data used to generate the three-dimensional tomographic image based on a spatial distribution map of wavefront aberration (hereinafter also referred to as a spatial aberration map).
- Note that the detection of the spatial aberration map by the wavefront sensor 108 may be executed each time the signal processing unit 120 generates a three-dimensional tomographic image, or each time the positional relationship between the subject 130 and the objective lens 104 is changed. In the latter case, as long as the positional relationship between the subject 130 and the objective lens 104 is not changed, the previously acquired spatial aberration map can be reused for subsequent generation of three-dimensional tomographic images, so the processing amount and processing time of the subsequent three-dimensional tomographic image generation can be reduced.
- Whether the positional relationship between the subject 130 and the objective lens 104 has changed may be input manually by the user, may be detected using a sensor provided on the subject 130 and/or the objective lens 104, or may be determined based on whether initialization processing has been performed manually or automatically after adjusting the positional relationship between the subject 130 and the objective lens 104.
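The reuse of a previously acquired spatial aberration map can be pictured with a small sketch (illustrative only; the class name `AberrationMapCache` and the `measure_fn` callback are hypothetical stand-ins for the wavefront sensor 108 and are not part of the disclosure):

```python
class AberrationMapCache:
    """Re-measures the spatial aberration map only when the positional
    relationship between the subject and the objective lens has changed;
    otherwise reuses the previously acquired map to save processing time."""

    def __init__(self, measure_fn):
        self.measure_fn = measure_fn  # callable standing in for the wavefront sensor
        self.cached_map = None

    def get_map(self, position_changed):
        # Measure on first use or whenever the geometry has changed.
        if self.cached_map is None or position_changed:
            self.cached_map = self.measure_fn()
        return self.cached_map
```

The `position_changed` flag could be driven by any of the triggers described above: manual user input, a sensor on the subject or objective lens, or an initialization step.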
- As described above, tomographic image data is generated based on image data obtained by scanning an interference image between the observation light and the reference light in the Z direction (also referred to as the depth direction in this description).
- For example, as shown in FIG. 2, when a reflective object exists at each of two points in the Z direction, the signal intensity obtained by scanning in the Z direction for a certain pixel of the image sensor 110 is as illustrated in FIG. 3.
- As shown in FIG. 3, oscillation occurs in the Z direction due to interference between the observation light and the reference light.
- The amplitude of this oscillation reflects the reflectance of the object. That is, the magnitude of the amplitude represents the brightness of the observation image reflected by the reflective object.
- Therefore, the signal processing unit 120 generates tomographic image data at each Z position by calculating, for each pixel, the amplitude value at that position in the Z direction as the signal amount of the pixel at that Z position.
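The amplitude calculation described above can be sketched as follows: simulate the interference signal along Z for one pixel (two reflectors of different reflectivity) and recover the oscillation amplitude over a sliding window of neighboring frames. All concrete values (window size, fringe frequency, reflector positions) are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def local_amplitude(signal, window):
    """Amplitude of the oscillation around the local mean, computed over a
    sliding window of neighboring frames (half the peak-to-peak swing)."""
    n = len(signal)
    amp = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        chunk = signal[lo:hi]
        amp[i] = (chunk.max() - chunk.min()) / 2.0
    return amp

# Simulated Z scan for one pixel: two reflectors with different reflectivities.
z = np.linspace(0.0, 1.0, 1000)
fringe = np.cos(2 * np.pi * 80 * z)          # interference fringes vs. mirror position
envelope = (0.8 * np.exp(-((z - 0.3) / 0.02) ** 2)
            + 0.4 * np.exp(-((z - 0.7) / 0.02) ** 2))
signal = envelope * fringe

amp = local_amplitude(signal, window=25)
# The recovered amplitude peaks near z = 0.3 and z = 0.7, with the first peak
# roughly twice as large, mirroring the two reflectivities.
```

A tomographic image is then just this per-pixel amplitude evaluated at one Z position across the whole pixel array.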
- FIG. 4 is a block diagram for explaining the correction processing according to this embodiment.
- In this example, the wavefront sensor 108 detects a spatial aberration map every time the signal processing unit 120 generates a three-dimensional tomographic image.
- As shown in FIG. 4, the signal processing unit 120 includes a correction unit 121 and a correction amount calculation unit 122.
- The signal processing unit 120 receives two-dimensional image data from the image sensor 110 at a predetermined frame rate, and also receives a spatial aberration map from the wavefront sensor 108.
- The signal processing unit 120 generates tomographic image data, using the signal amounts as pixel values, from the image data input at the predetermined frame rate.
- The generated tomographic image data is input to the correction unit 121.
- The spatial aberration map input to the signal processing unit 120 is input to the correction amount calculation unit 122.
- Note that the spatial aberration map may be input from the wavefront sensor 108 in parallel with the input of the image data from the image sensor 110, or before or after the input of the image data from the image sensor 110.
- The correction amount calculation unit 122 calculates the correction amount of the signal amount in each region (for example, each pixel) of the tomographic image data based on the spatial aberration map input from the wavefront sensor 108. For example, the correction amount calculation unit 122 calculates the correction amounts such that the signal amount correction amount (for example, amplification amount) is large in a region where the wavefront aberration is large and small in a region where the wavefront aberration is small.
- However, the present disclosure is not limited to this; it is also possible to calculate a correction amount that amplifies the signal amount in a region where the wavefront aberration is large, and a correction amount that reduces the signal amount in a region where the wavefront aberration is small.
- Alternatively, the correction amounts may be calculated such that the signal amount correction amount (for example, reduction amount) is small in a region where the wavefront aberration is large and large in a region where the wavefront aberration is small.
- The correction amount calculated for each region is input to the correction unit 121.
- The correction unit 121 corrects the signal amount of each region in the tomographic image data based on the correction amount input from the correction amount calculation unit 122. Then, the correction unit 121 generates a three-dimensional tomographic image using the corrected tomographic image data, and outputs the generated three-dimensional tomographic image to the outside.
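As a concrete illustration of such a correction, the sketch below amplifies the signal amount in proportion to the local aberration (the linear gain law, the coefficient `k`, and the normalization by the map maximum are assumptions for illustration; the disclosure does not fix a specific correction formula):

```python
import numpy as np

def correct_tomogram(tomo, aberration_map, k=0.5):
    """Amplify the signal amount more in regions where the wavefront
    aberration is larger (k is an assumed gain coefficient)."""
    # Normalize the map so the most aberrated region gets gain 1 + k
    # (assumes a nonzero aberration map).
    gain = 1.0 + k * (aberration_map / aberration_map.max())
    return tomo * gain

tomo = np.full((4, 4), 100.0)   # uniform tomographic signal amounts
aberr = np.zeros((4, 4))
aberr[0, 0] = 2.0               # strongly aberrated corner
corrected = correct_tomogram(tomo, aberr)
# The aberrated corner is boosted to 150.0; aberration-free pixels stay at 100.0.
```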
- As described above, by correcting the decrease in signal amount caused by wavefront aberration, the image quality of the tomographic image data used for generating the three-dimensional tomographic image is improved. Thereby, it becomes possible to generate a three-dimensional tomographic image with higher image quality.
- As described above, the medical observation device 1 according to the present embodiment includes: the incoherent light source 101 that emits at least spatially incoherent light; the image sensor 110 that acquires two-dimensional image data of an interference image between a predetermined reference light and the observation light, which is the light emitted from the incoherent light source 101 and reflected by the subject 130; the wavefront sensor 108 that measures a spatial aberration map indicating how the influence of the wavefront aberration of the subject 130 on the observation light is spatially distributed; and the signal processing unit 120 that corrects, based on the measured spatial aberration map, the signal amount of the tomographic image data determined from the image data of the interference image.
- Here, by using the incoherent light source 101, the influence of the wavefront aberration of a subject 130 such as a human eye appears not as a decrease in resolution but as a decrease in signal amount.
- In addition, with the wavefront sensor 108, it is possible to measure how the influence of the wavefront aberration of the subject 130 is spatially distributed. By combining these, it becomes possible to grasp where and to what extent the decrease in signal amount occurs spatially, so the decrease in signal amount due to wavefront aberration can be corrected by signal processing of the tomographic image data obtained from the image data.
- Furthermore, by applying the medical observation device 1 having the above configuration to a surgical microscope, an ophthalmoscope, or the like, many of the optical parts from the eyepiece lens to the image sensor can be shared, making it possible to simplify and downsize the entire device configuration of a medical device such as a surgical microscope or an ophthalmoscope.
- FIG. 5 is a schematic diagram showing a schematic configuration example of the medical observation device according to the present embodiment.
- As shown in FIG. 5, the medical observation device 2 according to the present embodiment has a configuration in which, from the configuration of the medical observation device 1 described using FIG. 1 in the first embodiment, the beam splitter 103, the imaging lens 107, and the wavefront sensor 108 are omitted, and the signal processing unit 120 is replaced with a signal processing unit 220.
- As described in the first embodiment, when the incoherent light source 101 is used as the light source, the influence of the aberrations of a subject 130 such as the eye appears in the tomographic image data as a decrease in signal amount. Therefore, in this embodiment, the signal processing unit 220 corrects the signal amount based on the signal amount of each region (for example, each pixel) in the tomographic image data.
- For example, various methods may be employed, such as photographing in advance a region with little texture inside the subject 130 and calculating or estimating the correction amount for each region based on the signal amount of each region in the tomographic image data obtained from the resulting image data.
- FIG. 6 is a block diagram for explaining the correction processing according to this embodiment. Note that, here, a case will be exemplified in which a region with little texture in the subject 130 is imaged in advance and the correction amount for each region is calculated based on the signal amount of each region in the tomographic image data obtained from the resulting image data.
- As shown in FIG. 6, the signal processing unit 220 includes a correction unit 221 and a correction amount calculation unit 222.
- The correction unit 221 may be the same as the correction unit 121 according to the first embodiment.
- The signal processing unit 220 receives image data, photographed in advance, of a region with little texture in the subject 130, as well as the image data output from the image sensor 110 at a predetermined frame rate.
- The signal processing unit 220 generates tomographic image data (also referred to as first tomographic image data) in advance from the image data of the low-texture region photographed in advance. Further, the signal processing unit 220 generates tomographic image data (also referred to as second tomographic image data) from the image data output at the predetermined frame rate.
- The first tomographic image data obtained from the image data captured in advance is input to the correction amount calculation unit 222, and the second tomographic image data obtained from the image data output at the predetermined frame rate is input to the correction unit 221.
- Note that which region within the subject 130 has little texture may be specified manually by the operator, or may be identified automatically through recognition processing or the like on one or more pieces of image data covering the entire inside of the eyeball or on the first tomographic image data.
- The correction amount calculation unit 222 calculates the correction amount of the signal amount in each region (for example, each pixel) of the second tomographic image data obtained from the image data output at the predetermined frame rate, based on the first tomographic image data obtained in advance.
- For example, the correction amount calculation unit 222 may calculate the correction amounts using the maximum value of the signal amount in the entire first tomographic image data as a reference, such that the correction amount (for example, amplification amount) of the signal amount becomes larger in a region where the signal amount is smaller than this maximum value. However, the present disclosure is not limited to this, and various modifications may be made, such as calculating the correction amount based on the average value or minimum value of the signal amount in the entire first tomographic image data.
- The correction amount calculated for each region is input to the correction unit 221. Similarly to the correction unit 121 according to the first embodiment, the correction unit 221 corrects the signal amount of each region in the second tomographic image data, obtained from the image data input from the image sensor 110 at the predetermined frame rate, based on the correction amount input from the correction amount calculation unit 222. Then, the correction unit 221 generates a three-dimensional tomographic image using the corrected second tomographic image data, and outputs the generated three-dimensional tomographic image to the outside.
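The reference-based correction above might be sketched as follows (illustrative only; the flat-field-style gains derived from the maximum of the low-texture reference data are one assumed concrete rule consistent with the maximum-value example in the text):

```python
import numpy as np

def gains_from_reference(first_tomo, eps=1e-6):
    """Per-region gains derived from the first tomographic image data of a
    low-texture region: regions whose signal amount falls below the overall
    maximum receive a proportionally larger amplification."""
    return first_tomo.max() / np.maximum(first_tomo, eps)

# Low-texture reference: ideally uniform, but aberration dims the left column.
first = np.array([[50.0, 100.0],
                  [50.0, 100.0]])
gains = gains_from_reference(first)   # left column gets gain 2, right gets 1
second = np.array([[30.0, 80.0],
                   [10.0, 100.0]])    # measured second tomographic data
corrected = second * gains
# → [[60., 80.], [20., 100.]]: regions dimmed by aberration are amplified back.
```

The same pattern would apply with the average or minimum value as the reference instead of the maximum.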
- As described above, even in a configuration without the wavefront sensor 108, the decrease in signal amount caused by aberrations can be corrected, so the image quality of the tomographic image data used to generate the three-dimensional tomographic image is improved. Thereby, it becomes possible to generate a three-dimensional tomographic image with higher image quality.
- FIG. 7 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to this embodiment.
- As shown in FIG. 7, the medical observation device 3 according to the present embodiment has a configuration in which, in the same configuration as the medical observation device 1 described using FIG. 1 in the first embodiment, the incoherent light source 101, the beam splitters 102 and 103, the objective lenses 104 and 105, the vibration mechanism 106, the imaging lenses 107 and 109, the wavefront sensor 108, and the image sensor 110 are fixed to a stage 301 that is movable in at least one of the X direction, the Y direction, and the Z direction.
- Note that the Z direction may be a direction parallel to the optical axis (Z axis), and the X direction and the Y direction may both be directions perpendicular to the Z direction; for example, the X direction may be parallel to the row direction of the pixel arrangement in the pixel array section of the image sensor 110, and the Y direction may be parallel to the column direction of the pixel arrangement in the pixel array section.
- In this way, by mounting the measurement system consisting of the incoherent light source 101, the beam splitters 102 and 103, the objective lenses 104 and 105, the vibration mechanism 106, the imaging lenses 107 and 109, the wavefront sensor 108, and the image sensor 110 on the movable stage 301, the relative position between the subject 130 and the measurement system can be changed. The measurement area on the subject 130 can therefore be changed, and by scanning with the measurement system, it becomes possible to acquire image data of a wider area of the subject 130 (including the entire subject) than can be captured in one shot.
- At this time, the wavefront sensor 108 may detect the spatial aberration map in conjunction with the movement of the stage 301. Thereby, even when measuring the subject 130 over a wider area (including the entire area) than in one shot, it is possible to obtain a spatial aberration map for the entire measurement area.
- FIG. 8 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to this embodiment.
- As shown in FIG. 8, the medical observation device 4 according to the present embodiment has a configuration in which, to the same configuration as the medical observation device 1 described using FIG. 1 in the first embodiment, a moving mechanism 414 that moves the objective lens 104 along the optical axis and a moving mechanism 415 that moves the objective lens 105 facing the vibration mechanism 106 along the optical axis are added.
- FIG. 9 is a schematic diagram showing an example of the schematic configuration of the observation device according to the present embodiment.
- In the above embodiments, the human eye is exemplified as the subject 130.
- However, the present disclosure is not limited to this; for example, as in the observation device 5 illustrated in FIG. 9, it is also possible to target a subject 530 other than the human eye.
- In the above embodiments, a so-called time-domain type FFOCT is used, in which a three-dimensional tomographic image of the subject 130 or 530 (hereinafter, the subject 130 is exemplified) is obtained by moving the reflection mirror in the Z direction using the vibration mechanism 106.
- With FFOCT, OCT images in the X-Y plane (hereinafter also referred to as en-face images) can be obtained at high speed; however, to obtain an OCT image in the X-Z plane (a B-scan image), it is necessary to first acquire three-dimensional volume data as a stack of en-face images in the Z direction and then cut out the necessary plane, so it is difficult to obtain a B-scan image at high speed.
- Therefore, in the following, a medical observation device and an information processing device that can shorten the time required to obtain a B-scan image will be described with examples.
- Note that, in the following, a case where the medical observation device is configured as a time-domain type FFOCT will be exemplified, similarly to the above embodiments; however, the present disclosure is not limited to this, and it is also possible to configure the medical observation device as a wavelength-sweep type FFOCT.
- FIG. 10 is a schematic diagram showing a schematic configuration example of the medical observation device according to the present embodiment.
- As shown in FIG. 10, the medical observation device 6 according to the present embodiment has a configuration in which, to the same configuration as the medical observation device 1 described using FIG. 1 in the first embodiment, a rotation mechanism 613 for rotating the image sensor 110 about the Z axis as a rotation axis is added, and the signal processing unit 120 is replaced with a signal processing unit 620.
- The rotation mechanism 613 may be an example of an adjustment mechanism that adjusts the rotation angle of the image sensor with respect to the image of the observation light, using the optical axis of the observation light as the rotation axis.
- Note that although FIG. 10 shows a case where the optical system for detecting the wavefront of the observation light (the beam splitter 103, the imaging lens 107, and the wavefront sensor 108) is omitted, the present disclosure is not limited to this; similarly to the signal processing unit 120 according to the embodiments described above, the signal processing unit 620 may correct the signal amount of the tomographic image data based on a spatial aberration map input from the wavefront sensor 108.
- FIG. 11 is a diagram showing an example of the drive area of the image sensor according to the present embodiment.
- In this embodiment, the area 112 to be read out (also referred to as the drive area) in the image sensor 110 is narrowed in order to directly acquire the required B-scan image in the XZ plane.
- The image sensor 110 includes a pixel array unit 111 in which a plurality of pixels are two-dimensionally arranged in a matrix, and reads out image data in units of rows of pixels (hereinafter also referred to as lines) arranged along the X direction.
- The drive area 112 may be, for example, a rectangular area that is long in the X direction and consists of one or a few lines.
- However, the drive area 112 is not limited to this, and may be a rectangular area that is long in the Y direction, or a rectangular area that is long in a direction inclined with respect to the X and Y directions.
- The driving method of the image sensor 110 is not limited to the rolling shutter method; a so-called global shutter method that allows simultaneous reading of all pixels may also be used. In the following description, for simplicity, the case where the drive area 112 is a rectangular area long in the X direction will be exemplified.
- FIG. 12 is a diagram showing an example of the volume acquisition time when the number of pixels in the Y direction of the drive area according to the present embodiment is changed.
- As shown in FIG. 12, the smaller the number of pixels in the Y direction of the drive area 112 in the image sensor 110, the shorter the readout time for one en-face image, and therefore the higher the achievable frame rate.
- In the example shown in FIG. 12, the volume acquisition time is 4.716 seconds, but by setting the number of pixels in the Y direction of the drive area 112 to 16 pixels, the volume acquisition time can be shortened to 0.195 seconds.
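The scaling described above can be illustrated with a small calculation. This is a sketch under assumed numbers: the per-line readout time and frame count below are hypothetical, not taken from FIG. 12; the point is only that, for a rolling-shutter readout, the volume acquisition time scales linearly with the number of Y-direction lines read per en-face frame.

```python
def volume_acquisition_time(n_lines_y, line_readout_us, n_frames):
    """Estimate volume acquisition time for a rolling-shutter sensor.

    n_lines_y:       number of Y-direction lines in the drive area
    line_readout_us: time to read one line, in microseconds (hypothetical)
    n_frames:        number of en-face frames (Z steps) per volume
    """
    frame_time_s = n_lines_y * line_readout_us * 1e-6
    return n_frames * frame_time_s

# Narrowing the drive area from 1024 lines to 16 lines shortens the
# volume time by the same factor (1024 / 16 = 64).
t_full = volume_acquisition_time(n_lines_y=1024, line_readout_us=10.0, n_frames=500)
t_narrow = volume_acquisition_time(n_lines_y=16, line_readout_us=10.0, n_frames=500)
```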
- The image data (en-face images) long in the X direction output from the image sensor 110 are input to the signal processing unit 620.
- Data obtained by stacking en-face images long in the X direction in the Z direction substantially corresponds to a B-scan image obtained by slicing the three-dimensional volume data of the subject 130 along the XZ plane. Therefore, in this embodiment, a B-scan image of the subject 130 is generated by stacking, in the Z direction, the tomographic image data obtained from the image data output from the image sensor 110.
- By narrowing the drive area 112 in the image sensor 110 in this way, it becomes possible to directly acquire a B-scan image of the subject 130.
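The stacking step above can be sketched as follows. This is a minimal illustration with NumPy; the function name and data layout are assumptions for illustration, assuming one X-long line readout per Z step.

```python
import numpy as np

def build_b_scan(line_images):
    """Stack X-long en-face line images, one per Z step, into a B-scan.

    line_images: iterable of 1-D arrays of length n_x (one readout of the
    narrow drive area per Z position). The result is an (n_z, n_x) array,
    i.e. a B-scan slice in the XZ plane.
    """
    return np.stack([np.asarray(line) for line in line_images], axis=0)
```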
- By using the rotation mechanism 613, the image sensor 110 can be rotated, for example, about the optical axis. Therefore, in this embodiment, by rotating the image sensor 110 with the rotation mechanism 613, a B-scan image on any plane passing through the optical axis can be obtained.
- The rotation of the image sensor 110 by the rotation mechanism 613 may be controlled by, for example, a control unit (not shown), and this control may be based on an operation input from a user such as a surgeon.
- In addition, a B-scan image of an arbitrary XZ plane of the subject 130 can also be acquired.
- As described above, according to this embodiment, the drive area 112 in the image sensor 110 is limited to a rectangular area long in one direction, making it possible to acquire a B-scan image directly and quickly. This makes it possible to shorten the operation and examination times for the subject 130, thereby reducing the burden on the subject 130.
- Furthermore, by making the image sensor 110 rotatable, a B-scan image of any XZ plane passing through the optical axis can be obtained, and by making the measurement system including the image sensor 110 movable, a B-scan image of an arbitrary XZ plane of the subject 130 can also be obtained.
- FIG. 13 is a schematic diagram showing a modification of the medical observation device according to the sixth embodiment.
- In the sixth embodiment described above, the case where the incoherent light source 101 is used as the light source is illustrated; however, the light source according to this embodiment is not limited to this. As in the medical observation device 6A illustrated in FIG. 13, a coherent light source 601 that emits at least spatially coherent light (hereinafter also referred to as spatially coherent light) may be used instead.
- As the coherent light source 601, for example, an SLD (Superluminescent Diode) can be used.
- By using the coherent light source 601 as the light source, the objective lens 105 disposed on the light incident surface of the vibration mechanism 106 can be omitted, as illustrated in FIG. 13, making it possible to simplify the optical system of the medical observation device 6A.
- In the sixth embodiment described above, the image sensor 110 is rotated using the rotation mechanism 613 to change the XZ plane from which the B-scan image is acquired.
- In the present embodiment, by contrast, the XZ plane from which the B-scan image is acquired is changed by rotating the image formed on the image sensor 110.
- FIG. 14 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to this embodiment.
- As shown in FIG. 14, the medical observation device 7 according to this embodiment has the same configuration as the medical observation device 6 described with reference to FIG. 10 in the sixth embodiment, except that the rotation mechanism 613 is omitted and, instead, an image rotator 720 that rotates the image around the optical axis is arranged on the optical path from the beam splitter 102 to the imaging lens 109.
- the image rotator 720 may be an example of an adjustment mechanism that adjusts the rotation angle of the image sensor with respect to the image of the observation light using the optical axis of the observation light as the rotation axis. Note that in this embodiment as well, similarly to the sixth embodiment or its modification, a rectangular drive area 112 that is long in one direction is set in the image sensor 110.
- As the image rotator 720 according to this embodiment, various configurations may be adopted, such as one configured with a plurality of (for example, three) mirrors 721 to 723 as illustrated in FIG. 15, or one using a Dove prism 724 as illustrated in FIG. 16.
- According to this embodiment, the rotation mechanism 613 can be omitted, thereby making it possible to simplify the configuration of the medical observation device 7.
- FIG. 17 is a schematic diagram showing a modification of the medical observation device according to the seventh embodiment.
- In the seventh embodiment described above, the case where the incoherent light source 101 is used as the light source is illustrated; however, the light source according to the present embodiment is not limited to this, and, similarly to the modification of the sixth embodiment, a coherent light source 601 such as an SLD may be used.
- By using the coherent light source 601 as the light source, the objective lens 105 disposed on the light entrance surface of the vibration mechanism 106 can be omitted, so the optical system of the medical observation device 7A can be simplified.
- FIG. 18 is a diagram showing an example of the drive area according to the present embodiment.
- In the embodiments described above, the drive area 112 of the image sensor 110 is fixed.
- However, when a global-shutter image sensor is adopted as the image sensor 110, it is also possible to freely change the drive area 112a while suppressing an increase in readout time, as illustrated in FIG. 18. Therefore, a B-scan image of any XZ plane can be obtained without requiring the rotation mechanism 613, the image rotator 720, or the stage 301.
- FIG. 19 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to this embodiment.
- In the embodiments described above, the case where the image sensor 110, which detects light in the visible range and generates image data, is used as the image sensor that captures the interference image between the observation light and the reference light is illustrated; however, the present invention is not limited to this.
- For example, it is also possible to use an image sensor 910 that detects light outside the visible range and generates image data.
- For example, the light-receiving sensitivity of the image sensor 110 that detects visible light may span a wavelength band from 400 nm (nanometers) to around 900 nm, whereas the light-receiving sensitivity of the image sensor 910 that can detect SWIR light may span a wavelength band from 400 nm to around 1700 nm.
- The wavelength bands of light used in general OCT include the 850 nm band, the 1 μm (micrometer) band, and the 1.3 μm band.
- Light in the 850 nm band is used, for example, to observe the anterior segment and fundus of the eye; scattering can be reduced by using light with a longer wavelength (for example, light in the 1 μm band), so the depth of tissue penetration into the fundus of the eye can be improved.
- Light in the 1.3 μm band is used for observing the anterior segment of the eye.
- By using an image sensor 910 that can observe light with a longer wavelength, as in this embodiment, it is possible to realize a medical observation device 9 capable of observation with higher precision.
- For example, by using an image sensor 910 that can observe light in the 1.0 μm band, the depth of tissue penetration into the fundus of the eye can be improved.
- Likewise, by using an image sensor 910 that can observe light in the 1.3 μm band, the OCT image of the anterior segment of the eye can be sharpened.
- Furthermore, by widening the wavelength band of the light-receiving sensitivity of the image sensor 910, the resolution in the axial direction can also be improved.
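For reference, the relationship between spectral width and axial resolution mentioned above is commonly approximated, assuming a Gaussian source spectrum, as:

```latex
\Delta z = \frac{2 \ln 2}{\pi} \cdot \frac{\lambda_0^2}{\Delta\lambda}
```

where $\lambda_0$ is the center wavelength and $\Delta\lambda$ is the full width at half maximum of the detected band; widening $\Delta\lambda$ shortens the coherence length and thus improves the axial (depth) resolution.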
- As described above, according to the present embodiment, the wavelength and wavelength width can be selected according to the observation target (for example, human cell tissue), so a clearer image can be obtained. This makes it possible to obtain appropriate diagnostic results.
- FIG. 20 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to this embodiment.
- FIG. 21 is a partially enlarged view of the pixel array section in the polarization image sensor according to this embodiment.
- As shown in FIG. 20, the medical observation device 10 according to the present embodiment has the same configuration as the medical observation device 6 described in the sixth embodiment with reference to FIG. 10, except that the image sensor 110 is replaced with a polarization image sensor 1010.
- As shown in FIG. 21, the polarization image sensor 1010 includes, for example, a pixel 21 having light-receiving sensitivity to light polarized in the horizontal direction, a pixel 22 having light-receiving sensitivity to light polarized in the vertical direction, a pixel 23 having light-receiving sensitivity to light polarized in the diagonally upper left direction, and a pixel 24 having light-receiving sensitivity to light polarized in the diagonally upper right direction; these four pixels are arranged in a 2 × 2 pattern, and the pattern is repeated in the row and column directions.
- FIG. 21 illustrates a polarization image sensor 1010 that can obtain interference images in one shot in each of four polarization directions: horizontal direction, vertical direction, diagonally upper left direction, and diagonally upper right direction.
- However, the embodiment is not limited to this and may be modified in various ways, for example by using two polarization directions, horizontal and vertical.
- Retinal nerve fibers can be cited as an intraocular birefringent tissue, and it is known that in glaucoma, retinal nerve fibers are destroyed by intraocular pressure. By acquiring interference images in multiple polarization directions as in this embodiment, the accuracy of lesion diagnosis can be improved.
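Reading out the four polarization channels from such a mosaic sensor can be sketched as follows. This is a minimal illustration; the assignment of polarization angles to positions within each 2 × 2 cell is an assumption for illustration, not the actual layout of the polarization image sensor 1010.

```python
import numpy as np

def split_polarization_mosaic(raw):
    """Split a (H, W) mosaic with a repeating 2x2 polarizer pattern
    into four per-channel images of shape (H/2, W/2).

    Assumed cell layout (illustrative only):
    (0,0) = 0 deg, (0,1) = 90 deg, (1,0) = 45 deg, (1,1) = 135 deg.
    """
    return {
        "0":   raw[0::2, 0::2],
        "90":  raw[0::2, 1::2],
        "45":  raw[1::2, 0::2],
        "135": raw[1::2, 1::2],
    }
```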
- FIG. 22 is a schematic diagram showing an example of a schematic configuration of a medical observation device according to this embodiment.
- As shown in FIG. 22, the medical observation device 11 according to the present embodiment has the same configuration as the medical observation device 6 described in the sixth embodiment with reference to FIG. 10, except that the image sensor 110 is replaced with an EVS (Event-based Vision Sensor) 1110.
- An EVS is, for example, an image sensor that detects, as an address event, the coordinates, direction (polarity), and time of a pixel at which a change in brightness (also called light intensity) occurred, and outputs the results (event data) synchronously or asynchronously.
- In general OCT angiography, an angiography image is generated by acquiring a plurality of two-dimensional or three-dimensional OCT images and extracting an image of the blood vessels from the difference information between them.
- With the EVS 1110, on the other hand, no signal is output from pixels for which no change in brightness is detected, while signals are output from pixels where a change in brightness has occurred due to movement such as blood flow. Therefore, by acquiring en-face images using the EVS 1110, an image of moving blood vessels and the like can be acquired directly.
- Furthermore, the frame rate of an EVS is generally around 1000 fps (frames per second), which is significantly higher than that of a normal image sensor. Therefore, by using the EVS 1110, angiography images can also be generated at high speed.
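The idea that moving structures appear directly in the event stream can be sketched as follows. This is an illustration under assumptions: the `(x, y, polarity, t)` event tuple format is a generic representation of EVS output, not the actual interface of the EVS 1110.

```python
import numpy as np

def accumulate_events(events, shape):
    """Accumulate per-pixel event counts into an image.

    events: iterable of (x, y, polarity, t) tuples; pixels with no
    brightness change emit nothing, so static background stays at zero
    and moving structures (e.g. blood flow) accumulate counts directly,
    without frame differencing.
    shape: (height, width) of the output image.
    """
    img = np.zeros(shape, dtype=np.int32)
    for x, y, _polarity, _t in events:
        img[y, x] += 1
    return img
```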
- FIG. 23 is a hardware configuration diagram showing an example of a computer 2000 that implements the functions of the signal processing units 120 and 620.
- the computer 2000 has a CPU 2100, a RAM 2200, a ROM (Read Only Memory) 2300, an HDD (Hard Disk Drive) 2400, a communication interface 2500, and an input/output interface 2600. Each part of the computer 2000 is connected by a bus 2050.
- the CPU 2100 operates based on a program stored in the ROM 2300 or the HDD 2400 and controls each part. For example, the CPU 2100 loads programs stored in the ROM 2300 or HDD 2400 into the RAM 2200, and executes processes corresponding to various programs.
- the ROM 2300 stores boot programs such as BIOS (Basic Input Output System) that are executed by the CPU 2100 when the computer 2000 is started, programs that depend on the hardware of the computer 2000, and the like.
- The HDD 2400 is a computer-readable recording medium that non-transitorily records programs executed by the CPU 2100 and the data used by those programs. Specifically, the HDD 2400 is a recording medium that records a program for executing each operation according to the present disclosure, which is an example of the program data 2450.
- The communication interface 2500 is an interface for connecting the computer 2000 to an external network 2550 (e.g., the Internet).
- the CPU 2100 receives data from other devices or transmits data generated by the CPU 2100 to other devices via the communication interface 2500.
- the input/output interface 2600 is an interface for connecting the input/output device 2650 and the computer 2000.
- the CPU 2100 receives data from an input device such as a keyboard or a mouse via the input/output interface 2600. Further, the CPU 2100 transmits data to an output device such as a display, speaker, or printer via the input/output interface 2600. Further, the input/output interface 2600 may function as a media interface that reads a program or the like recorded on a predetermined recording medium.
- Examples of such media include optical recording media such as DVDs (Digital Versatile Discs) and PDs (Phase change rewritable Disks), magneto-optical recording media such as MOs (Magneto-Optical disks), tape media, magnetic recording media, and semiconductor memories.
- the CPU 2100 of the computer 2000 executes the functions of the signal processing units 120 and 620 by executing a program loaded on the RAM 2200.
- the HDD 2400 stores programs and the like according to the present disclosure. Note that the CPU 2100 reads the program data 2450 from the HDD 2400 and executes it, but as another example, these programs may be acquired from another device via the external network 2550.
- the technical categories that embody the above-mentioned technical idea are not limited.
- the above technical idea may be embodied by a computer program for causing a computer to execute one or more procedures (steps) included in the method of manufacturing or using the above-described device.
- the above-mentioned technical idea may be embodied by a computer-readable non-transitory recording medium on which such a computer program is recorded.
- Note that the present technology can also have the following configurations.
(1) A medical observation device comprising: a light source that emits at least spatially incoherent light; an image sensor that acquires an image of light emitted from the light source and reflected by a subject; and a signal processing unit that corrects a signal amount determined from first image data acquired by the image sensor, based on a wavefront aberration of the light reflected by the subject.
(2) The medical observation device according to (1), further comprising: a reflecting mirror movable along an optical axis of the light emitted from the light source; and a beam splitter that splits the light emitted from the light source into first light incident on the subject and second light incident on the reflecting mirror, and combines the first light reflected by the subject with the second light reflected by the reflecting mirror, wherein the image sensor acquires an interference image of the first light and the second light as the first image data.
(3) The medical observation device according to (2), further comprising a vibration mechanism that moves the reflecting mirror along the optical axis.
(4) The medical observation device according to (3), wherein the signal processing unit calculates the signal amount based on an amplitude of signal intensity at each pixel determined from two or more pieces of the first image data acquired by the image sensor while the reflecting mirror moves along the optical axis, and corrects the calculated signal amount based on the wavefront aberration.
(5) The medical observation device according to any one of (1) to (4), further comprising a wavefront sensor that detects a wavefront of the light reflected by the subject, wherein the signal processing unit corrects the signal amount based on the wavefront detected by the wavefront sensor.
(6) The medical observation device according to any one of (1) to (5), wherein the signal processing unit corrects the signal amount based on a low-texture region identified from second image data of the subject acquired by the image sensor.
(7) The medical observation device according to any one of (1) to (6), further comprising a stage to which the light source and the image sensor are fixed and which is movable with respect to the subject.
(8) The medical observation device according to any one of (1) to (6), further comprising: an objective lens that irradiates the subject with the light emitted from the light source; and a moving mechanism that moves the objective lens along an optical axis of the light emitted from the light source.
(9) The medical observation device according to any one of (1) to (8), further comprising an adjustment mechanism that adjusts a rotation angle of the image sensor with respect to an image of the light emitted from the light source and reflected by the subject, using an optical axis of the light as a rotation axis.
(10) The medical observation device according to (9), wherein the adjustment mechanism rotates the image sensor about the optical axis of the light emitted from the light source.
(11) The medical observation device according to (9), wherein the adjustment mechanism rotates the image of the light reflected by the subject and incident on the image sensor.
(12) The medical observation device according to any one of (9) to (11), wherein the image sensor includes a pixel array section in which a plurality of pixels are two-dimensionally arranged in a matrix, and generates the first image data by driving a rectangular area that is a part of the pixel array section and is long in one direction.
(13) The medical observation device according to any one of (9) to (12), wherein the image sensor includes pixels having light-receiving sensitivity to light with wavelengths outside the visible range.
(14) The medical observation device according to any one of (9) to (12), wherein the image sensor includes two or more pixels having light-receiving sensitivity to light polarized in mutually different directions.
(15) The medical observation device according to any one of (9) to (12), wherein the image sensor is an EVS (Event-based Vision Sensor) that outputs event data indicating a pixel that has detected a change in brightness.
(16) The medical observation device according to any one of (1) to (15), which is a surgical microscope or an ophthalmoscope.
(17) An information processing device comprising a signal processing unit that corrects a signal amount, determined from image data of an image of light emitted from a light source that emits at least spatially incoherent light and reflected by a subject, based on a wavefront aberration of the light reflected by the subject.
(18) A medical observation device comprising: a light source that emits light; an image sensor that acquires an image of light emitted from the light source and reflected by a subject; and an adjustment mechanism that adjusts a rotation angle of the image sensor with respect to an image of the light emitted from the light source and reflected by the subject, using an optical axis of the light as a rotation axis.
(19) The medical observation device according to (18), further comprising: a reflecting mirror movable along an optical axis of the light emitted from the light source; and a beam splitter that splits the light emitted from the light source into first light incident on the subject and second light incident on the reflecting mirror, and combines the first light reflected by the subject with the second light reflected by the reflecting mirror, wherein the image sensor acquires an interference image of the first light and the second light as image data.
(20) The medical observation device according to (19), further comprising a vibration mechanism that moves the reflecting mirror along the optical axis.
(21) The medical observation device according to (19) or (20), wherein the image sensor includes a pixel array section in which a plurality of pixels are two-dimensionally arranged in a matrix, and generates the image data by driving a rectangular area that is a part of the pixel array section and is long in one direction.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Veterinary Medicine (AREA)
- Biophysics (AREA)
- Ophthalmology & Optometry (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
Abstract
Description
1. First embodiment
1.1 Schematic configuration example of the medical observation device
1.2 Generation of tomographic image data
1.3 Correction processing
1.4 Summary
2. Second embodiment
2.1 Schematic configuration example of the medical observation device
2.2 Correction processing
3. Third embodiment
4. Fourth embodiment
5. Fifth embodiment
6. Sixth embodiment
6.1 Schematic configuration example of the medical observation device
6.2 Summary
6.3 Modification
7. Seventh embodiment
7.1 Modification
8. Eighth embodiment
9. Ninth embodiment
10. Tenth embodiment
11. Eleventh embodiment
12. Hardware configuration
First, a medical observation device and an information processing device according to a first embodiment of the present disclosure will be described in detail with reference to the drawings. In this embodiment and the embodiments described later, a case is exemplified in which the medical observation device and the information processing device according to each embodiment are applied to a surgical microscope, an ophthalmoscope, or the like used when operating on or diagnosing a human eye, such as in glaucoma treatment; however, the present disclosure is not limited to this and can be applied to various observation devices in which the light from the object under examination (the subject), such as reflected light, transmitted light, or scattered light, can have aberrations.
FIG. 1 is a schematic diagram showing a schematic configuration example of the medical observation device according to this embodiment. As shown in FIG. 1, the medical observation device 1 according to this embodiment includes an incoherent light source 101, beam splitters 102 and 103, objective lenses 104 and 105, a vibration mechanism 106, imaging lenses 107 and 109, a wavefront sensor 108, an image sensor 110, and a signal processing unit 120. In this embodiment, it is assumed that the subject 130 under examination is a human eye and that a three-dimensional tomographic image of the human eye is acquired for the purpose of ophthalmic surgery or an ophthalmic examination.
As mentioned above, in this embodiment the tomographic image data is generated based on image data obtained by scanning the interference image between the observation light and the reference light in the Z direction (also referred to as the depth direction in this description). Here, as illustrated in FIG. 2, when there are reflectors at two points in the Z direction, the intensity of the signal obtained as the Z-direction scanning result for a given pixel on the image sensor 110 oscillates in the Z direction due to interference between the observation light and the reference light, as illustrated in FIG. 3. The amplitude of this oscillation reflects the reflectance of the target; that is, the magnitude of the amplitude represents the brightness of the observation image reflected by the reflector.
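Extracting the oscillation amplitude per pixel can be sketched as follows. This is a minimal sketch, assuming a simple peak-to-peak estimate; the actual amplitude extraction used by the signal processing unit 120 may differ (for example, envelope detection).

```python
import numpy as np

def interference_amplitude(z_signal):
    """Estimate the interference oscillation amplitude for one pixel.

    z_signal: 1-D array of intensities sampled along Z for a single pixel.
    Half the peak-to-peak swing is used as the amplitude, which reflects
    the reflectance (brightness) at that pixel.
    """
    z = np.asarray(z_signal, dtype=float)
    return (z.max() - z.min()) / 2.0
```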
Next, the correction processing of the signal amount by the signal processing unit 120 based on the spatial aberration map will be described. FIG. 4 is a block diagram for explaining the correction processing according to this embodiment. Here, an example is given of the case where the wavefront sensor 108 detects the spatial aberration map each time the signal processing unit 120 generates a three-dimensional tomographic image.
As described above, the medical observation device 1 according to this embodiment includes: the incoherent light source 101 that emits at least spatially incoherent light; the image sensor 110 that acquires two-dimensional image data of an interference image between predetermined reference light and observation light emitted from the incoherent light source 101 and reflected by the subject 130; the wavefront sensor 108 that detects the observation light to measure a spatial aberration map indicating how the influence of the wavefront aberration of the subject 130 is spatially distributed; and the signal processing unit 120 that corrects the signal amount of the tomographic image data determined from the image data of the interference image, based on the measured spatial aberration map.
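The correction step can be sketched as follows. This is an illustration under assumptions: the mapping from the measured spatial aberration map to a per-pixel attenuation factor, and the function and parameter names, are hypothetical; the document does not specify the exact correction formula.

```python
import numpy as np

def correct_signal(amplitude_image, aberration_map, eps=1e-6):
    """Compensate per-pixel signal amounts for aberration-induced loss.

    amplitude_image: per-pixel signal amounts from the interference data.
    aberration_map:  per-pixel attenuation factors in (0, 1], assumed to be
                     derived from the measured spatial aberration map.
    Pixels attenuated by aberration are boosted by the inverse factor;
    eps guards against division by very small values.
    """
    gain = 1.0 / np.maximum(np.asarray(aberration_map, dtype=float), eps)
    return np.asarray(amplitude_image, dtype=float) * gain
```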
Next, a medical observation device and an information processing device according to a second embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted.
FIG. 5 is a schematic diagram showing a schematic configuration example of the medical observation device according to this embodiment. As shown in FIG. 5, the medical observation device 2 according to this embodiment has the same configuration as the medical observation device 1 described in the first embodiment with reference to FIG. 1, except that the beam splitter 103, the imaging lens 107, and the wavefront sensor 108 are omitted and the signal processing unit 120 is replaced with a signal processing unit 220.
Next, the correction processing by the signal processing unit 220 will be described. FIG. 6 is a block diagram for explaining the correction processing according to this embodiment. Here, an example is given of the case where a low-texture region within the subject 130 is imaged in advance and the correction amount for each region is calculated based on the signal amount of each region in the tomographic image data determined from the image data thus obtained.
Next, a medical observation device and an information processing device according to a third embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted.
Next, a medical observation device and an information processing device according to a fourth embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted.
Next, an observation device and an information processing device according to a fifth embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted.
Next, a medical observation device and an information processing device according to a sixth embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted.
FIG. 10 is a schematic diagram showing a schematic configuration example of the medical observation device according to this embodiment. As shown in FIG. 10, the medical observation device 6 according to this embodiment has the same configuration as the medical observation device 1 described in the first embodiment with reference to FIG. 1, except that a rotation mechanism 613 for rotating the image sensor 110 about the optical axis (Z axis) as a rotation axis is added, and the signal processing unit 120 is replaced with a signal processing unit 620. The rotation mechanism 613 may be an example of an adjustment mechanism that adjusts the rotation angle of the image sensor with respect to the image of the observation light, using the optical axis of the observation light as the rotation axis.
As described above, according to this embodiment, the drive area 112 in the image sensor 110 is limited to a rectangular area long in one direction, making it possible to acquire a B-scan image directly and at high speed. This makes it possible to shorten the operation and examination times for the subject 130, thereby reducing the burden on the subject 130.
FIG. 13 is a schematic diagram showing a modification of the medical observation device according to the sixth embodiment. In the sixth embodiment described above, the case where the incoherent light source 101 is used as the light source is illustrated; however, the light source according to this embodiment is not limited to this, and, as in the medical observation device 6A illustrated in FIG. 13, a coherent light source 601 that emits at least spatially coherent light (hereinafter also referred to as spatially coherent light) may be used. As the coherent light source 601, an SLD (Superluminescent Diode) or the like can be used.
Next, a medical observation device and an information processing device according to a seventh embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted.
FIG. 17 is a schematic diagram showing a modification of the medical observation device according to the seventh embodiment. In the seventh embodiment described above, the case where the incoherent light source 101 is used as the light source is illustrated; however, the light source according to this embodiment is not limited to this, and, as in the modification of the sixth embodiment, a coherent light source 601 such as an SLD may be used. By using the coherent light source 601 as the light source, the objective lens 105 disposed on the light incident surface of the vibration mechanism 106 can be omitted, making it possible to simplify the optical system of the medical observation device 7A.
Next, a medical observation device and an information processing device according to an eighth embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted.
Next, a medical observation device and an information processing device according to a ninth embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted. In the following description, the case based on the medical observation device 6 described in the sixth embodiment with reference to FIG. 10 is exemplified; however, the present disclosure is not limited to this, and a medical observation device according to another embodiment or a modification thereof may also serve as the base.
Next, a medical observation device and an information processing device according to a tenth embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted. In the following description, the case based on the medical observation device 6 described in the sixth embodiment with reference to FIG. 10 is exemplified; however, the present disclosure is not limited to this, and a medical observation device according to another embodiment or a modification thereof may also serve as the base.
Next, a medical observation device and an information processing device according to an eleventh embodiment of the present disclosure will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above will be cited, and duplicated descriptions thereof will be omitted. In the following description, the case based on the medical observation device 6 described in the sixth embodiment with reference to FIG. 10 is exemplified; however, the present disclosure is not limited to this, and a medical observation device according to another embodiment or a modification thereof may also serve as the base.
The signal processing units 120 and 620 according to the embodiments and modifications described above can be realized by, for example, a computer 2000 configured as shown in FIG. 23. FIG. 23 is a hardware configuration diagram showing an example of the computer 2000 that realizes the functions of the signal processing units 120 and 620. The computer 2000 includes a CPU 2100, a RAM 2200, a ROM (Read Only Memory) 2300, an HDD (Hard Disk Drive) 2400, a communication interface 2500, and an input/output interface 2600. The units of the computer 2000 are connected by a bus 2050.
(1)
A medical observation device comprising:
a light source that emits at least spatially incoherent light;
an image sensor that acquires an image of light emitted from the light source and reflected by a subject; and
a signal processing unit that corrects a signal amount determined from first image data acquired by the image sensor, based on a wavefront aberration of the light reflected by the subject.
(2)
The medical observation device according to (1), further comprising:
a reflecting mirror movable along an optical axis of the light emitted from the light source; and
a beam splitter that splits the light emitted from the light source into first light incident on the subject and second light incident on the reflecting mirror, and combines the first light reflected by the subject with the second light reflected by the reflecting mirror,
wherein the image sensor acquires an interference image of the first light and the second light as the first image data.
(3)
The medical observation device according to (2), further comprising a vibration mechanism that moves the reflecting mirror along the optical axis.
(4)
The medical observation device according to (3), wherein the signal processing unit calculates the signal amount based on an amplitude of signal intensity at each pixel determined from two or more pieces of the first image data acquired by the image sensor while the reflecting mirror moves along the optical axis, and corrects the calculated signal amount based on the wavefront aberration.
(5)
The medical observation device according to any one of (1) to (4), further comprising a wavefront sensor that detects a wavefront of the light reflected by the subject, wherein the signal processing unit corrects the signal amount based on the wavefront detected by the wavefront sensor.
(6)
The medical observation device according to any one of (1) to (5), wherein the signal processing unit corrects the signal amount based on a low-texture region identified from second image data of the subject acquired by the image sensor.
(7)
The medical observation device according to any one of (1) to (6), further comprising a stage to which the light source and the image sensor are fixed and which is movable with respect to the subject.
(8)
The medical observation device according to any one of (1) to (6), further comprising:
an objective lens that irradiates the subject with the light emitted from the light source; and
a moving mechanism that moves the objective lens along an optical axis of the light emitted from the light source.
(9)
The medical observation device according to any one of (1) to (8), further comprising an adjustment mechanism that adjusts a rotation angle of the image sensor with respect to an image of the light emitted from the light source and reflected by the subject, using an optical axis of the light as a rotation axis.
(10)
The medical observation device according to (9), wherein the adjustment mechanism rotates the image sensor about the optical axis of the light emitted from the light source.
(11)
The medical observation device according to (9), wherein the adjustment mechanism rotates the image of the light reflected by the subject and incident on the image sensor.
(12)
The medical observation device according to any one of (9) to (11), wherein the image sensor includes a pixel array section in which a plurality of pixels are two-dimensionally arranged in a matrix, and generates the first image data by driving a rectangular area that is a part of the pixel array section and is long in one direction.
(13)
The medical observation device according to any one of (9) to (12), wherein the image sensor includes pixels having light-receiving sensitivity to light with wavelengths outside the visible range.
(14)
The medical observation device according to any one of (9) to (12), wherein the image sensor includes two or more pixels having light-receiving sensitivity to light polarized in mutually different directions.
(15)
The medical observation device according to any one of (9) to (12), wherein the image sensor is an EVS (Event-based Vision Sensor) that outputs event data indicating a pixel that has detected a change in brightness.
(16)
The medical observation device according to any one of (1) to (15), which is a surgical microscope or an ophthalmoscope.
(17)
An information processing device comprising a signal processing unit that corrects a signal amount, determined from image data of an image of light emitted from a light source that emits at least spatially incoherent light and reflected by a subject, based on a wavefront aberration of the light reflected by the subject.
(18)
A medical observation device comprising:
a light source that emits light;
an image sensor that acquires an image of light emitted from the light source and reflected by a subject; and
an adjustment mechanism that adjusts a rotation angle of the image sensor with respect to an image of the light emitted from the light source and reflected by the subject, using an optical axis of the light as a rotation axis.
(19)
The medical observation device according to (18), further comprising:
a reflecting mirror movable along an optical axis of the light emitted from the light source; and
a beam splitter that splits the light emitted from the light source into first light incident on the subject and second light incident on the reflecting mirror, and combines the first light reflected by the subject with the second light reflected by the reflecting mirror,
wherein the image sensor acquires an interference image of the first light and the second light as image data.
(20)
The medical observation device according to (19), further comprising a vibration mechanism that moves the reflecting mirror along the optical axis.
(21)
The medical observation device according to (19) or (20), wherein the image sensor includes a pixel array section in which a plurality of pixels are two-dimensionally arranged in a matrix, and generates the image data by driving a rectangular area that is a part of the pixel array section and is long in one direction.
(22)
The medical observation device according to any one of (18) to (21), wherein the adjustment mechanism rotates the image sensor about the optical axis of the light emitted from the light source.
(23)
The medical observation device according to any one of (18) to (21), wherein the adjustment mechanism rotates the image of the light reflected by the subject and incident on the image sensor.
(24)
The medical observation device according to any one of (18) to (23), wherein the image sensor includes pixels having light-receiving sensitivity to light with wavelengths outside the visible range.
(25)
The medical observation device according to any one of (18) to (23), wherein the image sensor includes two or more pixels having light-receiving sensitivity to light polarized in mutually different directions.
(26)
The medical observation device according to any one of (18) to (23), wherein the image sensor is an EVS (Event-based Vision Sensor) that outputs event data indicating a pixel that has detected a change in brightness.
(27)
The medical observation device according to any one of (18) to (26), which is a surgical microscope or an ophthalmoscope.
5 Observation device
21 to 24 Pixels
101 Incoherent light source
102, 103 Beam splitter
104, 105 Objective lens
106 Vibration mechanism
107, 109 Imaging lens
108 Wavefront sensor
110, 910 Image sensor
111 Pixel array section
112, 112a Drive area
120, 220, 620 Signal processing unit
121, 221 Correction unit
122, 222 Correction amount calculation unit
130, 530 Subject
301 Stage
414, 415 Moving mechanism
601 Coherent light source
613 Rotation mechanism
720 Image rotator
721 to 723 Mirror
724 Dove prism
1010 Polarization image sensor
1110 EVS
Claims (17)
- A medical observation device comprising: a light source that emits at least spatially incoherent light; an image sensor that acquires an image of light emitted from the light source and reflected by a subject; and a signal processing unit that corrects a signal amount determined from first image data acquired by the image sensor, based on a wavefront aberration of the light reflected by the subject.
- The medical observation device according to claim 1, further comprising: a reflecting mirror movable along an optical axis of the light emitted from the light source; and a beam splitter that splits the light emitted from the light source into first light incident on the subject and second light incident on the reflecting mirror, and combines the first light reflected by the subject with the second light reflected by the reflecting mirror, wherein the image sensor acquires an interference image of the first light and the second light as the first image data.
- The medical observation device according to claim 2, further comprising a vibration mechanism that moves the reflecting mirror along the optical axis.
- The medical observation device according to claim 3, wherein the signal processing unit calculates the signal amount based on an amplitude of signal intensity at each pixel determined from two or more pieces of the first image data acquired by the image sensor while the reflecting mirror moves along the optical axis, and corrects the calculated signal amount based on the wavefront aberration.
- The medical observation device according to claim 1, further comprising a wavefront sensor that detects a wavefront of the light reflected by the subject, wherein the signal processing unit corrects the signal amount based on the wavefront detected by the wavefront sensor.
- The medical observation device according to claim 1, wherein the signal processing unit corrects the signal amount based on a low-texture region identified from second image data of the subject acquired by the image sensor.
- The medical observation device according to claim 1, further comprising a stage to which the light source and the image sensor are fixed and which is movable with respect to the subject.
- The medical observation device according to claim 1, further comprising: an objective lens that irradiates the subject with the light emitted from the light source; and a moving mechanism that moves the objective lens along an optical axis of the light emitted from the light source.
- The medical observation device according to claim 1, further comprising an adjustment mechanism that adjusts a rotation angle of the image sensor with respect to an image of the light emitted from the light source and reflected by the subject, using an optical axis of the light as a rotation axis.
- The medical observation device according to claim 9, wherein the adjustment mechanism rotates the image sensor about the optical axis of the light emitted from the light source.
- The medical observation device according to claim 9, wherein the adjustment mechanism rotates the image of the light reflected by the subject and incident on the image sensor.
- The medical observation device according to claim 9, wherein the image sensor includes a pixel array section in which a plurality of pixels are two-dimensionally arranged in a matrix, and generates the first image data by driving a rectangular area that is a part of the pixel array section and is long in one direction.
- The medical observation device according to claim 9, wherein the image sensor includes pixels having light-receiving sensitivity to light with wavelengths outside the visible range.
- The medical observation device according to claim 9, wherein the image sensor includes two or more pixels having light-receiving sensitivity to light polarized in mutually different directions.
- The medical observation device according to claim 9, wherein the image sensor is an EVS (Event-based Vision Sensor) that outputs event data indicating a pixel that has detected a change in brightness.
- The medical observation device according to claim 1, which is a surgical microscope or an ophthalmoscope.
- An information processing device comprising a signal processing unit that corrects a signal amount, determined from image data of an image of light emitted from a light source that emits at least spatially incoherent light and reflected by a subject, based on a wavefront aberration of the light reflected by the subject.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/849,316 US20250194921A1 (en) | 2022-03-29 | 2023-03-20 | Medical observation device and information processing device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022-054384 | 2022-03-29 | ||
JP2022054384 | 2022-03-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023189793A1 true WO2023189793A1 (ja) | 2023-10-05 |
Family
ID=88201008
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/010801 WO2023189793A1 (ja) | 2022-03-29 | 2023-03-20 | 医療用観察装置及び情報処理装置 |
Country Status (2)
Country | Link |
---|---|
US (1) | US20250194921A1 (ja) |
WO (1) | WO2023189793A1 (ja) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030028115A1 (en) * | 2001-05-23 | 2003-02-06 | David Thomas | System and method for reconstruction of aberrated wavefronts |
JP2005224328A (ja) * | 2004-02-10 | 2005-08-25 | Topcon Corp | Image forming apparatus with aberration correction function |
JP2007252402A (ja) * | 2006-03-20 | 2007-10-04 | Topcon Corp | Ophthalmologic measuring apparatus |
WO2017090361A1 (ja) * | 2015-11-27 | 2017-06-01 | Topcon Corporation | Corneal examination device |
JP2018068707A (ja) * | 2016-10-31 | 2018-05-10 | Nidek Co., Ltd. | Ophthalmologic information processing apparatus, ophthalmologic information processing program, and ophthalmologic surgery system |
JP2021067704A (ja) * | 2019-10-17 | 2021-04-30 | Denso Wave Inc. | Imaging device |
WO2021251146A1 (ja) * | 2020-06-09 | 2021-12-16 | National University Corporation Shizuoka University | Pupil detection device |
2023
- 2023-03-20 US US18/849,316 patent/US20250194921A1/en active Pending
- 2023-03-20 WO PCT/JP2023/010801 patent/WO2023189793A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20250194921A1 (en) | 2025-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102970919B (zh) | 光学相干断层图像摄像设备及其方法 | |
KR101496245B1 (ko) | 촬상장치 및 촬상방법 | |
CN104799810B (zh) | 光学相干断层成像设备及其控制方法 | |
US9241627B2 (en) | Method and analysis system for eye examinations | |
JP5970682B2 (ja) | 眼球計測装置、眼球計測方法 | |
KR101787973B1 (ko) | 화상처리장치 및 화상처리방법 | |
JPH08508903A (ja) | 網膜眼病診断システム | |
WO2016120933A1 (en) | Tomographic imaging apparatus, tomographic imaging method, image processing apparatus, image processing method, and program | |
JP2010268990A (ja) | 光干渉断層撮像装置およびその方法 | |
JP2011218155A (ja) | 光断層撮像装置 | |
JP2016127900A (ja) | 光断層撮像装置、その制御方法、及びプログラム | |
JP2017131550A (ja) | 画像処理装置及び画像処理方法 | |
US10123699B2 (en) | Ophthalmologic apparatus and imaging method | |
JP7368581B2 (ja) | 眼科装置、及び眼科情報処理装置 | |
WO2016017664A1 (ja) | 断層像撮影装置 | |
JP6557229B2 (ja) | 断層像撮影装置 | |
JP6929684B2 (ja) | 眼科撮影装置及びその制御方法 | |
JP6606640B2 (ja) | 眼科装置及びその制御方法 | |
JP2017189617A (ja) | 撮像装置の制御方法、コンピューター可読媒体、及び撮像装置を制御するコントローラー | |
WO2016136926A1 (ja) | 断層像撮影装置 | |
WO2023189793A1 (ja) | 医療用観察装置及び情報処理装置 | |
JP7394897B2 (ja) | 眼科装置、及び眼科装置の制御方法 | |
WO2021153086A1 (ja) | 眼科装置、その制御方法、及び記録媒体 | |
JP2020130266A (ja) | 眼科装置 | |
JP7711369B2 (ja) | 眼科装置及び画像処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23779809 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 18849316 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 23779809 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: JP |
WWP | Wipo information: published in national office |
Ref document number: 18849316 Country of ref document: US |