CN118196218B - Fundus image processing method, device and equipment - Google Patents
Fundus image processing method, device and equipment

- Publication number: CN118196218B (application CN202410586406.6A)
- Authority: CN (China)
- Prior art keywords: image, fundus, fundus image, gray, processing
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Abstract
The invention provides a fundus image processing method, device and equipment, wherein the method comprises the following steps: obtaining a fundus image; performing a first analysis process on the fundus image to computationally determine the sharpness of the fundus image; performing a second analysis process on the fundus image to computationally determine the chromaticity distribution of the fundus image; obtaining a clear target scene picture; performing style migration processing on the target scene picture based on the fundus image to form a blurred version target scene picture; and processing the blurred version target scene picture based on the sharpness and the chromaticity distribution to form a simulation picture for simulating the imaging effect of the patient looking at the target scene. The method processes a patient's fundus images so as to simulate a picture representing the patient's imaging quality at an actual viewing angle.
Description
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a fundus image processing method, a fundus image processing device and fundus image processing equipment.
Background
Cataract is a common disease that seriously affects the visual function of middle-aged and elderly people, and accurate monitoring of the degree to which it affects visual quality is an important means of maintaining the visual function and daily living ability of the elderly. At present, artificial intelligence algorithms for cataract monitoring developed on the basis of slit-lamp microscopy and fundus photography mainly grade the severity of cataract; they cannot visually quantify the degree to which cataract affects the patient's visual function.
Moreover, even after the degree of a patient's lens opacity and its impact on human-eye imaging quality have been quantified, the resulting data still do not intuitively show how much the patient's visual quality is reduced. There is therefore a need for a visual presentation of the visual quality of cataract patients, to help healthcare workers and patients understand the actual visual quality under cataract conditions more clearly and directly.
Disclosure of Invention
The invention provides a fundus image processing method, device and equipment, which are used for processing a patient's fundus images so as to simulate a picture representing the patient's imaging quality at an actual viewing angle.
In order to solve the above technical problem, an embodiment of the present invention provides a fundus image processing method, characterized by comprising:
Obtaining a fundus image including a fundus image of a cataract patient;
performing a first analysis process on the fundus image to computationally determine a sharpness of the fundus image;
Performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image;
Obtaining a clear target scene picture;
Performing style migration processing on the target scene picture based on the fundus image to form a blurred version target scene picture;
And processing the blurred target scene picture based on the definition and chromaticity distribution to form a simulation picture for simulating the imaging effect of the patient looking at the target scene, wherein the simulation picture is a color picture.
In some embodiments, the performing a first analysis process on the fundus image to computationally determine the sharpness of the fundus image includes:
performing first preprocessing on the fundus images to at least unify the resolutions of different fundus images to a first resolution;
performing color space conversion on the fundus image after the first preprocessing to form a first gray image;
intercepting a region containing blood vessel and optic disc textures in the first gray image to form a first target image;
performing Gaussian blur processing on the first target image to form a second target image;
performing edge detection on the second target image, and counting the number of pixels whose pixel values meet a requirement based on the detection result;
and calculating the definition of each fundus image based on that pixel number and the total pixel number of the fundus image.
In some embodiments, the performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image includes:
performing second preprocessing on the fundus images to at least unify the resolutions of different fundus images to a second resolution;
performing color space conversion on the fundus image after the second preprocessing to form a second gray scale image;
analyzing and determining color component ranges of different colors in the second gray level image;
Calculating and determining the number of pixels under each color component and the number of pixels of the fundus image;
the chromaticity distribution of the fundus image is calculated based on the number of pixels per color component.
In some embodiments, the analyzing determines a range of color components for different colors in the second gray scale image, comprising:
The analysis determines a range of color components of red, orange, yellow, green, blue in the second gray scale image.
In some embodiments, the method further comprises:
Training an initial model based on the fundus image and the corresponding definition and chromaticity distribution as training data to obtain a fundus image analysis model for analyzing and calculating the definition and chromaticity distribution of the fundus image;
the first analysis processing is performed on the fundus image to calculate and determine the sharpness of the fundus image, including:
performing first analysis processing on the fundus image based on the fundus image analysis model to calculate and determine the definition of the fundus image;
the performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image includes:
And performing second analysis processing on the fundus image based on the fundus image analysis model to calculate and determine chromaticity distribution of the fundus image.
In some embodiments, the method further comprises:
constructing an initial model based on the CycleGAN model;
obtaining fundus gray maps of ordinary people and fundus gray maps of different cataract patients as training data;
Training the initial model based on the training data to obtain a candidate model, wherein the candidate model is used for learning the fundus gray map style of the cataract patient and can transfer the fundus gray map style to the fundus gray map of an ordinary person so as to process and form a target fundus gray map with the fundus gray map style of the cataract patient;
And comparing the target fundus gray map with the fundus gray map of the corresponding cataract patient, and updating the weight parameters in the candidate model based on the comparison result to obtain a target model with higher processing precision than the candidate model.
In some embodiments, the performing style migration processing on the target scene picture based on the fundus image to form a blurred version target scene picture includes:
performing color space conversion on the fundus image to form a third gray scale image;
processing the target scene picture to form a single-channel image, wherein the single-channel image comprises a red channel image, a green channel image and a blue channel image;
Performing color space conversion on the single-channel image to form a single-channel gray scale image;
inputting the single-channel gray level image and the third gray level image into a target model to obtain a blurred single-channel gray level image which is matched with the style of the third gray level image;
and merging the blurred single-channel gray level images to form a blurred target scene image.
In some embodiments, the processing based on the sharpness, chromaticity distribution, and blurred version of the scene image to form a simulated image for simulating an imaging effect of the patient looking at a target scene includes:
And adjusting the definition and chromaticity of the blurred scene picture based on the definition and chromaticity distribution, so as to form a simulation picture for simulating the imaging effect of the patient looking at the target scene.
Another embodiment of the present invention also provides an image data processing and analyzing apparatus, including:
a first obtaining module for obtaining a fundus image including a fundus image of a cataract patient;
A first processing module for performing a first analysis process on the fundus image to calculate and determine a sharpness of the fundus image;
A second processing module for performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image;
the second obtaining module is used for obtaining a clear target scene picture;
The third processing module is used for carrying out style migration processing on the target scene picture according to the fundus image to form a blurred version target scene picture;
And the fourth processing module is used for processing according to the definition, the chromaticity distribution and the blurred scene picture to form a simulation picture for simulating the imaging effect of the patient looking at the target scene, wherein the simulation picture is a color picture.
Another embodiment of the present invention also provides an electronic device, including:
at least one processor; and
A memory communicatively coupled to the at least one processor;
Wherein the memory stores instructions executable by the at least one processor, the instructions being configured to perform the fundus image processing method as described in any of the embodiments above.
Based on the disclosure of the above embodiments, the beneficial effects of the embodiments of the present invention include the following. The fundus image of a patient can be analyzed and processed to obtain the definition and chromaticity distribution of the corresponding fundus image; style migration processing based on the corresponding fundus image is then performed on a scene picture, and the processing result is updated and corrected based on the definition and chromaticity distribution, finally yielding a picture capable of simulating the imaging effect when the patient looks at the scene corresponding to the scene picture. Based on this picture, medical staff can perform a more accurate visual quality assessment of the patient, which significantly improves assessment efficiency and accuracy and provides high-reference-value data for the patient's treatment.
Drawings
Fig. 1 is a flowchart of a fundus image processing method in an embodiment of the present invention.
FIG. 2 is a flowchart of training a target model in another embodiment of the invention.
Fig. 3 is a flowchart of a fundus image processing method in another embodiment of the present invention.
Fig. 4 is a flowchart of a fundus image processing method in another embodiment of the present invention.
Fig. 5 is a block diagram showing the configuration of a fundus image processing apparatus in the embodiment of the present invention.
Detailed Description
Hereinafter, specific embodiments of the present invention will be described in detail with reference to the accompanying drawings, but not limiting the invention.
It should be understood that various modifications may be made to the embodiments disclosed herein. Therefore, the following description should not be taken as limiting, but merely as exemplification of the embodiments. Other modifications within the scope and spirit of this disclosure will occur to persons of ordinary skill in the art.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the invention will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It is also to be understood that, although the invention has been described with reference to some specific examples, a person skilled in the art will certainly be able to achieve many other equivalent forms of the invention, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure will be described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the disclosure in unnecessary detail. Therefore, specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a fundus image processing method, including:
S1: obtaining a fundus image including a fundus image of a cataract patient;
S2: performing a first analysis process on the fundus image to computationally determine the sharpness of the fundus image;
S3: performing a second analysis process on the fundus image to computationally determine the chromaticity distribution of the fundus image;
S4: obtaining a clear target scene picture;
S5: performing style migration processing on the target scene picture based on the fundus image to form a blurred version target scene picture;
S6: processing the blurred version target scene picture based on the sharpness and the chromaticity distribution to form a simulation picture for simulating the imaging effect of the patient looking at the target scene, wherein the simulation picture is a color picture.
Based on the foregoing, the present embodiment has the advantages that the analysis processing can be performed on the fundus image of the patient to obtain the sharpness and chromaticity distribution of the corresponding fundus image, then the style migration processing is performed on the scene image, and the processing result is updated and corrected based on the sharpness and chromaticity distribution, so as to finally obtain an image capable of simulating the imaging effect of the patient when looking at the scene corresponding to the scene image.
In this embodiment, the fundus image of a cataract patient is taken as an example. In the course of cataract, denaturation of the lens (crystallin) proteins causes cortical opacification of the lens and an increase in intraocular chromophores, and the deepening of the lens color enhances both the scattering and the filtering absorption of light. Scattering reduces the sharpness of image edges, i.e., the definition, while filtering absorption makes the imaging color deviate from normal. A fundus camera projects light into the fundus through the cornea, lens and vitreous body, and its lens captures the light reflected from the retina to form a fundus photo. By the principle of reversibility of the light path, the clarity of the fundus photo indirectly reflects the degree to which the lens filters and scatters light, provided the cornea and vitreous body have a certain transparency. Therefore, by comparing a patient's fundus image with the fundus image of a healthy eye with a transparent lens, the degradation of human-eye imaging quality in the cataract state can be characterized by the reduced sharpness of texture edges and the deviation of color from normal in the image. Based on this, medical staff can perform a more accurate visual quality assessment of the patient, which significantly improves assessment efficiency and accuracy and provides high-reference-value data for the patient's treatment.
In an embodiment, the performing a first analysis process on the fundus image to computationally determine the sharpness of the fundus image includes:
S7: performing first preprocessing on the fundus images to at least unify the resolutions of different fundus images to a first resolution;
S8: performing color space conversion on the fundus image after the first preprocessing to form a first gray image;
S9: intercepting a region containing blood vessel and optic disc textures in the first gray image to form a first target image;
S10: performing Gaussian blur processing on the first target image to form a second target image;
S11: performing edge detection on the second target image, and counting the number of pixels whose pixel values meet a requirement based on the detection result;
S12: calculating the definition of each fundus image based on that pixel number and the total pixel number of the fundus image.
Illustratively, the above steps analyze each fundus picture and count the identifiable blood-vessel textures inside the image to evaluate its internal edge sharpness. Specifically, under the same foreground object, the definition of a fundus picture is positively correlated with the number of texture features that can be extracted from it: the richer the extractable edge texture (for example, vessel edges), the clearer the picture, and conversely the blurrier. This embodiment therefore evaluates the sharpness of fundus photographs using their edge texture, with the following flow:
1. The fundus image is cropped from the whole fundus picture with its black background, and the resolution of the picture is unified to 600x600 (this specific value is not unique);
2. The RGB-format color fundus picture is converted into a gray scale picture;
3. A circular target region of interest (ROI) for fundus image analysis is set, with a radius of 295 pixels centered on the picture, thereby removing the influence of the outer edge texture and the black peripheral region of the picture on the analysis result, so that edge texture detection is limited to the textures of blood vessels and the optic disc;
4. Gaussian blur preprocessing is applied to the picture to reduce the influence of sharp signals, such as artifacts formed by light reflections, on the texture detection result;
5. Canny edge detection is performed on the picture, the number of pixels in the edge feature map whose values exceed 200 is counted (this specific value is not unique), this count is divided by the total number of pixels of the fundus picture to obtain a ratio, and the ratio is normalized to finally obtain the picture definition value, as sketched below.
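A minimal Python/OpenCV sketch of this definition flow, using the concrete values of this embodiment (600x600 resolution, ROI radius 295, threshold 200); the Canny thresholds and the final normalization rule are assumptions, since the embodiment leaves them open:

```python
import cv2
import numpy as np

def fundus_definition(fundus_bgr: np.ndarray) -> float:
    # 1. Unify resolution (the fundus is assumed already cropped from the black background).
    img = cv2.resize(fundus_bgr, (600, 600))
    # 2. Convert the RGB-format color picture to gray scale.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # 3. Circular ROI of radius 295 centered on the picture, so that only
    #    blood-vessel and optic-disc textures are counted.
    mask = np.zeros_like(gray)
    cv2.circle(mask, (300, 300), 295, 255, thickness=-1)
    # 4. Gaussian blur to suppress sharp reflection artifacts (kernel assumed).
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)
    # 5. Canny edges (thresholds assumed); count edge pixels above 200 inside the ROI.
    edges = cv2.Canny(smooth, 50, 150)
    edges = cv2.bitwise_and(edges, mask)
    ratio = np.count_nonzero(edges > 200) / float(gray.size)
    # The embodiment normalizes this ratio to a definition value; clipping
    # against an assumed maximum edge ratio is used here as a placeholder.
    return min(ratio / 0.1, 1.0)
```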
Further, in another embodiment, the performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image includes:
s13: performing second preprocessing on the fundus images to unify at least the resolutions of different fundus images as second resolutions;
s14: performing color space conversion on the fundus image after the second pretreatment to form a second gray scale image;
s15: analyzing and determining color component ranges of different colors in the second gray level image;
S16: calculating and determining the number of pixels under each color component and the number of pixels of the fundus image;
S17: the chromaticity distribution of the fundus image is calculated based on the number of pixels per color component.
Wherein the analyzing determines a range of color components for different colors in the second gray scale image, comprising:
S18: the analysis determines a range of color components of red, orange, yellow, green, blue in the second gray scale image.
For example, as an alternative, this embodiment may form an evaluation scheme based on imaging color, i.e., chromaticity distribution, by calculating the color components of the color distribution within the HSV color channels of the fundus image. Visual observation shows that fundus pictures of patients and of healthy people differ obviously at the level of spectral color, so a patient's fundus picture can be clearly distinguished from a normal one by observation; distinguishing different fundus images by their color, i.e., quantifying the chromaticity-distribution dimension, is therefore feasible. Normally, the chromaticity distribution of a normal fundus photo is overall slightly orange, followed by the red component of red blood vessels, while the color distribution of a patient's fundus picture is overall yellowish, followed by orange and green. Analysis determined that, for the fundus photograph of a cataract patient, there are 5 true component colors, i.e., color components having reference value: orange, red, yellow, green and blue. The specific flow for calculating the chromaticity distribution is as follows:
1. cutting off the black peripheral background from the whole fundus picture with the black background to obtain a fundus image to be analyzed, and unifying the resolution of the fundus picture to be 1655x1655 size;
2. The fundus image with uniform resolution is subjected to color space conversion, and is converted into HSV space from RGB space, wherein H represents hue, S represents saturation, and V represents brightness (namely gray-scale picture).
3. According to the corresponding ranges of the 5 colors, the color component values of each color in the corresponding fundus image are counted, and the HSV ranges of the 5 color components are as follows:
red: consists of two ranges, one upper and lower boundary being [0, 43, 46] to [10, 255, 255], and the other upper and lower boundary being [156, 43, 46] to [180, 255, 255].
The upper and lower boundaries of orange are [11, 43, 46] to [20, 255, 255].
The upper and lower yellow borders are [20, 43, 46] to [30, 255, 255].
The green upper and lower boundaries are [35, 43, 46] to [85, 255, 255].
The blue upper and lower boundaries are [100, 43, 46] to [150, 255, 255].
4. The number of pixels of each color component is counted; this sum is the color component of the corresponding color of the fundus picture. Dividing the value by the total number of pixels of the whole fundus picture gives the proportion of that color in the whole picture, i.e., the chromaticity distribution value. Finally, the chromaticity distribution histograms of the 5 colors are compiled; a sketch of this counting flow follows.
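A minimal OpenCV sketch of this counting step, using the HSV boundaries listed above (red consists of its two listed sub-ranges); the function name and dictionary layout are illustrative:

```python
import cv2
import numpy as np

HSV_RANGES = {
    "red":    [((0, 43, 46), (10, 255, 255)), ((156, 43, 46), (180, 255, 255))],
    "orange": [((11, 43, 46), (20, 255, 255))],
    "yellow": [((20, 43, 46), (30, 255, 255))],
    "green":  [((35, 43, 46), (85, 255, 255))],
    "blue":   [((100, 43, 46), (150, 255, 255))],
}

def chromaticity_distribution(fundus_bgr: np.ndarray) -> dict:
    img = cv2.resize(fundus_bgr, (1655, 1655))   # step 1: unify resolution
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)   # step 2: RGB -> HSV
    total = float(img.shape[0] * img.shape[1])
    dist = {}
    for color, ranges in HSV_RANGES.items():     # steps 3-4: count per color
        pixels = sum(np.count_nonzero(cv2.inRange(hsv, np.array(lo), np.array(hi)))
                     for lo, hi in ranges)
        dist[color] = pixels / total             # chromaticity distribution value
    return dist
```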
5. In practical application, a color distribution attribute score can further be calculated via cosine similarity. For example, a group of high-quality normal fundus pictures (for example, but not limited to, 20) is selected from all test pictures, the chromaticity distribution values of the 5 colors are calculated for these high-quality pictures, and a standard 5-dimensional normal-fundus chromaticity distribution attribute template is obtained by average weighting and recorded as a vector a. The chromaticity distribution data of any other fundus picture is calculated in the same way to obtain a 5-dimensional vector b. The chromaticity distribution score of that fundus picture can then be approximated by its cosine similarity to the template vector, whose value lies in [-1, 1]: the higher the value, the closer the picture is to the normal template; the lower the value, the larger the deviation of the image color from the normal range (i.e., the larger the cataract-induced deviation of the spectral composition of the light entering the eye, and the relatively more severe the cataract). The cosine similarity score of a picture is calculated as:

score(a, b) = (a · b) / (‖a‖ ‖b‖)
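A one-function sketch of this score, where a is the 5-dimensional normal-fundus template vector and b is the picture's chromaticity vector:

```python
import numpy as np

def chroma_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between template vector a and picture vector b, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```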
In an embodiment, to facilitate the analysis processing of the fundus image, the method further comprises:
S19: training an initial model based on the fundus image and the corresponding definition and chromaticity distribution as training data to obtain a fundus image analysis model for analyzing and calculating the definition and chromaticity distribution of the fundus image;
the first analysis processing is performed on the fundus image to calculate and determine the sharpness of the fundus image, including:
S20: performing first analysis processing on the fundus image based on the fundus image analysis model to calculate and determine the definition of the fundus image;
the performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image includes:
S21: and performing second analysis processing on the fundus image based on the fundus image analysis model to calculate and determine chromaticity distribution of the fundus image.
For example, the following may be used in training the fundus image analysis model:
1. The input data are RGB color fundus pictures, and the training input size is 640x640. Meanwhile, in order to adapt to the input of fundus pictures from multiple scenes and achieve a more accurate analysis effect, a fundus detection module is added to the network model, so that the subsequent regression of definition and chromaticity analysis focuses on the current fundus picture. A total of 400 pieces of training data were produced and divided into a training set and a verification set at a ratio of 9:1 (this ratio is of course not unique). The definition and chromaticity distribution values of the training labels are obtained in the manner described above, and the position coordinates of the fundus are obtained through manual labeling.
2. The network architecture in this embodiment is as follows. The input picture module is connected to a CNN-based picture feature extraction layer. Because of the detection and multi-attribute regression tasks, the overall feature extraction of the CNN backbone is divided into 3 feature layers, which extract the corresponding low-order, middle-order and high-order features respectively. The lower the level, the more concrete the features; the higher the level, the more abstract their expression. The three levels of features are therefore used together to better express the properties of fundus images in the definition and chromaticity-distribution dimensions.
Further, a feature cascade layer is connected behind the feature extraction module; the 3 extracted feature layers are concatenated in low, middle, high order, and a new feature layer is formed after the cascade operation. An output layer is connected behind the cascade module. Its output is a two-dimensional 12x8400 matrix: the 12 entries of the first dimension respectively represent the fundus detection confidence (1), the fundus detection coordinates (4), the fundus definition regression (1), the fundus chromaticity distribution score regression (1), and the regressions of the 5 corresponding color components (5); the 8400 of the second dimension is the number of detection candidate targets. A sketch of decoding this output follows.
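As an illustration only (the patent provides no code), the sketch below decodes such a 12x8400 output matrix into the per-candidate attributes listed above; the function name and the choice of the highest-confidence candidate are assumptions:

```python
import numpy as np

def decode_output(output: np.ndarray) -> dict:
    # output: the 12x8400 matrix described above; pick the candidate with the
    # highest fundus-detection confidence (selection rule assumed).
    assert output.shape == (12, 8400)
    best = int(np.argmax(output[0]))
    return {
        "confidence": output[0, best],           # fundus detection confidence (1)
        "box": output[1:5, best],                # fundus detection coordinates (4)
        "definition": output[5, best],           # definition regression (1)
        "chroma_score": output[6, best],         # chromaticity distribution score (1)
        "color_components": output[7:12, best],  # 5 color-component regressions
    }
```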
Because cataract has a long course and lens opacity changes slowly from light to heavy, it is difficult to obtain both a clear fundus image of an individual before the disease and a blurred fundus image of the same individual in the diseased state. For this reason, in the subsequent stage of matching the fundus image with the target scene picture, an unsupervised learning scheme is adopted: based on the CycleGAN model, an image generation model is trained that blurs a clear image so as to simulate the visual quality of a cataract patient. The CATARACTS dataset, a medical image analysis dataset in the field of cataract diagnosis and classification, is adopted as training data; it contains 300 fundus images of ordinary people and 100 fundus images of cataract patients. (The training data are flexible; the dataset adopted in this example is only one of the possible datasets, and other datasets with similar properties can also be used.)
Specifically, in training, the method includes:
S22: constructing an initial model based on the CycleGAN model;
S23: obtaining fundus gray maps of ordinary people and fundus gray maps of different cataract patients as training data;
S24: training the initial model based on the training data to obtain a candidate model, wherein the candidate model learns the fundus gray map style of cataract patients and can transfer that style to the fundus gray map of an ordinary person so as to form a target fundus gray map with the fundus gray map style of a cataract patient;
S25: comparing the target fundus gray map with the fundus gray map of the corresponding cataract patient, and updating the weight parameters of the candidate model based on the comparison result to obtain a target model with higher processing precision than the candidate model.
In addition, as shown in fig. 2, the blurred fundus gray map can be processed back into a clear fundus image, which is compared with the fundus gray map of the corresponding ordinary person to determine the degree of restoration; the weight parameters of the candidate model can be adjusted based on that degree, so that the resulting target model has better processing precision and a better learning effect. A compressed sketch of this training scheme follows.
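As an illustration only, the compressed PyTorch sketch below captures the training idea described above: a generator G transfers the ordinary-person gray map into the cataract style, a second generator F restores it, and the cycle (restoration) loss plays the role of the degree-of-restoration comparison. The network bodies are tiny stand-ins for the real CycleGAN architecture, the discriminator update on real cataract gray maps is omitted, and all names and hyperparameters are assumptions:

```python
import torch
import torch.nn as nn

def tiny_generator() -> nn.Module:
    # Stand-in for the full CycleGAN ResNet generator; 1-channel gray in/out.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
    )

def tiny_discriminator() -> nn.Module:
    # Stand-in for the CycleGAN PatchGAN discriminator.
    return nn.Sequential(
        nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 4, stride=2, padding=1),
    )

G = tiny_generator()        # ordinary-person gray map -> cataract style
F = tiny_generator()        # cataract style -> restored clear gray map
D = tiny_discriminator()    # judges whether a gray map looks cataract-style
opt_g = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()

def generator_step(normal_gray: torch.Tensor) -> float:
    # normal_gray: batch of ordinary-person fundus gray maps, shape (N, 1, H, W).
    fake_cataract = G(normal_gray)   # target fundus gray map in the patient style
    restored = F(fake_cataract)      # clear gray map restored from the blurred one
    pred = D(fake_cataract)
    # Adversarial term plus cycle term; the cycle term plays the role of the
    # degree-of-restoration comparison described above (weight 10 is assumed).
    loss = adv_loss(pred, torch.ones_like(pred)) + 10.0 * cyc_loss(restored, normal_gray)
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()                     # D's parameters are not updated in this step
    return loss.item()
```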
Further, the performing style migration processing on the target scene picture based on the fundus image to form a blurred version target scene picture includes:
S26: performing color space conversion on the fundus image to form a third gray scale image;
S27: processing the target scene picture to form single-channel images, wherein the single-channel images comprise a red channel image, a green channel image and a blue channel image;
S28: performing color space conversion on the single-channel images to form single-channel gray scale images;
S29: inputting the single-channel gray scale images and the third gray scale image into the target model to obtain blurred single-channel gray scale images matched with the style of the third gray scale image;
S30: merging the blurred single-channel gray scale images to form the blurred version target scene picture.
Specifically, as shown in fig. 3, since the color distribution of a human fundus photo is mainly red and yellow and cannot reflect the full range of colors seen by the human eye, fundus gray-scale images are used for model training to obtain an image generation model capable of generating a blurred version of the target scene picture. When using the model, the R, G and B channel images of the target scene picture, i.e., the single-channel images, are blurred separately by the target model trained on gray-scale images, and the blurred R, G and B single-channel images are then merged to obtain a color blurred version of the target scene picture (see the sketch below), thereby realizing the first personalized fitting of the visual quality of cataract patients of different severities. The target scene picture described in this embodiment is not particularly limited and may be a photo taken in any environment.
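A minimal sketch of this per-channel flow, assuming `target_model` wraps the trained CycleGAN generator and maps one single-channel image (as a NumPy array) to its style-matched blurred counterpart:

```python
import cv2
import numpy as np

def blur_target_scene(scene_bgr: np.ndarray, target_model) -> np.ndarray:
    # Split the color target scene picture into its three single-channel images.
    b, g, r = cv2.split(scene_bgr)
    # Blur each single-channel (gray-scale) image with the gray-map-trained
    # target model, matching the style of the patient's fundus gray map.
    blurred = [target_model(ch) for ch in (b, g, r)]
    # Merge the blurred single channels back into a color blurred version.
    return cv2.merge(blurred)
```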
Further, the processing based on the sharpness, chromaticity distribution, and blurred version scene image to form a simulated image for simulating an imaging effect of the patient looking at a target scene includes:
S31: and adjusting the definition and chromaticity of the blurred scene picture based on the definition and chromaticity distribution, so as to form a simulation picture for simulating the imaging effect of the patient looking at the target scene.
That is, through the above steps, the blurred version target scene picture matched with the patient can be given a second personalized fitting through the definition and chromaticity distribution of the corresponding patient's fundus image, finally generating the simulation picture; one possible form of this adjustment is sketched below.
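The embodiment does not spell out the concrete adjustment rule, so the sketch below is only one assumed realization: the measured definition controls an extra blur strength, and the chromaticity score controls a saturation shift toward the yellowish cast described above.

```python
import cv2
import numpy as np

def second_fit(blurred_bgr: np.ndarray, definition: float, chroma_score: float) -> np.ndarray:
    # Lower definition (assumed in [0, 1]) -> stronger residual Gaussian blur.
    k = 2 * max(1, int(round((1.0 - definition) * 5))) + 1
    out = cv2.GaussianBlur(blurred_bgr, (k, k), 0)
    # Lower chromaticity score (cosine similarity in [-1, 1]) -> stronger
    # desaturation; the linear rule here is an assumption.
    hsv = cv2.cvtColor(out, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * (0.5 + 0.5 * max(0.0, chroma_score)), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```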
Based on the above embodiments, as shown in fig. 4, the method of the present application, given a scene, takes as inputs to the system (model) the patient's fundus photo and the scene photo, and then obtains a picture fitted to the patient's visual quality, whose content is the imaging effect of the scene at the simulated viewing angle of the patient. To achieve this function, the system of this embodiment has several models: a fundus image analysis model for analyzing fundus images to obtain the corresponding definition and chromaticity distribution; a target model for style migration, i.e., personalization, of the scene picture so that it at least matches the imaging style, or the blurriness, of the patient's fundus image; and an image generation model for further personalizing the output of the target model according to the obtained definition and chromaticity distribution to fit the actual condition of the patient's eye, further adjusting the definition and color of that output and finally obtaining a simulation picture matching the patient's actual imaging effect. The analysis model and the target model can be nested in the image generation model to improve the accuracy of the image generation model.
In summary, the embodiment of the present application supports splitting the influence of cataract on visual quality into quantitative analyses of image edge sharpness and chromaticity by means of an image processing strategy, so that the imaging quality of a cataract eye is reflected indirectly through fundus image definition. By processing the three RGB color channels separately on the basis of the fundus photo's gray-scale map, the technical difficulty that a fundus photo is single in color and cannot reflect the full color space and color distribution is avoided, realizing simulation of color perception within the patient's visual quality. In addition, quantitative fundus picture analysis extracts each patient's individual fundus picture definition characteristics, forming texture definition and color distribution data; these objective, quantitative image analysis data replace the current manual scoring data, which are strongly affected by subjective factors and have lower precision, and serve as labeling data for refining and training the image generation model, which can significantly improve model learning and processing precision and make the generated simulation picture fit the patient's actual imaging better.
As shown in fig. 5, another embodiment of the present invention also provides a fundus image processing apparatus 100 including:
a first obtaining module for obtaining a fundus image including a fundus image of a cataract patient;
A first processing module for performing a first analysis process on the fundus image to calculate and determine a sharpness of the fundus image;
A second processing module for performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image;
the second obtaining module is used for obtaining a clear target scene picture;
The third processing module is used for carrying out style migration processing on the target scene picture according to the fundus image to form a blurred version target scene picture;
And the fourth processing module is used for processing according to the definition, the chromaticity distribution and the blurred scene picture to form a simulation picture for simulating the imaging effect of the patient looking at the target scene, wherein the simulation picture is a color picture.
In some embodiments, the performing a first analysis process on the fundus image to computationally determine the sharpness of the fundus image includes:
performing first preprocessing on the fundus images to at least unify the resolutions of different fundus images to a first resolution;
performing color space conversion on the fundus image after the first preprocessing to form a first gray image;
intercepting a region containing blood vessel and optic disc textures in the first gray image to form a first target image;
performing Gaussian blur processing on the first target image to form a second target image;
performing edge detection on the second target image, and counting the number of pixels with pixel values meeting the requirement based on a detection result;
And calculating the definition of each fundus image based on the pixel number and the total pixel number of the fundus image.
In some embodiments, the performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image includes:
performing second preprocessing on the fundus images to at least unify the resolutions of different fundus images to a second resolution;
performing color space conversion on the fundus image after the second preprocessing to form a second gray scale image;
analyzing and determining color component ranges of different colors in the second gray level image;
Calculating and determining the number of pixels under each color component and the number of pixels of the fundus image;
the chromaticity distribution of the fundus image is calculated based on the number of pixels per color component.
In some embodiments, the analyzing determines a range of color components for different colors in the second gray scale image, comprising:
The analysis determines a range of color components of red, orange, yellow, green, blue in the second gray scale image.
In some embodiments, the apparatus further comprises:
The first training module is used for training an initial model according to the fundus image and the corresponding definition and chromaticity distribution as training data so as to obtain a fundus image analysis model for analyzing and calculating the definition and chromaticity distribution of the fundus image;
the first analysis processing is performed on the fundus image to calculate and determine the sharpness of the fundus image, including:
performing first analysis processing on the fundus image based on the fundus image analysis model to calculate and determine the definition of the fundus image;
the performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image includes:
And performing second analysis processing on the fundus image based on the fundus image analysis model to calculate and determine chromaticity distribution of the fundus image.
In some embodiments, the apparatus further comprises:
the construction module is used for constructing an initial model based on the CycleGAN model;
The second obtaining module is used for obtaining fundus gray maps of ordinary people and fundus gray maps of different cataract patients as training data;
The second training module is used for training the initial model based on the training data to obtain a candidate model, wherein the candidate model is used for learning the fundus gray map style of the cataract patient and can transfer the fundus gray map style to the fundus gray map of an ordinary person so as to process and form a target fundus gray map with the fundus gray map style of the cataract patient;
And the updating module is used for comparing the target fundus gray map with the fundus gray map of the corresponding cataract patient, and updating the weight parameters in the candidate model based on the comparison result to obtain a target model with higher processing precision than the candidate model.
In some embodiments, the performing style migration processing on the target scene picture based on the fundus image to form a blurred version target scene picture includes:
performing color space conversion on the fundus image to form a third gray scale image;
processing the target scene picture to form a single-channel image, wherein the single-channel image comprises a red channel image, a green channel image and a blue channel image;
Performing color space conversion on the single-channel image to form a single-channel gray scale image;
inputting the single-channel gray level image and the third gray level image into a target model to obtain a blurred single-channel gray level image which is matched with the style of the third gray level image;
and merging the blurred single-channel gray level images to form a blurred target scene image.
In some embodiments, the processing based on the sharpness, chromaticity distribution, and blurred version scene pictures to form a simulated picture for simulating a real imaging result of the patient looking at a target scene includes:
And adjusting the definition and chromaticity of the blurred scene picture based on the definition and chromaticity distribution, so as to form a simulation picture for simulating a real imaging result when the patient looks at a target scene.
Another embodiment of the present invention also provides an electronic device, including:
at least one processor; and
A memory communicatively coupled to the at least one processor;
Wherein the memory stores instructions executable by the at least one processor, the instructions being configured to perform the fundus image processing method as described in any of the embodiments above.
Another embodiment of the present invention also provides a storage medium including a stored program, wherein the program, when run, controls an apparatus including the storage medium to execute the fundus image processing method according to any one of the embodiments described above.
Embodiments of the present invention also provide a computer program product tangibly stored on a computer-readable medium and comprising computer-readable instructions that, when executed, cause at least one processor to perform a fundus image processing method such as in the embodiments described above. It should be understood that each solution in this embodiment has a corresponding technical effect in the foregoing method embodiment, which is not described herein.
The computer storage medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage media element, a magnetic storage media element, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, antenna, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of protection of the application is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the application, the steps may be implemented in any order and there are many other variations of the different aspects of one or more embodiments of the application as described above, which are not provided in detail for the sake of brevity.
The above embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, the scope of which is defined by the claims. Various modifications and equivalent arrangements of this invention will occur to those skilled in the art, and are intended to be within the spirit and scope of the invention.
Claims (8)
1. A fundus image processing method, comprising:
Obtaining a fundus image including a fundus image of a cataract patient;
performing a first analysis process on the fundus image to computationally determine a sharpness of the fundus image;
Performing a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image;
Obtaining a clear target scene picture;
Performing style migration processing on the target scene picture based on the fundus image to form a blurred version target scene picture;
Processing the blurred scene picture based on the definition and chromaticity distribution to form a simulation picture for simulating an imaging effect when the patient looks at a target scene, wherein the simulation picture is a color picture;
The method further comprises the steps of:
constructing an initial model based on the CycleGAN model;
obtaining fundus gray maps of ordinary people and fundus gray maps of different cataract patients as training data;
Training the initial model based on the training data to obtain a candidate model, wherein the candidate model is used for learning the fundus gray map style of the cataract patient and can transfer the fundus gray map style of the cataract patient into the fundus gray map of an ordinary person so as to process and form a target fundus gray map with the fundus gray map style of the cataract patient;
comparing the target fundus gray map with the fundus gray map of the corresponding cataract patient, and updating weight parameters in the candidate model based on the comparison result to obtain a target model with higher processing precision than the candidate model;
Performing style migration processing on the target scene picture based on the fundus image to form a blurred version target scene picture, including:
performing color space conversion on the fundus image to form a third gray scale image;
processing the target scene picture to form a single-channel image, wherein the single-channel image comprises a red channel image, a green channel image and a blue channel image;
Performing color space conversion on the single-channel image to form a single-channel gray scale image;
inputting the single-channel gray level image and the third gray level image into a target model to obtain a blurred single-channel gray level image which is matched with the style of the third gray level image;
and merging the blurred single-channel gray level images to form a blurred target scene image.
2. The fundus image processing method according to claim 1, wherein said performing a first analysis process on the fundus image to computationally determine sharpness of the fundus image comprises:
performing first preprocessing on the fundus images to at least unify the resolutions of different fundus images to a first resolution;
performing color space conversion on the fundus image after the first preprocessing to form a first gray image;
intercepting a region containing blood vessel and optic disc textures in the first gray image to form a first target image;
performing Gaussian blur processing on the first target image to form a second target image;
performing edge detection on the second target image, and counting the number of pixels with pixel values meeting the requirement based on a detection result;
And calculating the definition of each fundus image based on the pixel number and the total pixel number of the fundus image.
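One way the sharpness computation of claim 2 could look, assuming OpenCV; the working resolution, the region of interest, and the Canny thresholds are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np

def fundus_sharpness(fundus_bgr, roi, size=(512, 512), edge_thresh=50):
    """Sketch of claim 2: blur, edge-detect, then count qualifying pixels."""
    # First preprocessing: unify the resolution (the "first resolution").
    img = cv2.resize(fundus_bgr, size)

    # Color space conversion -> first grayscale image.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Crop the region containing blood vessels and optic-disc textures;
    # roi = (x, y, w, h) would come from an upstream detector.
    x, y, w, h = roi
    first_target = gray[y:y + h, x:x + w]

    # Gaussian blur -> second target image.
    second_target = cv2.GaussianBlur(first_target, (5, 5), 0)

    # Edge detection; count pixels whose values meet the requirement
    # (here: non-zero Canny responses).
    edges = cv2.Canny(second_target, edge_thresh, 2 * edge_thresh)
    edge_pixels = int(np.count_nonzero(edges))

    # Sharpness from the qualifying-pixel count and the image's total pixels.
    return edge_pixels / float(gray.size)
```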
3. The fundus image processing method according to claim 1, wherein performing the second analysis process on the fundus image to computationally determine the chromaticity distribution of the fundus image comprises:
performing second preprocessing on the fundus images to at least unify the resolutions of different fundus images to a second resolution;
performing color space conversion on the fundus image after the second preprocessing to form a second grayscale image;
analyzing and determining color component ranges of different colors in the second grayscale image;
calculating and determining the number of pixels under each color component and the total number of pixels of the fundus image; and
calculating the chromaticity distribution of the fundus image based on the number of pixels under each color component.
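A sketch of the chromaticity-distribution computation of claim 3. The claim determines per-color component ranges in the converted image; reading those ranges as HSV hue intervals is an assumption here, and the interval bounds are illustrative.

```python
import cv2
import numpy as np

# Illustrative hue intervals (OpenCV hue spans 0-179); the actual
# component ranges in the claim are determined by analysis.
HUE_RANGES = {
    "red":    [(0, 10), (170, 179)],
    "orange": [(11, 25)],
    "yellow": [(26, 34)],
    "green":  [(35, 85)],
    "blue":   [(100, 130)],
}

def chromaticity_distribution(fundus_bgr, size=(512, 512)):
    """Sketch of claim 3: count pixels per color component."""
    img = cv2.resize(fundus_bgr, size)          # second preprocessing
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # color space conversion
    hue = hsv[:, :, 0]
    total = hue.size                            # total pixels of the image

    dist = {}
    for color, ranges in HUE_RANGES.items():
        count = sum(int(np.count_nonzero((hue >= lo) & (hue <= hi)))
                    for lo, hi in ranges)
        dist[color] = count / total             # share of each component
    return dist
```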
4. The fundus image processing method according to claim 3, wherein said analyzing and determining color component ranges of different colors in the second grayscale image comprises:
analyzing and determining color component ranges of red, orange, yellow, green and blue in the second grayscale image.
5. The fundus image processing method according to claim 1, wherein the method further comprises:
training an initial model using the fundus images and the corresponding sharpness and chromaticity distributions as training data, to obtain a fundus image analysis model for analyzing and calculating the sharpness and chromaticity distribution of a fundus image;
wherein performing the first analysis process on the fundus image to computationally determine the sharpness of the fundus image comprises:
performing the first analysis process on the fundus image based on the fundus image analysis model to computationally determine the sharpness of the fundus image; and
wherein performing the second analysis process on the fundus image to computationally determine the chromaticity distribution of the fundus image comprises:
performing the second analysis process on the fundus image based on the fundus image analysis model to computationally determine the chromaticity distribution of the fundus image.
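Claim 5 replaces the hand-crafted analyses with a learned model. A minimal sketch of such a regressor in PyTorch, predicting one sharpness value and a five-bin chromaticity distribution; the architecture and loss are assumptions, since the patent does not fix them.

```python
import torch
import torch.nn as nn

class FundusAnalysisNet(nn.Module):
    """Illustrative fundus image analysis model for claim 5: predicts a
    sharpness value and a 5-bin chromaticity distribution per image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1 + 5)  # sharpness + 5 color ratios

    def forward(self, x):
        out = self.head(self.features(x))
        sharpness = torch.sigmoid(out[:, :1])      # constrained to [0, 1]
        chroma = torch.softmax(out[:, 1:], dim=1)  # distribution sums to 1
        return sharpness, chroma

def train_step(model, optimizer, images, sharp_t, chroma_t):
    """One supervised step using the sharpness/chromaticity labels that
    the first and second analysis processes produced as training data."""
    sharp_p, chroma_p = model(images)
    loss = nn.functional.mse_loss(sharp_p, sharp_t) + \
           nn.functional.mse_loss(chroma_p, chroma_t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```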
6. The fundus image processing method according to claim 1, wherein processing the blurred version of the target scene picture based on the sharpness and the chromaticity distribution to form the simulation picture for simulating the imaging effect when the patient looks at the target scene comprises:
adjusting the sharpness and chromaticity of the blurred version of the target scene picture based on the sharpness and the chromaticity distribution, so as to form the simulation picture for simulating the imaging effect when the patient looks at the target scene.
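A sketch of the adjustment in claim 6, assuming the sharpness lies in [0, 1] and the chromaticity distribution is the per-color dictionary from the claim-3 sketch; the blur-strength mapping and the yellow-shift weights are illustrative heuristics, not part of the claimed method.

```python
import cv2
import numpy as np

def simulate_view(blurred_bgr, sharpness, chroma_dist):
    """Sketch of claim 6: modulate the blurred scene picture with the
    measured sharpness and chromaticity distribution."""
    # Lower sharpness -> stronger residual blur (odd kernel size >= 1).
    k = max(1, int(round((1.0 - sharpness) * 10))) * 2 + 1
    out = cv2.GaussianBlur(blurred_bgr, (k, k), 0)

    # Cataracts typically shift perception toward yellow; scale the
    # B/G/R channels with weights derived from the measured distribution.
    yellow = chroma_dist.get("yellow", 0.0) + chroma_dist.get("orange", 0.0)
    weights = np.array([1.0 - 0.5 * yellow,   # suppress blue
                        1.0,                  # keep green
                        1.0 + 0.2 * yellow])  # boost red
    out = np.clip(out.astype(np.float32) * weights, 0, 255).astype(np.uint8)
    return out  # color simulation picture
```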
7. An image data processing apparatus, comprising:
a first obtaining module, configured to obtain a fundus image, the fundus image comprising a fundus image of a cataract patient;
a first processing module, configured to perform a first analysis process on the fundus image to computationally determine a sharpness of the fundus image;
a second processing module, configured to perform a second analysis process on the fundus image to computationally determine a chromaticity distribution of the fundus image;
a second obtaining module, configured to obtain a clear target scene picture;
a third processing module, configured to perform style migration processing on the target scene picture based on the fundus image to form a blurred version of the target scene picture; and
a fourth processing module, configured to process the blurred version of the target scene picture based on the sharpness and the chromaticity distribution to form a simulation picture for simulating the imaging effect when the patient looks at the target scene, wherein the simulation picture is a color picture;
wherein the apparatus further comprises:
a construction module, configured to construct an initial model based on a CycleGAN model;
the second obtaining module being further configured to obtain fundus grayscale images of ordinary persons and fundus grayscale images of different cataract patients as training data;
a second training module, configured to train the initial model based on the training data to obtain a candidate model, wherein the candidate model learns the fundus grayscale-image style of a cataract patient and can transfer that style onto the fundus grayscale image of an ordinary person, so as to form a target fundus grayscale image having the fundus grayscale-image style of the cataract patient; and
an updating module, configured to compare the target fundus grayscale image with the fundus grayscale image of the corresponding cataract patient, and to update weight parameters in the candidate model based on the comparison result to obtain a target model with higher processing precision than the candidate model;
and wherein performing the style migration processing on the target scene picture based on the fundus image to form the blurred version of the target scene picture comprises:
performing color space conversion on the fundus image to form a third grayscale image;
processing the target scene picture to form single-channel images, the single-channel images comprising a red-channel image, a green-channel image and a blue-channel image;
performing color space conversion on each single-channel image to form a single-channel grayscale image;
inputting each single-channel grayscale image and the third grayscale image into the target model to obtain a blurred single-channel grayscale image whose style matches that of the third grayscale image; and
merging the blurred single-channel grayscale images to form the blurred version of the target scene picture.
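The candidate-model update recited in claims 1 and 7 could be sketched as below. Using a paired L1 comparison is a simplifying assumption; full CycleGAN training also involves adversarial and cycle-consistency losses.

```python
import torch.nn as nn

def update_candidate(generator, optimizer, normal_gray, cataract_gray):
    """One update step (sketch): transfer the cataract style onto an
    ordinary person's fundus grayscale image, compare the result with the
    corresponding cataract patient's image, and update the weights."""
    target_gray = generator(normal_gray)  # target fundus grayscale image
    loss = nn.functional.l1_loss(target_gray, cataract_gray)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```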
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the instructions being configured to cause the at least one processor to perform the fundus image processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202410586406.6A (CN118196218B) | 2024-05-13 | 2024-05-13 | Fundus image processing method, device and equipment
Publications (2)
Publication Number | Publication Date
---|---
CN118196218A (en) | 2024-06-14
CN118196218B (en) | 2024-09-17
Family
ID=91401902
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202410586406.6A (CN118196218B, Active) | Fundus image processing method, device and equipment | 2024-05-13 | 2024-05-13
Country Status (1)
Country | Link
---|---
CN (1) | CN118196218B (en)
Families Citing this family (1)
Publication Number | Priority Date | Publication Date | Assignee | Title
---|---|---|---|---
CN119251229B * | 2024-12-05 | 2025-03-21 | Yan'an University | Fundus image evaluation method based on fusion of digital twin and optic nerve imaging
Citations (2)
Publication Number | Priority Date | Publication Date | Assignee | Title
---|---|---|---|---
CN116563398A * | 2023-05-15 | 2023-08-08 | Beijing Institute of Petrochemical Technology | Low-quality fundus color photograph generation method and device
CN117372284A * | 2023-12-04 | 2024-01-09 | Jiangsu Fuhan Medical Industry Development Co., Ltd. | Fundus image processing method and fundus image processing system
Family Cites Families (2)
Publication Number | Priority Date | Publication Date | Assignee | Title
---|---|---|---|---
CN109544540B * | 2018-11-28 | 2020-12-25 | Northeastern University | Diabetic retina image quality detection method based on image analysis technology
EP3956813A4 * | 2019-04-18 | 2022-11-16 | Tracery Ophthalmics Inc. | Detection, prediction and classification of eye diseases
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant