
CN115131592B - Fundus image classification reading system and reading method - Google Patents


Info

Publication number
CN115131592B
Authority
CN
China
Prior art keywords: image; result; classification; quality control; classification result
Prior art date
Legal status
Active
Application number
CN202110316350.9A
Other languages
Chinese (zh)
Other versions
CN115131592A (en)
Inventor
郭宁
胡志钢
童志鹏
段晓明
张诗华
连倩
Current Assignee
Shenzhen Sibionics Intelligent Technology Co Ltd
Original Assignee
Shenzhen Sibionics Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sibionics Intelligent Technology Co Ltd filed Critical Shenzhen Sibionics Intelligent Technology Co Ltd
Priority to CN202411917370.1A (published as CN119723209A)
Priority to CN202110316350.9A (published as CN115131592B)
Priority to CN202411917369.9A (published as CN119723208A)
Publication of CN115131592A
Application granted
Publication of CN115131592B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure describes a film reading system for fundus image classification. The film reading system comprises: an acquisition module for acquiring a fundus image; a preprocessing module for preprocessing the fundus image; a first classification module for classifying the preprocessed fundus image using a first classification model based on deep learning to obtain a first classification result and a classification result type; a grouping module for dividing the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified; a first quality control module for obtaining a final classification result and an image to be arbitrated using quality control models configured with a preset negative predictive rate and a preset positive predictive rate; a second classification module for classifying the image to be reclassified using a second classification model to obtain a final classification result and an image to be arbitrated; and an arbitration module for arbitrating the image to be arbitrated to obtain an arbitration classification result. According to the present disclosure, a film reading system and a film reading method for fundus image classification capable of improving classification accuracy are provided.

Description

Fundus image classification film reading system and fundus image classification film reading method
Technical Field
The present disclosure relates generally to a fundus image classification film reading system and a fundus image classification film reading method.
Background
Medical images often contain numerous details of body structures or tissues. In modern hospitals, much of the information used in treatment originates from medical images, such as fundus images. In the clinic, understanding these details in medical images can assist doctors in identifying relevant diseases, and medical images have become a primary means of clinically identifying diseases. However, conventional identification of disease information based on medical images relies mainly on the experience of professional physicians. Under such circumstances, developing film reading techniques capable of assisting doctors with automatic film reading for the identification of related diseases has become a popular direction in the field of medical imaging. With the development of artificial intelligence, film reading techniques based on computer vision and machine learning have been developed and applied to medical image recognition.
For example, patent document 1 (CN105513077A) discloses a system for diabetic retinopathy screening, which comprises: a fundus image acquisition apparatus for acquiring or receiving fundus images of a person to be inspected; an image processing and screening device for processing the fundus image, detecting whether a lesion exists in it, and then transmitting the detection result to a report output device; and the report output device, which outputs a corresponding detection report based on the detection result.
However, in practical clinical application, owing to the diversity of fundus images, the screening system described in patent document 1 may output erroneous or inaccurate detection reports when processing some fundus images, which reduces the classification accuracy of the screening system.
Disclosure of Invention
The present disclosure has been made in view of the above-described circumstances, and an object thereof is to provide a film reading system and a film reading method for fundus image classification capable of improving classification accuracy.
To this end, a first aspect of the present disclosure provides a film reading system for fundus image classification, comprising: an acquisition module for acquiring a fundus image; a preprocessing module for preprocessing the fundus image to obtain a preprocessed fundus image; a first classification module that receives the preprocessed fundus image, classifies it using a first classification model based on deep learning to obtain a first classification result, and obtains, based on the first classification result, a classification result type indicating whether reclassification is required; a grouping module that divides the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type; a first quality control module comprising a negative quality control module and a positive quality control module, wherein the negative quality control module receives the negative result image and obtains a negative quality control result of the negative result image using a first quality control model configured on the basis of a preset negative predictive rate, and if the negative quality control result is consistent with the first classification result, the negative quality control result is used as a final classification result, otherwise the negative result image is used as a first image to be arbitrated; and the positive quality control module receives the positive result image and obtains a positive quality control result of the positive result image using a second quality control model configured on the basis of a preset positive predictive rate, and if the positive quality control result is consistent with the first classification result, the positive quality control result is used as the final classification result, otherwise the positive result image is used as a second image to be arbitrated; a second classification module that receives the image to be reclassified and classifies it using a second classification model that is based on deep learning and trained on images to be reclassified to obtain a second classification result, and if the second classification result is consistent with the first classification result, the second classification result is used as the final classification result, otherwise the image to be reclassified is used as a third image to be arbitrated; and an arbitration module that receives the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated as the image to be arbitrated, and arbitrates the image to be arbitrated to obtain an arbitration classification result that is used as the final classification result.
In this case, the fundus image is divided into a negative result image, a positive result image, and an image to be reclassified. For the negative result image and the positive result image, a lower-risk negative result image is identified based on the preset negative predictive rate, a higher-risk positive result image is identified based on the preset positive predictive rate, and a consistency judgment is performed; the image to be reclassified is further classified using the second classification model; and finally, arbitration is performed on the image to be arbitrated. Therefore, the classification accuracy of the film reading system can be improved.
Further, in the film reading system according to the first aspect of the present disclosure, optionally, the first classification model includes a plurality of sub-classification models, one for each type of diabetic retinopathy, each of which receives the preprocessed fundus image and obtains a sub-classification result, and the first classification module obtains the first classification result based on the plurality of sub-classification results. Thereby, the first classification result can be acquired based on the plurality of sub-classification models.
In addition, in the film reading system according to the first aspect of the present disclosure, optionally, the preset negative predictive rate is 95% to 99%, and the preset positive predictive rate is 95% to 99%. Thus, a preset negative predictive rate and a preset positive predictive rate can be obtained.
Additionally, in the film reading system according to the first aspect of the present disclosure, optionally, the first classification module outputs the first classification result according to the retinopathy grading used in the UK national retinopathy screening program. In this case, the classification accuracy of the film reading system can be further improved based on a retinopathy grading that has already been applied in practice.
Additionally, in the film reading system according to the first aspect of the present disclosure, optionally, the first classification result includes no retinopathy, background period, pre-proliferation period, and proliferation period; the negative result image comprises the preprocessed fundus image whose first classification result is no retinopathy and whose classification result type indicates that no reclassification is required; the positive result image comprises the preprocessed fundus image whose first classification result is the pre-proliferation period or the proliferation period and whose classification result type indicates that no reclassification is required; and the image to be reclassified comprises the preprocessed fundus image whose classification result type indicates that reclassification is required. In this case, dividing the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified facilitates subsequent targeted processing of each image. Therefore, the classification accuracy of the film reading system can be further improved.
In addition, in the film reading system according to the first aspect of the present disclosure, optionally, the film reading system further includes a self-checking module, where the self-checking module is configured to perform a sampling check on fundus images with negative quality control results to determine whether the first confidence threshold meets the requirement, and to perform a sampling check on fundus images with positive quality control results to determine whether the second confidence threshold meets the requirement. In this case, whether the first confidence threshold and the second confidence threshold are satisfactory can be further confirmed. Therefore, the classification accuracy of the film reading system can be improved.
In addition, in the film reading system according to the first aspect of the present disclosure, optionally, the first confidence threshold is configured using gold standard data based on the preset negative predictive rate, and the second confidence threshold is configured using gold standard data based on the preset positive predictive rate. Thus, the confidence thresholds can be determined.
In addition, in the film reading system related to the first aspect of the present disclosure, optionally, the film reading system further includes an output module, where the output module is configured to output a result report. Thereby, the result report can be output.
A second aspect of the present disclosure provides a film reading method of fundus image classification, including: acquiring a fundus image; preprocessing the fundus image to obtain a preprocessed fundus image; classifying the preprocessed fundus image using a first classification model based on deep learning to obtain a first classification result, and obtaining, based on the first classification result, a classification result type indicating whether reclassification is required; dividing the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type; obtaining a negative quality control result of the negative result image using a first quality control model configured with a first confidence threshold based on a preset negative predictive rate, and if the negative quality control result is consistent with the first classification result, taking the negative quality control result as a final classification result, otherwise taking the negative result image as a first image to be arbitrated; obtaining a positive quality control result of the positive result image using a second quality control model configured with a second confidence threshold based on a preset positive predictive rate, and if the positive quality control result is consistent with the first classification result, taking the positive quality control result as the final classification result, otherwise taking the positive result image as a second image to be arbitrated; classifying the image to be reclassified using a second classification model that is based on deep learning and trained on images to be reclassified to obtain a second classification result, and if the second classification result is consistent with the first classification result, taking the second classification result as the final classification result, otherwise taking the image to be reclassified as a third image to be arbitrated; and taking the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated as an image to be arbitrated, and arbitrating the image to be arbitrated to obtain an arbitration classification result that is taken as the final classification result. In this case, the fundus image is divided into a negative result image, a positive result image, and an image to be reclassified; for the negative result image and the positive result image, a lower-risk negative result image is identified based on the preset negative predictive rate, a higher-risk positive result image is identified based on the preset positive predictive rate, and a consistency judgment is performed; the image to be reclassified is further classified using the second classification model; and finally, arbitration is performed on the image to be arbitrated. Thus, the classification accuracy can be improved.
In addition, in the film reading method according to the second aspect of the present disclosure, optionally, a sampling check is performed on fundus images with negative quality control results to determine whether the first confidence threshold meets the requirement, and a sampling check is performed on fundus images with positive quality control results to determine whether the second confidence threshold meets the requirement. In this case, whether the first confidence threshold and the second confidence threshold are satisfactory can be further confirmed. Thus, the classification accuracy can be improved.
According to the present disclosure, a film reading system and a film reading method for fundus image classification capable of improving classification accuracy can be provided.
Drawings
The present disclosure will now be explained in further detail by way of example only with reference to the accompanying drawings, in which:
Fig. 1 is an application scenario diagram showing a film reading method of fundus image classification according to an example of the present disclosure.
Fig. 2 is a block diagram showing a film reading system of fundus image classification according to an example of the present disclosure.
Fig. 3 (a) is a schematic diagram showing a fundus image to which the example of the present disclosure relates.
Fig. 3 (b) is a schematic diagram showing a fundus image to which the example of the present disclosure relates.
Fig. 4 is a schematic diagram illustrating a convolution kernel employed in a convolutional neural network of a first classification module according to an example of the present disclosure.
Fig. 5 is a block diagram showing a film reading system of fundus image classification according to an example of the present disclosure.
Fig. 6 is a flowchart showing a film reading method of fundus image classification according to an example of the present disclosure.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, the same members are denoted by the same reference numerals, and overlapping descriptions are omitted. In addition, the drawings are schematic, and the relative sizes and shapes of the components may differ from the actual ones. It should be noted that the terms "comprises" and "comprising," and any variations thereof, are intended to be non-exclusive in this disclosure: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. All methods described in this disclosure can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
The present disclosure relates to a film reading system 200 and a film reading method of fundus image classification capable of improving classification accuracy. Among them, the film reading system 200 of fundus image classification may be sometimes simply referred to as the film reading system 200, and the film reading method of fundus image classification may be sometimes simply referred to as the film reading method.
Fig. 1 is an application scenario diagram showing a film reading method of fundus image classification according to an example of the present disclosure.
In some examples, the film reading method (described later) may be applied in the application scenario 100 shown in fig. 1. In the application scenario 100, the operator 110 may control the acquisition device 130 connected to the terminal 120 to collect fundus images of the human eye 140. After the acquisition device 130 completes the collection, the terminal 120 may submit the fundus images to the server 150 through a computer network. The server 150 may implement the film reading method by executing computer program instructions stored on it, receiving the fundus images and generating a result report for them through the film reading method, and may return the generated result report to the terminal 120. In some examples, the terminal 120 may display the result report. In other examples, the result report may be stored as an intermediate result in a memory of the terminal 120 or the server 150. In other examples, the fundus image received by the film reading method may be a fundus image stored in the terminal 120 or the server 150.
In some examples, the operator 110 may be a professional, such as an ophthalmologist. In other examples, the operator 110 may be an ordinary person who has received film reading training. The film reading training may include, but is not limited to, operating the acquisition device 130 and operating the terminal 120 as involved in the film reading method. In some examples, terminal 120 may include, but is not limited to, a notebook, tablet, or desktop computer, or the like. In some examples, the acquisition device 130 may be a camera, for example a hand-held fundus camera or a desktop fundus camera. In some examples, acquisition device 130 may be connected to terminal 120 via a serial port. In some examples, acquisition device 130 may be integrated in terminal 120.
In some examples, server 150 may include one or more processors and one or more memories. The processor may include a central processing unit, a graphics processing unit, or any other electronic component capable of processing data and executing computer program instructions. The memory may be used to store the computer program instructions. In some examples, server 150 may implement the film reading method by executing the computer program instructions in the memory. In some examples, server 150 may also be a cloud server.
Hereinafter, the film reading system 200 according to the present disclosure will be described in detail with reference to the accompanying drawings. The present disclosure relates to a film reading system 200 for implementing the film reading method described above. Fig. 2 is a block diagram illustrating a film reading system 200 of fundus image classification according to an example of the present disclosure.
In some examples, as shown in fig. 2, the film reading system 200 may include an acquisition module 210, a preprocessing module 220, a first classification module 230, a grouping module 240, a first quality control module 250, a second classification module 260, and an arbitration module 270. In some examples, the acquisition module 210 may be used to acquire a fundus image; the preprocessing module 220 may be used to preprocess the fundus image to obtain a preprocessed fundus image; the first classification module 230 may be used to classify the preprocessed fundus image and obtain a first classification result and a classification result type; the grouping module 240 may divide the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified; the first quality control module 250 may obtain a final classification result, a first image to be arbitrated, and a second image to be arbitrated based on the negative result image and the positive result image; the second classification module 260 may obtain a final classification result and a third image to be arbitrated based on the image to be reclassified; and the arbitration module 270 may be used to arbitrate the image to be arbitrated to obtain an arbitration classification result that serves as the final classification result. In this case, the fundus image is divided into a negative result image, a positive result image, and an image to be reclassified; for the negative result image and the positive result image, a lower-risk negative result image is identified based on the preset negative predictive rate, a higher-risk positive result image is identified based on the preset positive predictive rate, and a consistency judgment is performed; the image to be reclassified is further classified using the second classification model; and finally, arbitration is performed on the image to be arbitrated. Thus, the classification accuracy of the film reading system 200 can be improved.
Fig. 3 (a) is a schematic diagram showing a fundus image to which the example of the present disclosure relates. Fig. 3 (b) is a schematic diagram showing a fundus image to which the example of the present disclosure relates.
In some examples, the acquisition module 210 may be used to acquire fundus images. In some examples, the fundus image may be a color fundus image. The color fundus image can clearly present rich fundus information such as the optic disc, optic cup, macula, and blood vessels. In addition, the fundus image may be an image in RGB mode, CMYK mode, Lab mode, or grayscale mode, or the like. In some examples, fundus images may be acquired by the acquisition device 130. In other examples, the fundus image may be a fundus image stored in the terminal 120 or the server 150. As examples, fig. 3 (a) and fig. 3 (b) show fundus images of different human eyes 140.
In some examples, the preprocessing module 220 may be used to preprocess the fundus image to obtain a preprocessed fundus image. Specifically, the preprocessing module 220 may acquire the fundus image output by the acquisition module 210, and perform preprocessing on the fundus image to obtain a preprocessed fundus image.
In some examples, the preprocessing module 220 may crop the fundus image. In general, since fundus images acquired by the acquisition module 210 may differ in format or size, they need to be cropped so that they are converted into a fixed standard form. A fixed standard form means that the images share the same format and a uniform size. For example, in some examples, the size of the preprocessed fundus image may be unified to 256×256, 374×374, 512×512, 768×768, or 1024×1024 pixels.
In some examples, the preprocessing module 220 may normalize the fundus image.
In some examples, the normalization process may include coordinate centering, scaling normalization, and the like on the fundus image. This can overcome the differences between fundus images and improve the performance of the first classification model. In addition, in some examples, the preprocessing module 220 may perform noise reduction, graying, and the like on the fundus image. Thus, the features of the fundus image can be highlighted.
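By way of illustration, the preprocessing described above (cropping to the fundus region, resizing to a fixed standard form, and a simple scaling normalization) might be sketched in Python as follows. This is a minimal sketch assuming OpenCV and NumPy are available; the function name, the background threshold of 10, and the 512-pixel target size are illustrative assumptions, not part of the present disclosure.

```python
import cv2
import numpy as np

def preprocess_fundus_image(path, size=512):
    """Crop the fundus region, resize to a fixed standard form, and normalize.

    Illustrative sketch only; threshold and target size are assumptions.
    """
    image = cv2.imread(path)                      # BGR, uint8
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    mask = gray > 10                              # separate fundus from black border
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    cropped = image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    resized = cv2.resize(cropped, (size, size), interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0     # simple scaling normalization
```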
In some examples, the fundus image may also be directly classified subsequently without preprocessing.
In some examples, the first classification module 230 may be configured to classify the preprocessed fundus image and obtain a first classification result and a classification result type. In some examples, the first classification module 230 may also obtain a confidence of the first classification result. In some examples, the first classification module 230 may output the first classification result according to the retinopathy grading used in the UK national retinopathy screening program. In some examples, the first classification result may include at least no retinopathy (R0), background period (R1), pre-proliferation period (R2), and proliferation period (R3). In this case, the classification accuracy of the film reading system 200 can be further improved based on a retinopathy grading that has already been applied in practice. In some examples, the first classification result may further include no diabetic macular edema (M0) and macular edema (M1).
Examples of the present disclosure are not limited thereto, however, and in other examples, the first classification result may include at least a negative result and a positive result. In some examples, the first classification module 230 may filter out pre-processed fundus images that cannot be classified (e.g., pre-processed fundus images that are too poor in picture quality to be classified).
In some examples, the classification result type may indicate whether reclassification is required (i.e., reclassification required or no reclassification required). In some examples, the classification result type may be obtained based on the first classification result. In some examples, the classification result type of a preprocessed fundus image whose first classification result is the background period may be set to require reclassification, while the other preprocessed fundus images are set not to require reclassification. In some examples, whether reclassification is needed may be determined based on the confidence of the first classification result. For example, preprocessed fundus images whose first classification result has a confidence lower than a preset confidence (e.g., 40% or 50%) are set to require reclassification, while the others are set not to require reclassification.
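The two policies above (reclassifying background-period results and reclassifying low-confidence results) could be combined as in the following sketch; the label strings and the 0.5 default are hypothetical assumptions for illustration only.

```python
def classification_result_type(first_result, confidence, preset_confidence=0.5):
    """Return whether a preprocessed fundus image requires reclassification.

    Hypothetical rule: background-period (R1) results and results whose
    confidence falls below the preset confidence are reclassified.
    """
    if first_result == "R1" or confidence < preset_confidence:
        return "reclassification_required"
    return "no_reclassification_required"
```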
In some examples, the first classification module 230 may classify the fundus image using a machine learning algorithm to obtain the first classification result. In some examples, the machine learning algorithm may be at least one of a traditional machine learning algorithm and a deep learning algorithm. In this case, an appropriate machine learning algorithm can be selected according to actual needs. In some examples, the first classification model may be established based on the machine learning algorithm.
Fig. 4 is a schematic diagram illustrating a convolution kernel employed in a convolutional neural network of the first classification module 230 in accordance with examples of the present disclosure.
In some examples, the first classification model established based on the deep learning algorithm may be a convolutional neural network (CNN). In some examples, the convolutional neural network (CNN) may automatically identify features in the fundus image using a 3×3 convolution kernel (see fig. 4). Examples of the present disclosure are not limited thereto, and in other examples, the convolution kernel of the convolutional neural network (CNN) may be a 5×5 convolution kernel, a 2×2 convolution kernel, a 7×7 convolution kernel, or the like. In this case, since the convolutional neural network (CNN) is highly efficient at image feature recognition, the performance of the film reading system 200 can be effectively improved.
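For illustration, a minimal convolutional network using 3×3 kernels might look like the following sketch, assuming PyTorch; the layer widths, depth, and class count are assumptions and do not reflect the actual first classification model of the present disclosure.

```python
import torch.nn as nn

class FundusCNN(nn.Module):
    """Minimal CNN with 3x3 convolution kernels (illustrative only)."""

    def __init__(self, num_classes=4):            # e.g. R0, R1, R2, R3
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                          # x: (N, 3, H, W) fundus batch
        h = self.features(x).flatten(1)
        return self.classifier(h)                  # logits; softmax -> confidences
```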
Examples of the present disclosure are not limited thereto, however; in other examples, the machine learning algorithm of the first classification module 230 may be a traditional machine learning algorithm. In some examples, traditional machine learning algorithms may include, but are not limited to, linear regression, logistic regression, decision trees, support vector machines, Bayesian algorithms, or the like. In this case, fundus features may first be extracted from the fundus image using an image processing algorithm and then input into a first classification model established based on a traditional machine learning algorithm to classify the fundus image.
In some examples, the first classification model may include a plurality of sub-classification models, one for each type of diabetic retinopathy. Each sub-classification model may receive the preprocessed fundus image and obtain a sub-classification result. In some examples, the first classification module 230 may obtain the first classification result based on the plurality of sub-classification results. Thereby, the first classification result can be acquired based on the plurality of sub-classification models.
Specifically, different sub-classification models can be built and trained for no retinopathy, background period, pre-proliferation period, proliferation period, no diabetic macular edema, and macular edema, each yielding a sub-classification result (e.g., background period or not background period) together with a confidence that the image belongs to that type of diabetic retinopathy; the first classification result can then be obtained from the respective sub-classification results and confidences. For example, the sub-classification result with the highest confidence may be taken as the first classification result.
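A sketch of combining the sub-classification results, assuming each sub-model reports a (label, confidence) pair; the label strings are hypothetical:

```python
def combine_sub_results(sub_results):
    """Select the sub-classification result with the highest confidence.

    sub_results: list of (label, confidence) pairs, one per sub-model,
    e.g. [("R0", 0.91), ("R1", 0.72), ("M1", 0.05)].
    """
    return max(sub_results, key=lambda pair: pair[1])
```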
In some examples, grouping module 240 may divide the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified. In some examples, the preprocessed fundus image may be divided into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type. In this case, dividing the preprocessed fundus image into the three groups facilitates subsequent targeted processing of each image. Thus, the classification accuracy of the film reading system 200 can be further improved.
Specifically, the negative result image may include the preprocessed fundus image whose first classification result is no retinopathy and whose classification result type indicates that no reclassification is required. The positive result image may include the preprocessed fundus image whose first classification result is the pre-proliferation period or the proliferation period and whose classification result type indicates that no reclassification is required. The image to be reclassified may include the preprocessed fundus image whose classification result type indicates that reclassification is required. In some examples, the preprocessed fundus images that need to be reclassified may include those whose first classification result is the background period and those that cannot be classified.
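These grouping rules can be expressed as in the following sketch; the group names and label strings are hypothetical and reuse the labels assumed in the earlier sketches.

```python
def group_image(first_result, result_type):
    """Divide a preprocessed fundus image into one of the three groups."""
    if result_type == "reclassification_required":
        return "to_reclassify"        # background period or unclassifiable
    if first_result == "R0":
        return "negative"             # no retinopathy, no reclassification
    if first_result in ("R2", "R3"):
        return "positive"             # pre-proliferation or proliferation period
    return "to_reclassify"            # fallback for any remaining case
```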
In some examples, the first quality control module 250 may obtain the final classification result, the first image to be arbitrated, and the second image to be arbitrated based on the negative result image and the positive result image. As shown in fig. 2, in some examples, the first quality control module 250 may include a negative quality control module 251 and a positive quality control module 252.
In some examples, negative quality control module 251 may receive the negative result image and obtain a negative quality control result using a first quality control model whose first confidence threshold is configured based on a preset negative predictive rate. In general, the higher the negative predictive rate, the more sensitive the first quality control model is to negative results (i.e., no retinopathy), and the more uncertain negative result images will be classified as positive results (i.e., the presence of some type of diabetic retinopathy). Specifically, the preset negative predictive rate may be set according to requirements (e.g., customer requirements or default requirements) before the film reading system 200 is formally released to the production environment; the first confidence threshold is then continuously adjusted and tested based on the preset negative predictive rate to obtain the first confidence threshold corresponding to it. In some examples, the preset negative predictive rate may be 95% to 99%. For example, the preset negative predictive rate may be 95%, 96%, 97%, 98%, 99%, or the like.
In some examples, the first confidence threshold may be configured with gold standard data; thus, the first confidence threshold can be determined. In some examples, the first confidence threshold may be solved inversely from the preset negative predictive rate based on the gold standard data. In some examples, in the inverse solution, confidence thresholds within a preset range (e.g., 90% to 100%) may be traversed with a preset step size and the performance indicators solved for each, to obtain the correspondence between the confidence thresholds and the sets of performance indicators; the first confidence threshold may then be determined based on this correspondence and the preset negative predictive rate. In this case, the first confidence threshold can be easily and quickly determined using the correspondence between the confidence thresholds and the sets of performance indicators. In some examples, the performance indicators may include sensitivity, specificity, positive predictive rate, and negative predictive rate.
Specifically, on the gold standard data, the performance indicators corresponding to each confidence threshold are determined, such as sensitivity, specificity, positive predictive rate (the number of true positives / (the number of true positives + the number of false positives)), and negative predictive rate (the number of true negatives / (the number of true negatives + the number of false negatives)). In some examples, confidence thresholds within the preset range may be traversed with a preset step size and the related performance indicators solved; for example, the preset step size may be 0.01, 0.001, or 0.0001. In this case, a table recording the set of performance indicators corresponding to each confidence threshold may be created from this correspondence, and the confidence threshold whose performance indicators include the preset negative predictive rate is the first confidence threshold.
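The traversal just described might be implemented as in the following sketch, which sweeps candidate thresholds over gold standard data and returns the first one whose negative predictive rate reaches the preset rate. The range, step size, default rate, and variable names are assumptions; the inputs are assumed to be NumPy arrays.

```python
import numpy as np

def first_confidence_threshold(neg_confidence, is_truly_negative,
                               preset_npv=0.98, step=0.001):
    """Traverse thresholds in the preset range and return the first one
    whose negative predictive rate reaches the preset rate.

    neg_confidence: model confidence that each gold-standard image is negative.
    is_truly_negative: boolean gold-standard labels (True = no retinopathy).
    """
    for t in np.arange(0.90, 1.0, step):          # preset range, e.g. 90%-100%
        predicted_negative = neg_confidence >= t
        tn = np.sum(predicted_negative & is_truly_negative)    # true negatives
        fn = np.sum(predicted_negative & ~is_truly_negative)   # false negatives
        npv = tn / (tn + fn) if (tn + fn) > 0 else 1.0
        if npv >= preset_npv:
            return float(t), npv                  # first threshold meeting the rate
    return None, None                             # preset rate not reachable
```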
However, examples of the present disclosure are not limited thereto; in other examples, the first confidence threshold corresponding to the preset negative predictive rate may be obtained by continuously adjusting and testing the threshold based on the preset negative predictive rate. For example, a first initial confidence threshold may be set based on the gold standard data and the resulting negative predictive rate obtained; if the absolute difference between this negative predictive rate and the preset negative predictive rate is greater than a preset value (e.g., 1%, 2%, or 3%), the first initial confidence threshold is adjusted and the comparison is repeated; otherwise, the first initial confidence threshold is used as the first confidence threshold.
In some examples, the first quality control model may be the same as the first classification model. In other examples, the first quality control model may be a model retrained on negative result images. In some examples, the negative quality control result may include a portion of the first classification results; for example, the negative quality control result may include no retinopathy. In some examples, the confidence may be the probability that the negative result image belongs to a certain negative quality control result. In some examples, the first confidence threshold may include a positive clearance threshold, a negative clearance threshold, and a result threshold. In some examples, if the first quality control model includes a plurality of first sub-quality control models, one for each type of diabetic retinopathy, there may be multiple sets of first confidence thresholds; for example, n first sub-quality control models require n sets of first confidence thresholds. In some examples, the first quality control model may output the negative quality control result according to the result threshold. In some examples, a negative result image whose confidence falls between the negative clearance threshold and the positive clearance threshold may be sent to arbitration.
In some examples, if the negative quality control result of the negative result image is consistent with the first classification result, the negative quality control result is used as the final classification result of the negative result image; otherwise, the negative result image is used as the first image to be arbitrated. In this case, negative result images with lower risk can be distinguished by setting the preset negative predictive rate and comparing against the first classification result.
In some examples, the first quality control module 250 may include a positive quality control module 252. In some examples, the positive quality control module 252 may receive the positive result image and obtain a positive quality control result using a second quality control model whose second confidence threshold is configured based on a preset positive predictive rate. In general, the higher the positive predictive rate, the more sensitive the second quality control model is to positive results (i.e., the presence of some type of diabetic retinopathy), and the more uncertain positive result images will be classified as negative results (i.e., no retinopathy). Specifically, the preset positive predictive rate may be set according to requirements (e.g., customer requirements or default requirements) before the film reading system 200 is formally released to the production environment; the second confidence threshold is then continuously adjusted and tested based on the preset positive predictive rate to obtain the second confidence threshold corresponding to it. In some examples, the preset positive predictive rate may be 95% to 99%. For example, the preset positive predictive rate may be 95%, 96%, 97%, 98%, 99%, or the like.
In some examples, the second confidence threshold may be configured with gold standard data; thereby, the second confidence threshold can be determined. In some examples, the second confidence threshold may be solved inversely from the preset positive predictive rate based on the gold standard data. In some examples, the second confidence threshold may be determined based on the correspondence between the confidence thresholds and the sets of performance indicators and the preset positive predictive rate. In this case, the second confidence threshold can be easily and quickly determined using the correspondence between the confidence thresholds and the sets of performance indicators. For details, see the description of inversely solving the first confidence threshold.
Examples of the present disclosure are not limited thereto, however, in other examples, the second confidence threshold corresponding to the preset positive predictive rate may be obtained by continuously adjusting the second confidence threshold and testing based on the preset positive predictive rate.
In some examples, the second quality control model may be the same as the first classification model. In other examples, the second quality control model may be a model that is retrained for positive result images. In some examples, the positive quality control results may include a portion of the first classification results. For example, positive quality control results may include pre-proliferation and proliferation phases. In some examples, the confidence may be a probability that the positive result image belongs to a positive quality control result. In some examples, the second confidence threshold may include a positive clearance threshold, a negative clearance threshold, and a result threshold. For details, reference may be made to the description associated with the first confidence threshold.
In some examples, if the positive quality control result of the positive result image is consistent with the first classification result, the positive quality control result is used as the final classification result of the positive result image; otherwise, the positive result image is used as the second image to be arbitrated. In this case, positive result images with higher risk can be distinguished by setting the preset positive predictive rate and comparing against the first classification result.
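The consistency judgment is the same on both quality control paths and can be sketched as follows; the status labels are hypothetical and match the grouping sketch above.

```python
def consistency_check(group, qc_result, first_result):
    """Shared consistency judgment for the negative and positive QC paths."""
    if qc_result == first_result:
        return ("final_result", qc_result)        # QC result becomes final
    # Inconsistent: route to arbitration as the first or second image
    # to be arbitrated, depending on which quality control path it came from.
    tag = "first_to_arbitrate" if group == "negative" else "second_to_arbitrate"
    return ("to_arbitrate", tag)
```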
As described above, the film reading system 200 may include a second classification module 260 (see fig. 2). In some examples, the second classification module 260 may obtain a final classification result and a third image to be arbitrated based on the image to be reclassified.
In some examples, the second classification module 260 may receive the image to be reclassified and classify it using the second classification model to obtain a second classification result. The second classification model may be obtained based on deep learning and trained on images to be reclassified. In some examples, the second classification result may include a portion of the first classification results; for example, the second classification result may include no retinopathy, background period, pre-proliferation period, and proliferation period. In some examples, when training on images to be reclassified, relevant image features of the images to be reclassified may be extracted and the second classification model trained in conjunction with them. In some examples, the relevant features may include microaneurysms, hemorrhages, exudates, cotton-wool spots, neovascularization, or maculopathy. In some examples, the relevant characteristics may include health status, age, and medical history. In some examples, the second classification model may also be trained in conjunction with color features, texture features, and shape features of the image to be reclassified.
In some examples, if the second classification result of the image to be reclassified is consistent with the first classification result, the second classification result is taken as a final classification result of the image to be reclassified, otherwise, the image to be reclassified is taken as a third image to be arbitrated.
In some examples, the arbitration module 270 may be configured to arbitrate the image to be arbitrated to obtain an arbitration classification result. In some examples, the arbitration classification result may be the final classification result. In some examples, the image to be arbitrated may be a first image to be arbitrated, a second image to be arbitrated, or a third image to be arbitrated. In some examples, the arbitration classification result may be consistent with the first classification result. In some examples, the image to be arbitrated may be judged by an arbitrating physician to obtain an arbitration classification result.
Fig. 5 is a block diagram illustrating a film reading system 200 of fundus image classification according to an example of the present disclosure.
In some examples, as shown in fig. 5, the film reading system 200 further includes a self-checking module 280. In some examples, the self-checking module 280 may be configured to perform a sampling check on fundus images with negative quality control results to determine whether the first confidence threshold meets the requirement, and to perform a sampling check on fundus images with positive quality control results to determine whether the second confidence threshold meets the requirement. In some examples, random sampling may be used for the spot check. In some examples, the degree of spot checking may be increased (e.g., by raising the sampling rate) for a newly released film reading system 200. In this case, whether the first confidence threshold and the second confidence threshold are satisfactory can be further confirmed. Thus, the classification accuracy of the film reading system 200 can be improved.
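Random sampling for the spot check could be done as in the following sketch; the 5% sample rate is an assumption for illustration only.

```python
import random

def spot_check(qc_results, sample_rate=0.05, seed=None):
    """Randomly sample fundus images with quality control results for review.

    qc_results: list of (image_id, qc_result) pairs. A reviewer re-reads the
    sample; too many errors mean the confidence threshold fails the requirement.
    """
    rng = random.Random(seed)
    n = max(1, int(len(qc_results) * sample_rate))
    return rng.sample(qc_results, n)
```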
In some examples, as shown in fig. 5, the film reading system 200 may also include an output module 290. In some examples, the output module 290 may be used to output a result report. In some examples, the output module 290 may output the result report of the fundus image based on at least one of the first classification result, the negative quality control result, the positive quality control result, the second classification result, the arbitration classification result, and the final classification result. In some examples, the result report may include a confidence for each result.
Hereinafter, a film reading method of fundus image classification of the present disclosure is described in detail with reference to fig. 6. The film reading method of fundus image classification to which the present disclosure relates may sometimes be simply referred to as a film reading method. The film reading method is applied to the film reading system 200 described above. Fig. 6 is a flowchart showing a film reading method of fundus image classification according to an example of the present disclosure.
In some examples, as shown in fig. 6, the film reading method may include acquiring a fundus image (step S110); preprocessing the fundus image to obtain a preprocessed fundus image (step S120); classifying the preprocessed fundus image and obtaining a first classification result and a classification result type (step S130); dividing the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified (step S140); obtaining a final classification result, a first image to be arbitrated, and a second image to be arbitrated based on the negative result image and the positive result image (step S150); obtaining a final classification result and a third image to be arbitrated based on the image to be reclassified (step S160); and arbitrating the image to be arbitrated to obtain an arbitration classification result as the final classification result (step S170). In this case, the fundus image is divided into a negative result image, a positive result image, and an image to be reclassified; for the negative result image and the positive result image, a lower-risk negative result image is identified based on the preset negative predictive rate, a higher-risk positive result image is identified based on the preset positive predictive rate, and a consistency judgment is performed; the image to be reclassified is further classified using the second classification model; and finally, arbitration is performed on the image to be arbitrated. Thus, the classification accuracy can be improved.
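To show how steps S110 to S170 fit together, the following sketch wires up the earlier illustrative functions. Here first_classify, negative_qc, positive_qc, second_classify, and arbitrate are hypothetical stand-ins for the first classification model, the two quality control models, the second classification model, and the arbitrating physician; none of them is defined by the present disclosure.

```python
def read_fundus_image(path):
    """End-to-end sketch of the film reading method (steps S110-S170).

    first_classify, negative_qc, positive_qc, second_classify, and arbitrate
    are hypothetical stand-ins, not functions defined by the present disclosure.
    """
    pre = preprocess_fundus_image(path)                        # S120
    first_result, confidence = first_classify(pre)             # S130
    rtype = classification_result_type(first_result, confidence)
    group = group_image(first_result, rtype)                   # S140
    if group in ("negative", "positive"):                      # S150
        qc = negative_qc(pre) if group == "negative" else positive_qc(pre)
        status, result = consistency_check(group, qc, first_result)
    else:                                                      # S160
        second = second_classify(pre)
        status, result = (("final_result", second) if second == first_result
                          else ("to_arbitrate", "third_to_arbitrate"))
    if status == "to_arbitrate":                               # S170
        result = arbitrate(pre)                                # arbitrating physician
    return result                                              # final classification
```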
In some examples, in step S110, a fundus image may be acquired. The fundus image may be a color fundus image. The color fundus image can clearly present rich fundus information such as the optic disc, optic cup, macula, and blood vessels. For a detailed description, reference may be made to the description of the acquisition module 210 in the film reading system 200.
In some examples, in step S120, the fundus image may be preprocessed to obtain a preprocessed fundus image. In some examples, the fundus image may be cropped, normalized, noise-reduced, grayed, and the like. For a detailed description, reference may be made to the description of the preprocessing module 220 in the film reading system 200.
In some examples, in step S130, the preprocessed fundus image may be classified using a first classification model based on deep learning to obtain a first classification result. In some examples, a classification result type may be obtained based on the first classification result. In some examples, the classification result type indicates whether reclassification is required. In some examples, the first classification result may be output according to the retinopathy grading used in the UK national retinopathy screening program. In some examples, the first classification result may include at least no retinopathy (R0), background period (R1), pre-proliferation period (R2), and proliferation period (R3). In this case, the classification accuracy can be further improved based on a retinopathy grading that has already been applied in practice. In some examples, preprocessed fundus images that cannot be classified may be identified (e.g., preprocessed fundus images whose picture quality is too poor for classification). In some examples, the first classification model may include a plurality of sub-classification models, one for each type of diabetic retinopathy, each of which may receive the preprocessed fundus image and obtain a sub-classification result; the first classification result may then be obtained based on the plurality of sub-classification results. Thereby, the first classification result can be acquired based on the plurality of sub-classification models. For a detailed description, reference may be made to the description of the first classification module 230 in the film reading system 200.
In some examples, in step S140, the preprocessed fundus image may be divided into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type. In this case, dividing the preprocessed fundus image into the three groups facilitates subsequent targeted processing of each image. Thus, the classification accuracy can be further improved. Specifically, the negative result image may include the preprocessed fundus image whose first classification result is no retinopathy and whose classification result type indicates that no reclassification is required. The positive result image may include the preprocessed fundus image whose first classification result is the pre-proliferation period or the proliferation period and whose classification result type indicates that no reclassification is required. The image to be reclassified may include the preprocessed fundus image whose classification result type indicates that reclassification is required. In some examples, the preprocessed fundus images that need to be reclassified may include those whose first classification result is the background period and those that cannot be classified. For a detailed description, reference may be made to the description of the grouping module 240 in the film reading system 200.
In some examples, in step S150, a negative quality control result of the negative result image may be obtained using the first quality control model. In some examples, the first quality control model may be configured with a first confidence threshold based on a preset negative predictive rate. In some examples, if the negative quality control result is consistent with the first classification result, the negative quality control result is taken as a final classification result; otherwise, the negative result image is taken as a first image to be arbitrated. In some examples, a positive quality control result of the positive result image may be obtained using the second quality control model. The second quality control model may be configured with a second confidence threshold based on a preset positive predictive rate. In some examples, if the positive quality control result is consistent with the first classification result, the positive quality control result is taken as the final classification result; otherwise, the positive result image is taken as a second image to be arbitrated. In some examples, the first confidence threshold may be configured with gold standard data (i.e., the first confidence threshold is continually adjusted and finally determined using gold standard data); thus, the first confidence threshold can be determined. In some examples, the second confidence threshold may be configured with gold standard data in the same way; thereby, the second confidence threshold can be determined. For a detailed description, reference may be made to the description of the first quality control module 250 in the film reading system 200.
In some examples, in step S160, the image to be reclassified may be classified using a second classification model based on deep learning to obtain a second classification result. In some examples, the second classification model may be trained specifically for images to be reclassified. In some examples, if the second classification result is consistent with the first classification result, the second classification result is taken as the final classification result; otherwise, the image to be reclassified is taken as a third image to be arbitrated. For a detailed description, reference may be made to the description of the second classification module 260 in the film reading system 200.
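The consistency check of step S160 can be summarized in a few lines. This sketch assumes the second classification model is exposed as a callable returning an R-grade string, which is not stated in the text.

    # Illustrative consistency check between first and second results.
    from typing import Callable, Optional, Tuple
    import numpy as np

    def reclassify(image: np.ndarray, first_result: str,
                   second_model: Callable[[np.ndarray], str]
                   ) -> Tuple[Optional[str], bool]:
        """Returns (final_result, needs_arbitration)."""
        second_result = second_model(image)
        if second_result == first_result:
            return second_result, False  # consistent: accept as final result
        return None, True                # inconsistent: third image to arbitrate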
In some examples, in step S170, the image to be arbitrated may be arbitrated to obtain an arbitration classification result, which is taken as the final classification result. In some examples, the image to be arbitrated may be the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated. For a detailed description, reference may be made to the description of the arbitration module 270 in the film reading system 200.
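A hedged sketch of the arbitration step follows. In practice the arbitration classification result would typically come from a senior human reader; the queue structure and function names here are assumptions for illustration.

    # Illustrative arbitration queue collecting images from all three sources.
    from collections import deque
    from typing import Callable, Dict

    arbitration_queue: deque = deque()

    def submit_for_arbitration(image_id: str, source: str) -> None:
        # source is one of "negative-qc", "positive-qc", "reclassified"
        arbitration_queue.append((image_id, source))

    def arbitrate(read_by_expert: Callable[[str], str]) -> Dict[str, str]:
        """Drain the queue; read_by_expert maps image_id -> final grade."""
        results = {}
        while arbitration_queue:
            image_id, _source = arbitration_queue.popleft()
            results[image_id] = read_by_expert(image_id)
        return results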
In some examples, the film reading method further includes a self-checking step (not shown). In some examples, in the self-checking step, fundus images with a negative quality control result may be spot-checked to determine whether the first confidence threshold meets the requirement. In some examples, fundus images with a positive quality control result may be spot-checked to determine whether the second confidence threshold meets the requirement. In this case, it can be further confirmed whether the first confidence threshold and the second confidence threshold meet the requirements. Thus, the classification accuracy of the film reading system 200 can be improved. For a detailed description, reference may be made to the description of the self-test module 280 in the film reading system 200.
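One plausible form of the spot check is sketched below: sample a fixed number of quality-controlled images, compare them against gold-standard labels, and flag the threshold if the agreement rate falls below the preset rate. The sample size, seed, and target rate are illustrative assumptions.

    # Hypothetical self-checking step for one branch (e.g. negative QC).
    import random

    def spot_check(qc_image_ids: list, gold_labels: dict,
                   expected_label: int, target_rate: float = 0.97,
                   sample_size: int = 50, seed: int = 0) -> bool:
        """True if the sampled agreement rate meets the preset target."""
        rng = random.Random(seed)
        sample = rng.sample(qc_image_ids, min(sample_size, len(qc_image_ids)))
        if not sample:
            return True  # nothing to check
        agree = sum(gold_labels[i] == expected_label for i in sample)
        return agree / len(sample) >= target_rate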
In some examples, the film reading method further includes an outputting step. In some examples, the outputting step may be used to output a result report. For a detailed description, reference may be made to the description of the output module 290 in the film reading system 200.
While the invention has been described in detail in connection with the drawings and embodiments, it should be understood that the foregoing description is not intended to limit the invention in any way. Those skilled in the art may make modifications and variations of the invention as needed without departing from its true spirit and scope, and such modifications and variations fall within the scope of the invention.

Claims (10)

1. A film reading system for classifying fundus images, characterized in that it comprises:
an acquisition module for acquiring a fundus image; a preprocessing module for preprocessing the fundus image to obtain a pre-processed fundus image; a first classification module that receives the pre-processed fundus image, classifies the pre-processed fundus image using a first classification model based on deep learning to obtain a first classification result, and obtains, based on the first classification result, a classification result type including whether reclassification is required; a grouping module that divides the pre-processed fundus image into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type; a first quality control module comprising a negative quality control module and a positive quality control module, wherein the negative quality control module receives the negative result image and obtains a negative quality control result of the negative result image using a first quality control model configured with a first confidence threshold on the basis of a preset negative predictive rate, and if the negative quality control result is consistent with the first classification result, the negative quality control result is taken as a classification result, otherwise the negative result image is taken as a first image to be arbitrated; and the positive quality control module receives the positive result image and obtains a positive quality control result of the positive result image using a second quality control model configured with a second confidence threshold on the basis of a preset positive predictive rate, and if the positive quality control result is consistent with the first classification result, the positive quality control result is taken as the classification result, otherwise the positive result image is taken as a second image to be arbitrated; a second classification module that receives the image to be reclassified and classifies the image to be reclassified using a second classification model which is based on deep learning and trained for images to be reclassified, to obtain a second classification result, wherein if the second classification result is consistent with the first classification result, the second classification result is taken as the classification result, otherwise the image to be reclassified is taken as a third image to be arbitrated; and an arbitration module that receives the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated as an image to be arbitrated, and arbitrates the image to be arbitrated to obtain an arbitration classification result which is taken as the classification result.
2. The film reading system as in claim 1, wherein:
The first classification model includes a plurality of sub-classification models for each type of diabetic retinopathy for receiving the pre-processed fundus image and obtaining sub-classification results, the first classification module obtaining the first classification result based on a plurality of the sub-classification results.
3. The film reading system as in claim 1, wherein:
The preset negative predictive rate is 95% to 99%, and the preset positive predictive rate is 95% to 99%.
4. The film reading system as in claim 1, wherein:
The first classification module outputs the first classification result according to the retinopathy grading used in the national retinopathy screening programme of the United Kingdom.
5. The film reading system as defined in claim 4, wherein:
The first classification result includes no retinopathy, a background period, a pre-proliferation period, and a proliferation period;
the negative result image comprises the pre-processed fundus image whose first classification result is no retinopathy and whose classification result type is that no reclassification is required;
the positive result image comprises the pre-processed fundus image whose first classification result is the pre-proliferation period or the proliferation period and whose classification result type is that no reclassification is required;
the image to be reclassified comprises the pre-processed fundus image whose classification result type is that reclassification is required.
6. The film reading system as in claim 1, wherein:
The film reading system further comprises a self-checking module, wherein the self-checking module is configured to spot-check the fundus image of the negative quality control result to judge whether the first confidence threshold meets the requirement, and to spot-check the fundus image of the positive quality control result to judge whether the second confidence threshold meets the requirement.
7. The film reading system as in claim 1, wherein:
The first confidence threshold is configured using gold standard data based on the preset negative predictive rate; and the second confidence threshold is configured using gold standard data based on the preset positive predictive rate.
8. The film reading system of any of claims 1 through 7, wherein:
The film reading system further comprises an output module, wherein the output module is used for outputting a result report.
9. A film reading method for classifying fundus images, characterized in that it comprises:
acquiring a fundus image; preprocessing the fundus image to obtain a pre-processed fundus image; classifying the pre-processed fundus image using a first classification model based on deep learning to obtain a first classification result, and obtaining, based on the first classification result, a classification result type including whether reclassification is required; dividing the pre-processed fundus image into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type; obtaining a negative quality control result of the negative result image using a first quality control model configured with a first confidence threshold on the basis of a preset negative predictive rate, and if the negative quality control result is consistent with the first classification result, taking the negative quality control result as a classification result, otherwise taking the negative result image as a first image to be arbitrated; obtaining a positive quality control result of the positive result image using a second quality control model configured with a second confidence threshold on the basis of a preset positive predictive rate, and if the positive quality control result is consistent with the first classification result, taking the positive quality control result as the classification result, otherwise taking the positive result image as a second image to be arbitrated; classifying the image to be reclassified using a second classification model which is based on deep learning and trained for images to be reclassified, to obtain a second classification result, wherein if the second classification result is consistent with the first classification result, the second classification result is taken as the classification result, otherwise the image to be reclassified is taken as a third image to be arbitrated; and taking the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated as an image to be arbitrated, and arbitrating the image to be arbitrated to obtain an arbitration classification result as the classification result.
10. The film reading method as claimed in claim 9, wherein:
The fundus image of the negative quality control result is spot-checked to judge whether the first confidence threshold meets the requirement, and the fundus image of the positive quality control result is spot-checked to judge whether the second confidence threshold meets the requirement.
CN202110316350.9A 2021-03-24 2021-03-24 Fundus image classification reading system and reading method Active CN115131592B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202411917370.1A CN119723209A (en) 2021-03-24 2021-03-24 Quality control module for fundus images and film reading system including the quality control module
CN202110316350.9A CN115131592B (en) 2021-03-24 2021-03-24 Fundus image classification reading system and reading method
CN202411917369.9A CN119723208A (en) 2021-03-24 2021-03-24 Fundus image classification film reading method, fundus image classification film reading system and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110316350.9A CN115131592B (en) 2021-03-24 2021-03-24 Fundus image classification reading system and reading method

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202411917370.1A Division CN119723209A (en) 2021-03-24 2021-03-24 Quality control module for fundus images and film reading system including the quality control module
CN202411917369.9A Division CN119723208A (en) 2021-03-24 2021-03-24 Fundus image classification film reading method, fundus image classification film reading system and server

Publications (2)

Publication Number Publication Date
CN115131592A (en) 2022-09-30
CN115131592B (en) 2024-11-26

Family

ID=83374027

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110316350.9A Active CN115131592B (en) 2021-03-24 2021-03-24 Fundus image classification reading system and reading method
CN202411917370.1A Pending CN119723209A (en) 2021-03-24 2021-03-24 Quality control module for fundus images and film reading system including the quality control module
CN202411917369.9A Pending CN119723208A (en) 2021-03-24 2021-03-24 Fundus image classification film reading method, fundus image classification film reading system and server

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202411917370.1A Pending CN119723209A (en) 2021-03-24 2021-03-24 Quality control module for fundus images and film reading system including the quality control module
CN202411917369.9A Pending CN119723208A (en) 2021-03-24 2021-03-24 Fundus image classification film reading method, fundus image classification film reading system and server

Country Status (1)

Country Link
CN (3) CN115131592B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330449A (en) * 2017-06-13 2017-11-07 瑞达昇科技(大连)有限公司 Method and device for detecting signs of diabetic retinopathy
CN108334909A (en) * 2018-03-09 2018-07-27 南京天数信息科技有限公司 Cervical carcinoma TCT digital slices data analysing methods based on ResNet

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080577B (en) * 2019-11-27 2023-05-26 北京至真互联网技术有限公司 Fundus image quality evaluation method, fundus image quality evaluation system, fundus image quality evaluation apparatus, and fundus image storage medium
CN111814893A (en) * 2020-07-17 2020-10-23 首都医科大学附属北京胸科医院 EGFR mutation prediction method and system based on deep learning in lung full scan images
CN112269878B (en) * 2020-11-02 2024-03-26 成都纬创立科技有限公司 Interpretable legal decision prediction method, interpretable legal decision prediction device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN119723209A (en) 2025-03-28
CN115131592A (en) 2022-09-30
CN119723208A (en) 2025-03-28

Similar Documents

Publication Publication Date Title
US12114929B2 (en) Retinopathy recognition system
Bilal et al. Diabetic retinopathy detection and classification using mixed models for a disease grading database
US10413180B1 (en) System and methods for automatic processing of digital retinal images in conjunction with an imaging device
Hassan et al. Joint segmentation and quantification of chorioretinal biomarkers in optical coherence tomography scans: A deep learning approach
Niemeijer et al. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis
Xiao et al. Major automatic diabetic retinopathy screening systems and related core algorithms: a review
CN112232448B (en) Image classification method and device, electronic equipment and storage medium
CN113158821B (en) Method and device for processing eye detection data based on multiple modes and terminal equipment
CN113177916A (en) Slight hypertension fundus identification model based on few-sample learning method
WO2017020045A1 (en) System and methods for malarial retinopathy screening
CN110619332A (en) Data processing method, device and equipment based on visual field inspection report
CN111242920A (en) Biological tissue image detection method, device, equipment and medium
JPWO2019073962A1 (en) Image processing apparatus and program
Reethika et al. Diabetic retinopathy detection using statistical features
CN115206494A (en) Film reading system and method based on fundus image classification
CN115131592B (en) Fundus image classification reading system and reading method
Hussein et al. Automatic classification of AMD in retinal images
CN114557670A (en) Physiological age prediction method, apparatus, device and medium
CN115131267B (en) Fundus abnormality interpretation system and interpretation method based on fundus image
CN115132326B (en) Evaluation system and evaluation method for diabetic retinopathy based on fundus image
Hakeem et al. Inception V3 and CNN Approach to Classify Diabetic Retinopathy Disease
CN115120179A (en) Interpretation system and interpretation method for detecting fundus abnormality based on fundus image
Hussein et al. Convolutional Neural Network (CNN) for diagnosing age-related macular degeneration (AMD) in retinal images
CN115206477A (en) Fundus image-based film reading system and fundus image-based film reading method
Rahman et al. A Low-Cost Diabetic Retinopathy Screening Tool Using a Smartphone and Machine Learning Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant