CN112634221B - Cornea hierarchy identification and lesion positioning method and system based on images and depth - Google Patents
- Publication number
- CN112634221B CN112634221B CN202011498043.9A CN202011498043A CN112634221B CN 112634221 B CN112634221 B CN 112634221B CN 202011498043 A CN202011498043 A CN 202011498043A CN 112634221 B CN112634221 B CN 112634221B
- Authority
- CN
- China
- Prior art keywords
- cornea
- image
- hierarchy
- depth
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Abstract
The invention relates to the field of medical artificial intelligence image recognition, and in particular discloses a cornea hierarchy recognition and lesion positioning method and system based on images and depth.
Description
Technical Field
The invention relates to the field of medical artificial intelligence image recognition, and in particular to a cornea hierarchy recognition and lesion positioning method and system based on images and depth.
Background
Corneal disease can seriously threaten vision and is the second leading cause of blindness and low vision in China. Confocal microscopy can scan the living cornea, detect its ultrastructure, display cell-level morphological changes in normal and pathological states, and provide important information for the diagnosis of corneal diseases. By identifying lesions and analyzing the anatomical hierarchy they involve, the severity of corneal disease can be assessed and an appropriate treatment regimen selected. According to its anatomy and its appearance in confocal microscope images, the cornea can be divided into five layers: the epithelial cell layer, anterior elastic layer, corneal stroma layer, posterior elastic layer, and endothelial cell layer, with the nerve fibers distributed beneath the epithelium and in the anterior elastic layer. However, manual interpretation of corneal confocal images requires specially trained readers, the reading work is time-consuming and laborious, and clinically there are not enough ophthalmologists to do it; moreover, manual reading depends on the personal experience of the doctor and is subject to subjective influence. The application of artificial intelligence can greatly improve the efficiency and accuracy of confocal microscope image interpretation and thereby the clinical value of this examination.
At present, automatic analysis of corneal confocal images with artificial intelligence cannot accurately identify the several layers of the cornea; in particular, when pathological changes or abnormalities are present, the identification accuracy is insufficient and the extent of the lesion cannot be located.
Disclosure of Invention
To solve the problems that existing cornea hierarchy identification is insufficiently accurate and that lesions cannot currently be located intelligently, the invention provides a cornea hierarchy identification and lesion positioning method and system based on images and depth.
The invention provides the following technical scheme to solve the technical problems: the cornea hierarchy comprises an epithelial cell layer, an epithelial nerve fiber plexus, an anterior elastic layer, a corneal stroma layer, a posterior elastic layer and an endothelial cell layer; the method comprises the following steps. Step S1: acquiring patient information and a plurality of corresponding first cornea images; step S2: performing definition detection on the first cornea images, and selecting a plurality of second cornea images whose definition meets requirements; step S3: judging whether the cornea hierarchy of the current second cornea image is identifiable based on the image features; if so, entering step S4, and if not, entering step S6; step S4: identifying the cornea hierarchy of the current second cornea image; step S5: identifying whether the current second cornea image has a lesion; step S6: obtaining the depth value of the current second cornea image, and judging the cornea hierarchy of the current second cornea image; step S7: identifying whether an abnormality exists in the current second cornea image.
Preferably, the step S6 specifically includes the following steps: step S61: using a template matching algorithm to acquire a corresponding depth value in a specific area of the current second cornea image; step S62: the cornea hierarchy of the current second cornea image is predicted based on the depth values.
Preferably, the step S6 further includes the steps of: step S63: sorting the second cornea image and the current second cornea image of which the cornea layers have been identified in step S4 based on the corresponding depth values; step S64: calculating the confidence coefficient of the cornea hierarchy corresponding to the current second cornea image based on the depth values of the ordered second cornea images; step S65: and judging the cornea hierarchy of the current second cornea image based on the confidence.
Preferably, the corneal layer comprises a front elastic layer and a rear elastic layer; the step S7 specifically includes the following steps: step S71: judging whether the current second cornea image is positioned on the front elastic layer or the rear elastic layer, if so, entering a step S72, and if not, entering a step S73; step S72: identifying an abnormal state of an adjacent corneal layer of the current second corneal image to determine the abnormal state of the current second corneal image; step S73: and continuously scanning, judging the number of images with abnormality, judging the pathological changes if the number of images exceeds 3, and prompting the abnormality if the number of images is less than 3.
Preferably, the method further comprises: step S8: and carrying out three-dimensional imaging display based on the identified multiple cornea layers and lesions, and displaying the abnormalities and lesions in the corresponding areas.
Preferably, the step S8 specifically includes the following steps: step S81: mapping a plurality of second cornea images into a cornea sagittal-section schematic diagram, and scrolling to display the three-dimensional depth and hierarchical positioning of the second cornea images in real time; step S82: constructing a three-dimensional heat map according to the depth coordinates of the abnormalities in the cornea hierarchy, and colorizing it to display the distribution probability of lesions at each hierarchy.
Preferably, the method further comprises: step S9: based on the identified multiple cornea layers, a tree-type tissue structure diagram of the cornea layers is derived, and the original pictures are derived in batches according to the categories.
The invention also provides a cornea hierarchy recognition and lesion positioning system based on the image and the depth, which comprises: the information and image acquisition unit is used for acquiring patient information and a plurality of corresponding first cornea images; the image definition detection unit is used for performing definition detection on the first cornea image and selecting a plurality of second cornea images with definition meeting the requirement; the hierarchy characteristic preliminary screening unit is used for judging whether the cornea hierarchy of the current second cornea image is identifiable or not based on the image characteristics; the hierarchy characteristic identifying unit is used for identifying the cornea hierarchy of the second cornea image which is judged to be identifiable in the hierarchy characteristic preliminary screening unit; the lesion recognition unit is used for recognizing whether the second cornea image in the hierarchical feature recognition unit has lesions or not; the depth identification unit is used for acquiring the depth value of the second cornea image which is judged to be unidentifiable in the hierarchy characteristic preliminary screening unit and judging the cornea hierarchy of the current second cornea image; an abnormality determination unit for identifying whether the second cornea image in the depth identification unit has a lesion; the visual reconstruction unit is used for carrying out three-dimensional imaging display based on the identified cornea layers and lesions and displaying the abnormalities and the lesions in the corresponding areas; and the preview and export unit is used for exporting a tree-type tissue structure diagram of the cornea layers based on the identified cornea layers and exporting the original pictures in batches according to the categories.
Preferably, the depth recognition unit further includes: the depth information extraction unit is used for acquiring a corresponding depth value in a specific area of the current second cornea image by using a template matching algorithm; a hierarchy predicting unit for predicting a cornea hierarchy of the current second cornea image based on the depth value; or the depth recognition unit may further include: an image sorting unit for sorting the second cornea image and the current second cornea image of which the cornea hierarchy has been identified in step S4 based on the corresponding depth values; the confidence calculating unit is used for calculating the confidence of the cornea hierarchy corresponding to the current second cornea image based on the depth values of the ordered second cornea images; and a level correction unit for discriminating a cornea level of the current second cornea image based on the confidence level.
Compared with the prior art, the cornea hierarchy identification and lesion positioning method and system based on the image and the depth have the following advantages:
1. After the cornea hierarchy images are acquired, image recognition is performed with a deep learning algorithm and depth-value analysis is performed with a machine learning algorithm, so that the anatomical hierarchy is detected automatically during in-vivo corneal scanning and the hierarchy can be identified accurately even in lesioned or abnormal regions, followed by visual reconstruction. Fully automatic hierarchy labelling is thus achieved without manual intervention, reducing labour cost, while the analysis method that integrates several machine learning algorithms identifies the cornea hierarchy with high accuracy and a stable effect.
2. A template matching algorithm obtains the corresponding depth value from a specific region of the image, which avoids the useless computation of acquiring depth values over the whole image and improves the efficiency of cornea hierarchy identification.
3. The cornea hierarchy is first predicted from the depth value and the prediction is then refined by confidence correction, which makes the hierarchy localization more accurate. Through this confidence correction over the ordered second cornea images, the anterior and posterior elastic layers, which ordinary image recognition cannot identify, are located accurately, improving the identification accuracy of the cornea hierarchy.
4. When an abnormality or lesion is to be identified in an image located in an elastic layer, the judgement is made comprehensively from the adjacent cornea layers, which improves the lesion identification accuracy for the elastic layers and avoids the large computation and training cost of identifying them directly. For the other layers, the abnormality or lesion of the current cornea hierarchy is judged comprehensively from several continuously scanned second cornea images, which improves the accuracy of lesion judgement, prevents discrete non-lesion abnormalities from being misjudged as lesions, and further improves the accuracy of cornea hierarchy identification.
5. The identified second cornea images are ordered and displayed as a three-dimensional image, with abnormalities and lesions shown in the corresponding regions, so that a corresponding diagnostic image covering the whole depth range of the lesion is output. This visual reconstruction makes it convenient for the user to check the output result, realizes automated and intelligent input, computation, identification and output, and facilitates diagnosis and other work based on the result.
6. Different lesion degrees or conditions are displayed by three-dimensional colorization, so that the lesion extent is shown intuitively and is easy for the user to review.
7. After the identified second cornea images are ordered, the hierarchy and sequence relationships of each class are displayed, the pictures in each class can be previewed by class, and the original pictures can be exported in batches by class. This tree-structured classification preview and batch export helps the technique to be applied to more scenarios such as medical teaching.
Drawings
Fig. 1 is a flowchart showing a method for identifying cornea layers and locating lesions based on images and depths according to a first embodiment of the present invention.
Fig. 2 is a detailed flowchart of step S6 in the cornea hierarchy identification and lesion localization method based on image and depth according to the first embodiment of the present invention.
Fig. 3 is a flowchart showing still another detail of step S6 in the method for identifying cornea hierarchy and locating lesions based on image and depth according to the first embodiment of the present invention.
Fig. 4 is a detailed flowchart of step S7 in the cornea hierarchy identification and lesion localization method based on image and depth according to the first embodiment of the present invention.
Fig. 5 is a flowchart of step S8 and step S9 in the cornea hierarchy identification and lesion localization method based on image and depth according to the first embodiment of the present invention.
Fig. 6 is a detailed flowchart of step S8 in the cornea hierarchy identification and lesion localization method based on image and depth according to the first embodiment of the present invention.
Fig. 7 is a block diagram of a cornea hierarchy identification and lesion localization system based on images and depth according to a second embodiment of the present invention.
Fig. 8 is a block diagram of a depth recognition unit in a cornea hierarchy recognition and lesion localization system based on images and depth according to a second embodiment of the present invention.
Fig. 9 is a block diagram of an apparatus according to a third embodiment of the present invention.
Reference numerals illustrate:
1-information and image acquisition unit, 2-image definition detection unit, 3-hierarchy feature preliminary screening unit, 4-hierarchy feature recognition unit, 5-lesion recognition unit, 6-depth recognition unit, 7-abnormality determination unit, 8-visual reconstruction unit, 9-preview and export unit,
61-depth information extraction unit, 62-hierarchy prediction unit, 63-image sorting unit, 64-confidence calculation unit, 65-hierarchy correction unit,
10-memory, 20-processor.
Detailed Description
For the purpose of making the technical solution and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and examples of implementation. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, a first embodiment of the present invention provides a cornea hierarchy identification and lesion localization method based on image and depth, which includes the following steps:
Step S1: acquiring patient information and a plurality of corresponding first corner images;
Step S2: performing definition detection on the first cornea image, and selecting a plurality of second cornea images with definition meeting requirements;
Step S3: judging whether the cornea hierarchy of the current second cornea image is identifiable or not based on the image characteristics, if so, entering a step S4, and if not, entering a step S6;
Step S4: identifying a cornea level of the current second cornea image;
Step S5: identifying whether the current second cornea image has a lesion;
step S6: obtaining a depth value of a current second cornea image, and judging a cornea layer of the current second cornea image; and
Step S7: it is identified whether an abnormality exists in the current second cornea image.
It will be appreciated that in step S1, the basic patient information and the plurality of first cornea images are acquired as the input of the system. The patient information comprises contents retrieved from a database associated with the examination instrument and stored at the input end of the system, or entered at the front-end interface of the system. The recorded contents include basic patient information (name, ID, age, sex), course information (current course, past corneal history), examination signs, treatment conditions, and examination information (examination number, examination date, examined eye). The first cornea images are corneal confocal microscope images, comprising multiple layer-by-layer corneal scan images together with the scan depth corresponding to each image.
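For illustration, the input of step S1 could be held in structures such as the following sketch (Python dataclasses); the field names are assumptions drawn from the enumeration above, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CorneaScan:
    """One confocal microscope image plus its scan depth (a 'first cornea image')."""
    image_path: str
    scan_depth_um: float            # absolute scan depth reported by the instrument

@dataclass
class PatientRecord:
    """Basic, course and examination information attached to one examination."""
    name: str
    patient_id: str
    age: int
    sex: str
    current_course: str = ""
    cornea_history: str = ""
    exam_number: str = ""
    exam_date: str = ""
    exam_eye: str = ""              # examined eye category, e.g. left / right
    scans: List[CorneaScan] = field(default_factory=list)
```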
It is understood that the cornea can be divided into five layers in sequence: the epithelial cell layer, anterior elastic layer, corneal stroma layer, posterior elastic layer, and endothelial cell layer. The nerve fibers are distributed in the epithelial cell layer and the anterior elastic layer, and both the anterior and posterior elastic layers are acellular structures.
It will be appreciated that in step S2, quality judgment and preliminary screening are performed on the input first cornea images, and blurred images are excluded from the subsequent analysis. Automatic screening can be performed with an image blur detection method, including but not limited to a support vector machine (SVM) algorithm, applied to the plurality of first cornea images. Of course, screening can also be performed manually to obtain a plurality of second cornea images meeting the definition requirements.
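As one hedged illustration of step S2, the sketch below screens images with an SVM trained on a single sharpness feature (variance of the Laplacian, computed with OpenCV); the choice of feature is an assumption, since the text only names the SVM family of algorithms.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def sharpness_feature(path: str) -> np.ndarray:
    """Variance of the Laplacian as a one-dimensional sharpness descriptor."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return np.array([cv2.Laplacian(gray, cv2.CV_64F).var()])

def train_blur_classifier(paths, labels):
    """labels: 1 = sharp enough (kept as a 'second cornea image'), 0 = blurred."""
    X = np.vstack([sharpness_feature(p) for p in paths])
    return SVC(kernel="rbf").fit(X, labels)

def screen_images(clf, paths):
    """Return the paths whose predicted label is 'sharp enough'."""
    return [p for p in paths
            if clf.predict(sharpness_feature(p).reshape(1, -1))[0] == 1]
```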
It will be understood that in step S3, a preliminary identification of the plurality of second cornea images is made by image recognition, the cornea hierarchy being recognized with a pre-trained neural network. Some layers remain unrecognizable at this stage: image regions where abnormality or pathology occurs (pathological forms in the diseased cornea, such as necrosis, severe edema and other blurred structural disorder, cannot be recognized accurately), and images located in the anterior or posterior elastic layer.
It can be understood that, because the anterior and posterior elastic layers are acellular structures, they cannot be identified by a simple image recognition network; that is, the image recognition of step S3 can identify neither these layers nor whether they are abnormal or diseased.
It will be appreciated that in step S4, based on the cornea level at which no abnormality or lesion has occurred, the cornea level of that layer may be directly identified, for example, to characterize a normal epithelial cell layer, a corneal stroma layer, or an endothelial cell layer. In a diseased or abnormal cornea layer, a pathological form such as necrosis, severe edema, and other structural disorder blur usually appears as an irregular image, and cannot be distinguished by image recognition, so that the cornea layer corresponding to the current second cornea image cannot be accurately located.
It will be appreciated that in step S5, it is determined for each picture whether a lesion is present. Specifically, each picture is classified as normal or abnormal. The abnormal group includes, but is not limited to, disordered hierarchy features (including necrosis, severe edema, etc.), fungal hyphae, amoebic cysts, corneal neovascularization, inflammatory cells, activated dendritic cells and other identifiable pathological morphological features. A convolutional neural network, including but not limited to VGG, ResNet, Inception, Xception or Inception-ResNet, is used to build the classification model, thereby determining whether the scanned layer shows an abnormal morphology.
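A minimal sketch of building such a normal/abnormal classifier with one of the named backbones (ResNet, via torchvision) might look as follows; the input size, optimizer and learning rate are assumptions for illustration, and pretrained weights could be loaded instead of the random initialization used here.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_abnormality_classifier(num_classes: int = 2) -> nn.Module:
    """Binary normal/abnormal head on a ResNet-18 backbone (one of the
    architectures named above)."""
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

# One training step, assuming `images` is a (N, 3, H, W) float tensor and
# `labels` a (N,) tensor of 0 (normal) / 1 (abnormal).
model = build_abnormality_classifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)     # placeholder batch
labels = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```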
It will be appreciated that in step S6, the cornea hierarchy that could not be identified from image features is further identified by combining the depth values to determine the accurate cornea hierarchy, and in step S7 lesion screening is performed based on the cornea hierarchy identified in this way.
It can be understood that after the cornea hierarchy images are acquired, image recognition is performed with a deep learning algorithm and depth-value analysis is performed with a machine learning algorithm, so that the anatomical hierarchy is detected automatically during in-vivo corneal scanning, the hierarchy can be identified accurately even for lesioned or abnormal regions, and visual reconstruction is performed. Fully automatic hierarchy labelling is thus achieved without manual intervention, reducing labour cost, while the analysis method that integrates several machine learning algorithms makes the cornea hierarchy identification accurate and stable.
Referring to fig. 2, step S6: and obtaining the depth value of the current second cornea image, and judging the cornea level of the current second cornea image. The step S6 specifically includes steps S61 to S62:
Step S61: using a template matching algorithm to acquire a corresponding depth value in a specific area of the current second cornea image; and
Step S62: the cornea hierarchy of the current second cornea image is predicted based on the depth values.
It will be appreciated that in step S61, since the thickness of the cornea increases in pathological states such as corneal edema and the absolute scan depth of every level increases accordingly, the absolute depth is converted into a relative depth, i.e. the relative depth of each scanned layer within the current cornea. The relative depth values and the image layering results (epithelial cell layer, anterior elastic layer, corneal stroma layer, posterior elastic layer, endothelial cell layer) are used as inputs to a machine learning algorithm, including but not limited to a K-nearest-neighbour classification model, LightGBM (Light Gradient Boosting Machine), decision tree or GBDT, which is pre-trained to obtain the probability that each relative depth value belongs to each hierarchy, and the relative depth ranges of the individual hierarchies, including the anterior and posterior elastic layers, are calculated.
It can be appreciated that by using a template matching algorithm to obtain corresponding depth values for a particular region, the amount of invalid computation of depth value acquisition of the overall image is reduced, and the efficiency of cornea hierarchy recognition is improved.
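A sketch of steps S61–S62 under these assumptions is shown below: cv2.matchTemplate locates the instrument's depth readout region (reading the numeric value out of that region, e.g. by digit templates or OCR, is omitted), the absolute depth is converted to a relative depth, and a k-nearest-neighbour model — one of the algorithms named above — maps the relative depth to per-layer probabilities, which correspond to the P_k used in the confidence correction later. The training numbers are toy values.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def locate_depth_region(image_gray: np.ndarray, template_gray: np.ndarray) -> np.ndarray:
    """Find the on-image depth readout area with normalized cross-correlation."""
    result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(result)          # best match location
    h, w = template_gray.shape
    return image_gray[top_left[1]:top_left[1] + h, top_left[0]:top_left[0] + w]

def to_relative_depth(abs_depth_um: float, cornea_thickness_um: float) -> float:
    """Absolute scan depth -> relative position within the current cornea (0..1)."""
    return abs_depth_um / cornea_thickness_um

LAYERS = ["epithelium", "anterior_elastic", "stroma", "posterior_elastic", "endothelium"]

# Layer prediction from relative depth, trained on already-labelled scans
# (k-NN here; LightGBM, decision trees or GBDT would slot in the same way).
rel_depths = np.array([[0.02], [0.05], [0.08], [0.10], [0.45],
                       [0.55], [0.88], [0.92], [0.96], [0.99]])   # toy training data
layer_ids = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
knn = KNeighborsClassifier(n_neighbors=3).fit(rel_depths, layer_ids)

probs = knn.predict_proba(np.array([[to_relative_depth(420.0, 540.0)]]))  # P_k per layer
```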
It is to be understood that steps S61 to S62 are merely implementation modes of this example, and implementation modes thereof are not limited to steps S61 to S62.
Referring to fig. 3, the step S6 further includes steps S63 to S65:
Step S63: sorting the second cornea image and the current second cornea image of which the cornea layers have been identified in step S4 based on the corresponding depth values;
step S64: calculating the confidence coefficient of the cornea hierarchy corresponding to the current second cornea image based on the depth values of the ordered second cornea images; and
Step S65: and judging the cornea hierarchy of the current second cornea image based on the confidence.
It is understood that in step S63, the second cornea image and the current second cornea image of which the cornea hierarchy has been identified in step S4 are ordered based on the corresponding depth values.
It will be appreciated that in step S64, based on the depth values of the ordered second cornea images, the corrected confidence of classifying the target picture into each hierarchy is calculated according to the following confidence correction function:
F_k = P_k × α_k
The hierarchy with the maximum corrected confidence is judged to be the hierarchy of the target picture, i.e. the hierarchy number k of the target picture is:
k = argmax(F_k)
Here k denotes the hierarchy number, where k = 1, 2, 3, 4, 5 denote classification into the epithelial cell layer, anterior elastic layer, corneal stroma layer, posterior elastic layer and endothelial cell layer, respectively. F_k denotes the corrected confidence of classifying the target picture into hierarchy k. P_k denotes the confidence, obtained from the scan depth value in step S62, that the target picture is classified into hierarchy k. α_k denotes the correction coefficient for classifying the target picture into hierarchy k. The calculation proceeds as follows: based on the corresponding depth values, the second cornea images whose cornea hierarchy has already been identified in step S4 and the current second cornea image are sorted in increasing order to obtain a sequence L = (l_1, l_2, ..., l_n), where i denotes the serial number of a sorted second cornea image and l_i denotes the hierarchy to which the i-th image belongs. For each value of k, α_k is then calculated from the sequence L, where e is the base of the natural logarithm, n is the length of the sequence L, t is the serial number of the target picture in the sequence, and k is the hierarchy to which the target picture belongs.
Therefore, after the corrected confidences are calculated by the above formula, the cornea hierarchy with the highest corrected confidence is taken as the hierarchy of the current image.
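The correction step can be sketched as follows. P_k comes from the depth-based prediction of step S62; because the patent's exact formula for α_k is given as an image and is not reproduced in this text, the sketch substitutes a hypothetical neighbourhood-consistency coefficient (the fraction of depth-sorted neighbours already identified as layer k), so it illustrates the F_k = P_k × α_k and argmax mechanism rather than the exact coefficient.

```python
import numpy as np

LAYERS = ["epithelium", "anterior_elastic", "stroma", "posterior_elastic", "endothelium"]

def neighbourhood_alpha(sorted_layers, t, k, window=3):
    """Hypothetical correction coefficient: fraction of the target's depth-sorted
    neighbours already identified as layer k (stand-in for the patent's alpha_k)."""
    lo, hi = max(0, t - window), min(len(sorted_layers), t + window + 1)
    neigh = [l for i, l in enumerate(sorted_layers[lo:hi], start=lo)
             if i != t and l is not None]
    if not neigh:
        return 1.0
    return sum(1 for l in neigh if l == k) / len(neigh)

def corrected_layer(P, sorted_layers, t):
    """F_k = P_k * alpha_k; return argmax_k F_k (0-based layer index)."""
    F = [P[k] * neighbourhood_alpha(sorted_layers, t, k) for k in range(len(P))]
    return int(np.argmax(F))

# Example: target at position 4 in the depth-sorted sequence; None marks images
# whose layer could not be identified from image features alone.
seq = [0, 0, 1, None, None, 2, 2, 3, 4]
P = [0.05, 0.40, 0.45, 0.05, 0.05]            # depth-based confidences P_k
print(LAYERS[corrected_layer(P, seq, t=4)])    # -> "stroma"
```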
It can be understood that, by means of confidence correction for the prediction result after predicting the cornea level based on the depth value, the positioning result of the cornea level is further accurately predicted, and by means of confidence correction, the pre-elastic layer and the post-elastic layer which cannot be identified by the common image identification method are accurately positioned for the ordered plurality of second cornea images, so that the identification accuracy of the cornea level is improved.
It is to be understood that steps S63 to S65 are merely one implementation of this example, and the implementation thereof is not limited to steps S63 to S65.
Referring to fig. 4, step S7: it is identified whether an abnormality exists in the current second cornea image. The step S7 specifically includes steps S71 to S73:
Step S71: and judging whether the current second cornea image is positioned on the front elastic layer or the rear elastic layer, if so, entering a step S72, and if not, entering a step S73.
Step S72: an abnormal state of an adjacent cornea level of the current second cornea image is identified to determine the abnormal state of the current second cornea image. And
Step S73: and continuously scanning, judging the number of images with abnormality, judging the pathological changes if the number of images exceeds 3, and prompting the abnormality if the number of images is less than 3.
It is understood that in step S71, it is determined, based on the cornea hierarchy identified in step S65, whether the current second cornea image is located in an elastic layer (the anterior or posterior elastic layer), so that the subsequent steps can determine whether an abnormality exists in the current second cornea image. For example, in step S72, abnormalities are detected in the cornea layers adjacent to the elastic layer; for the anterior elastic layer these are the corneal epithelial layer and the corneal stroma layer, and if a lesion is detected in the corneal epithelial layer or in the superficial stromal images, it is determined that the lesion also occurs in the region of the anterior elastic layer. As another example, in step S73, for a non-elastic layer, other second cornea images are acquired by continuous scanning (that is, scanning continues in sequence), so that whether the current cornea hierarchy is diseased or abnormal is judged comprehensively from the result of the continuous scan.
It can be appreciated that in step S72, when an abnormality or lesion is to be identified in a second cornea image located in an elastic layer, it cannot be identified directly; whether a lesion exists in the elastic layer must be judged comprehensively from the adjacent cornea layers, which improves the accuracy of lesion identification in the elastic layers and avoids the large computation and training cost of identifying them directly.
It can be understood that in step S73, the abnormality or lesion of the current cornea hierarchy is judged comprehensively from several continuously scanned second cornea images, which improves the accuracy of lesion judgement. The reason is that in practice lesions do not occur only as isolated, intermittent layers but in contiguous groups, so groups of consecutive unidentifiable layers are judged to be lesions, whereas an isolated unrecognizable layer only triggers an abnormality prompt. Possible special situations (such as an intermittently abnormal scan layer caused by a change of the patient's eye position) are thereby filtered out and are not easily misjudged as lesions, which improves the accuracy of lesion diagnosis.
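The decision logic of steps S71–S73 can be condensed into a few lines; treating "adjacent_abnormal" as a single boolean and the behaviour at exactly three abnormal scans are simplifying assumptions, since the text only states "more than 3" versus "less than 3".

```python
ELASTIC_LAYERS = {"anterior_elastic", "posterior_elastic"}

def judge_abnormality(layer: str, image_abnormal: bool,
                      adjacent_abnormal: bool, consecutive_abnormal: int) -> str:
    """Decision logic of step S7 as described above (threshold of 3 consecutive
    abnormal scans taken from the text)."""
    if layer in ELASTIC_LAYERS:
        # S72: the acellular elastic layers are judged through their neighbours
        # (epithelium / stroma for the anterior layer, stroma / endothelium for
        # the posterior layer).
        return "lesion" if adjacent_abnormal else "normal"
    # S73: non-elastic layers - require a run of abnormal scans before calling a lesion
    if not image_abnormal:
        return "normal"
    return "lesion" if consecutive_abnormal > 3 else "abnormality_prompt"

print(judge_abnormality("stroma", True, False, consecutive_abnormal=5))  # -> lesion
print(judge_abnormality("anterior_elastic", False, True, 0))             # -> lesion
```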
It is to be understood that steps S71 to S73 are merely implementation of this example, and implementation thereof is not limited to steps S71 to S73.
Referring to fig. 5, the method for identifying cornea hierarchy and locating lesions based on image and depth according to the first embodiment of the present invention further includes:
Step S8: and carrying out three-dimensional imaging display based on the identified multiple cornea layers and lesions, and displaying the abnormalities and lesions in the corresponding areas. And
Step S9: based on the identified multiple cornea layers, a tree-type tissue structure diagram of the cornea layers is derived, and the original pictures are derived in batches according to the categories.
It can be understood that in step S8, the identified second cornea images are ordered and displayed as a three-dimensional image, with abnormalities and lesions shown in the corresponding regions, so that a corresponding diagnostic image covering the whole depth range of the lesion is output. The visual reconstruction makes it convenient for the user to check the output result, realizes automated and intelligent input, computation, identification and output, and facilitates diagnosis and other work based on the result.
It will be appreciated that in step S9, after the identified second cornea images are sorted, the hierarchy and sequence relationships of each class are displayed, the pictures in each class can be previewed by class, and the original pictures can be exported in batches by class. This tree-structured classification preview and batch export of all the second cornea images helps the technique to be applied to more scenarios such as medical teaching.
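A small sketch of the batch export of step S9 is given below, assuming the identified hierarchy and a lesion flag are available per image; the lesion/normal sub-folders are an assumption about the tree layout, not specified by the text.

```python
from pathlib import Path
import shutil

def export_by_layer(records, out_dir: str) -> None:
    """Copy original pictures into a folder tree mirroring the identified
    hierarchy (records: iterable of (image_path, layer_name, is_lesion))."""
    root = Path(out_dir)
    for image_path, layer, is_lesion in records:
        target = root / layer / ("lesion" if is_lesion else "normal")
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(image_path, target / Path(image_path).name)

# export_by_layer([("scan_001.png", "stroma", True)], "export")
```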
Referring to fig. 6, the step S8 specifically further includes the following steps:
Step S81: mapping a plurality of second cornea images into a cornea sagittal section schematic diagram, and rolling to display the three-dimensional depth and the hierarchical positioning of the second cornea images in real time; and
Step S82: and constructing a three-dimensional thermodynamic diagram according to the depth coordinates of the abnormality in the cornea hierarchy, and colorizing to display the distribution probability of lesions of each hierarchy.
It will be appreciated that in step S81, the position of the cornea layer is displayed in real time by scrolling, so that the user can conveniently view the stereoscopic positioning of the currently output cornea image.
It can be appreciated that in step S82, different lesion degrees or conditions are displayed in a colorized manner, so that the lesion extent is shown intuitively, further facilitating the user's viewing.
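As an illustration of the colorized display of step S82, the sketch below renders per-layer lesion probabilities as a colour-coded map with matplotlib; it is a two-dimensional stand-in for the three-dimensional heat map described above, and the probability values are made up for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

LAYERS = ["epithelium", "anterior_elastic", "stroma", "posterior_elastic", "endothelium"]

def plot_lesion_heatmap(lesion_probability_per_layer) -> None:
    """Colour-code the per-layer lesion probability along the scan depth axis
    (a 2-D stand-in for the 3-D heat map described above)."""
    data = np.array(lesion_probability_per_layer).reshape(-1, 1)
    fig, ax = plt.subplots(figsize=(3, 5))
    im = ax.imshow(data, cmap="hot", aspect="auto", vmin=0.0, vmax=1.0)
    ax.set_yticks(range(len(LAYERS)))
    ax.set_yticklabels(LAYERS)
    ax.set_xticks([])
    fig.colorbar(im, label="lesion probability")
    plt.show()

plot_lesion_heatmap([0.05, 0.10, 0.62, 0.30, 0.08])
```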
It is to be understood that steps S81 to S82 are merely implementation of this example, and implementation thereof is not limited to steps S81 to S82.
Referring to fig. 7, a second embodiment of the present invention provides a cornea hierarchy identification and lesion localization system based on images and depth, comprising:
An information and image acquisition unit 1 for acquiring patient information and a corresponding plurality of first cornea images.
And the image definition detection unit 2 is used for performing definition detection on the first cornea image and selecting a plurality of second cornea images with definition meeting the requirement.
And a hierarchy characteristic preliminary screening unit 3 for judging whether the cornea hierarchy of the current second cornea image is identifiable based on the image characteristics.
A hierarchy characteristic identifying unit 4 for identifying the cornea hierarchy of the second cornea image determined to be identifiable in the hierarchy characteristic preliminary screening unit;
And a lesion recognition unit 5 for recognizing whether the second cornea image in the hierarchical feature recognition unit has a lesion.
And the depth recognition unit 6 is used for acquiring the depth value of the second cornea image which is judged to be unidentifiable in the hierarchy characteristic preliminary screening unit and judging the cornea hierarchy of the current second cornea image.
An abnormality determination unit 7 for identifying whether the second cornea image in the depth recognition unit has a lesion.
A visual reconstruction unit 8 for performing three-dimensional imaging display based on the identified plurality of cornea hierarchies and lesions, and displaying the abnormalities and lesions in the corresponding areas. And
And a preview and export unit 9 for exporting a tree-type tissue structure diagram of the cornea hierarchy based on the identified cornea hierarchies, and exporting the original pictures in batches according to the categories.
Referring to fig. 8, the depth recognition unit 6 further includes:
The depth information extraction unit 61 is configured to acquire a corresponding depth value in a specific region of the current second cornea image using a template matching algorithm.
And a hierarchy predicting unit 62 for predicting a cornea hierarchy of the current second cornea image based on the depth values. Or (b)
The depth recognition unit may further include:
An image ordering unit 63 for ordering the second cornea image and the current second cornea image of which the cornea hierarchies have been identified by the hierarchy characteristic identifying unit, based on the corresponding depth values.
A confidence calculating unit 64, configured to calculate a confidence level of the current second cornea image corresponding to the cornea hierarchy based on the depth values of the ordered plurality of second cornea images. And
The level correction unit 65 is used for judging the cornea level of the current second cornea image based on the confidence.
It can be understood that the cornea hierarchy identification and lesion positioning system based on image and depth provided by the second embodiment of the invention is particularly suited to carrying out the above cornea hierarchy identification and lesion positioning method. After performing preliminary image identification on the acquired cornea images, the system can accurately identify the cornea hierarchy by combining image features with depth values, automatically detect the anatomical hierarchy during in-vivo corneal scanning, accurately identify the hierarchy of lesioned or abnormal regions, and visually reconstruct the depth range of the lesion. Fully automatic hierarchy labelling is thus achieved without manual intervention, reducing labour cost, while the analysis method that integrates several machine learning algorithms identifies the cornea hierarchy with high accuracy and a stable effect.
Referring to fig. 9, the third embodiment of the present invention further provides an apparatus, specifically an electronic apparatus, which includes a memory 10 and a processor 20. The memory 10 stores a computer program arranged to perform, when run, the steps of any of the image- and depth-based cornea hierarchy identification and lesion positioning method embodiments described above, and the processor 20 is arranged to execute the steps of those method embodiments by means of the computer program.
Alternatively, in this embodiment, the electronic device may be located in at least one of a plurality of network devices of a computer network.
Specifically, the electronic apparatus is particularly suited to image- and depth-based cornea hierarchy identification and lesion positioning. After performing preliminary image identification on the acquired cornea images, the apparatus can accurately identify the cornea hierarchy by combining image features with depth values, automatically detect the anatomical hierarchy during in-vivo corneal scanning, accurately identify the hierarchy of lesioned or abnormal regions, and perform visual reconstruction. Fully automatic hierarchy labelling is thus achieved without manual intervention, reducing labour cost, while the analysis method that integrates several machine learning algorithms identifies the cornea hierarchy with high accuracy and a stable effect.
Compared with the prior art, the cornea hierarchy identification and lesion positioning method and system based on the image and the depth have the following advantages:
1. After the cornea hierarchy images are acquired, image recognition is performed with a deep learning algorithm and depth-value analysis is performed with a machine learning algorithm, so that the anatomical hierarchy is detected automatically during in-vivo corneal scanning and the hierarchy can be identified accurately even in lesioned or abnormal regions, followed by visual reconstruction. Fully automatic hierarchy labelling is thus achieved without manual intervention, reducing labour cost, while the analysis method that integrates several machine learning algorithms identifies the cornea hierarchy with high accuracy and a stable effect.
2. A template matching algorithm obtains the corresponding depth value from a specific region of the image, which avoids the useless computation of acquiring depth values over the whole image and improves the efficiency of cornea hierarchy identification.
3. The cornea hierarchy is first predicted from the depth value and the prediction is then refined by confidence correction, which makes the hierarchy localization more accurate. Through this confidence correction over the ordered second cornea images, the anterior and posterior elastic layers, which ordinary image recognition cannot identify, are located accurately, improving the identification accuracy of the cornea hierarchy.
4. When an abnormality or lesion is to be identified in an image located in an elastic layer, the judgement is made comprehensively from the adjacent cornea layers, which improves the lesion identification accuracy for the elastic layers and avoids the large computation and training cost of identifying them directly. For the other layers, the abnormality or lesion of the current cornea hierarchy is judged comprehensively from several continuously scanned second cornea images, which improves the accuracy of lesion judgement, prevents discrete non-lesion abnormalities from being misjudged as lesions, and further improves the accuracy of cornea hierarchy identification.
5. The identified second cornea images are ordered and displayed as a three-dimensional image, with abnormalities and lesions shown in the corresponding regions, so that a corresponding diagnostic image covering the whole depth range of the lesion is output. This visual reconstruction makes it convenient for the user to check the output result, realizes automated and intelligent input, computation, identification and output, and facilitates diagnosis and other work based on the result.
6. Different lesion degrees or conditions are displayed by three-dimensional colorization, so that the lesion extent is shown intuitively and is easy for the user to review.
7. After the identified second cornea images are ordered, the hierarchy and sequence relationships of each class are displayed, the pictures in each class can be previewed by class, and the original pictures can be exported in batches by class. This tree-structured classification preview and batch export helps the technique to be applied to more scenarios such as medical teaching.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the methods shown in the flowcharts.
The above-described functions defined in the method of the application are performed when the computer program is executed by a processor. It should be noted that, the computer memory according to the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer memory may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing.
More specific examples of computer memory may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable signal medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a digital signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, for example, described as: the processor comprises an information and image acquisition unit, an image definition detection unit, a hierarchical feature learning unit, an image recognition unit, a depth recognition unit and an abnormality recognition unit. The names of these units do not constitute a limitation on the unit itself in some cases, and for example, the abnormality recognition unit may also be described as "a unit that recognizes whether or not there is an abnormality in the current second cornea image".
The above embodiments are merely preferred embodiments of the present invention, and are not intended to limit the present invention, but any modifications, equivalents, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.
Claims (7)
1. A cornea hierarchy identification and lesion localization method based on images and depth, characterized in that: the cornea hierarchy comprises an epithelial cell layer, an epithelial nerve fiber plexus, an anterior elastic layer, a corneal stroma layer, a posterior elastic layer, and an endothelial cell layer;
the method comprises the following steps:
Step S1: acquiring patient information and a plurality of corresponding first cornea images, wherein the first cornea images are cornea confocal microscope images and comprise a plurality of layer-by-layer cornea scan images and the scan depth corresponding to each image;
Step S2: performing sharpness detection on the first cornea images, and selecting a plurality of second cornea images whose sharpness meets the requirement;
Step S3: judging, based on the image features, whether the cornea hierarchy of the current second cornea image is identifiable; if so, proceeding to step S4, and if not, proceeding to step S6;
Step S4: identifying the cornea hierarchy of the current second cornea image by image recognition; and
Step S5: identifying whether the current second cornea image has a lesion, and ending the process;
Step S6: acquiring the depth value of the current second cornea image, and judging the cornea hierarchy of the current second cornea image; and
Step S7: identifying whether the current second cornea image is abnormal;
the step S6 comprises the following steps:
Step S61: using a template matching algorithm to acquire the corresponding depth value from a specific area of the current second cornea image; and
Step S62: predicting the cornea hierarchy of the current second cornea image based on the depth value;
Step S63: sorting, based on the corresponding depth values, the second cornea images whose cornea hierarchy has been identified in step S4 together with the current second cornea image;
Step S64: calculating the confidence of the cornea hierarchy corresponding to the current second cornea image based on the depth values of the sorted second cornea images; and
Step S65: judging the cornea hierarchy of the current second cornea image based on the confidence;
the step S64 is specifically as follows:
Based on the depth values of the sorted second cornea images, the correction confidence of classifying the target picture into each hierarchy is calculated according to the following confidence correction function:
F_k = P_k × α_k;
The hierarchy that obtains the maximum correction confidence is judged to be the hierarchy of the target picture, and the hierarchy number k of the target picture is:
k = argmax(F_k);
wherein k denotes the hierarchy number, k = 1, 2, 3, 4, 5; F_k denotes the correction confidence of classifying the target picture into hierarchy k; P_k denotes the confidence of classifying the target picture into hierarchy k according to the scan depth value; and α_k denotes the correction coefficient for classifying the target picture into hierarchy k;
The calculation process is as follows: based on the corresponding depth values, the second cornea images whose cornea hierarchy has been identified in step S4 and the current second cornea image are sorted in increasing order of depth to obtain a sequence L = (l_1, l_2, …, l_n), where i denotes the position of a sorted second cornea image in the sequence and l_i denotes the hierarchy to which the i-th image belongs; for each value of k, the correction coefficient α_k is calculated over the sequence L, wherein e is the base of the natural logarithm, n is the length of the sequence L, t is the position of the target picture in the sorted sequence, and k denotes the hierarchy to which the target picture belongs;
the step S65 is specifically: taking the cornea hierarchy with the highest correction confidence as the cornea hierarchy corresponding to the current second cornea image.
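Claim 1 fixes the decision rule F_k = P_k × α_k and k = argmax(F_k), but the expression that produces α_k from the sorted sequence L (the formula involving e, n, t and k) is not reproduced in this text. The sketch below is therefore only a minimal illustration of steps S63–S65, with an assumed exponential neighbour-weighting standing in for the patent's α_k; function and parameter names are illustrative.

```python
import numpy as np

def corrected_hierarchy(depths, labels, target_idx, level_probs, n_levels=5):
    """Steps S63-S65 sketch: sort images by scan depth, derive a correction
    coefficient alpha_k for each cornea hierarchy from the labels of the
    target picture's neighbours in the sorted sequence, and pick the
    hierarchy with the highest corrected confidence F_k = P_k * alpha_k.

    depths      : scan depth of every second cornea image (target included)
    labels      : hierarchy label 1..n_levels for already-identified images,
                  0 for the target picture (unknown)
    target_idx  : index of the target picture in `depths`/`labels`
    level_probs : P_k, depth-based confidence of the target for each level
    """
    order = np.argsort(depths)                    # incremental sort by depth
    seq_labels = np.asarray(labels)[order]        # sequence L = (l_1 ... l_n)
    t = int(np.where(order == target_idx)[0][0])  # position of the target in L
    n = len(seq_labels)

    f = np.zeros(n_levels)
    for k in range(1, n_levels + 1):
        # ASSUMED form of alpha_k (the patent's exact formula is not given
        # here): exponentially down-weighted agreement of the sorted
        # neighbours' labels with hierarchy k.
        weights = np.exp(-np.abs(np.arange(n) - t))
        weights[t] = 0.0                          # the target itself does not vote
        alpha_k = float(np.sum(weights * (seq_labels == k))) / (np.sum(weights) + 1e-9)
        f[k - 1] = level_probs[k - 1] * alpha_k   # F_k = P_k * alpha_k

    return int(np.argmax(f)) + 1                  # k = argmax(F_k)
```

Any other monotone weighting of the neighbouring, already-identified images would slot into the same structure; only the F_k = P_k × α_k product and the argmax decision come from the claim itself.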
2. The method for identifying cornea hierarchy and locating lesions based on image and depth as recited in claim 1, wherein: the cornea hierarchy comprises an anterior elastic layer and a posterior elastic layer;
The step S7 specifically includes the following steps:
Step S71: judging whether the current second cornea image is located in the anterior elastic layer or the posterior elastic layer; if so, proceeding to step S72, and if not, proceeding to step S73;
Step S72: identifying an abnormal state of an adjacent corneal layer of the current second corneal image to determine the abnormal state of the current second corneal image; and
Step S73: continuing the scanning and counting the number of abnormal images; if more than 3 consecutive images are abnormal, a lesion is determined, and otherwise an abnormality is prompted.
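A minimal sketch of the rule in step S73 as read above: frames are taken in scan order, and a run of more than three consecutive abnormal frames is reported as a lesion, while shorter runs only trigger an abnormality prompt. The function name and flag format are assumptions.

```python
def classify_abnormal_run(frame_flags, lesion_threshold=3):
    """Step S73 sketch: find the longest run of consecutive abnormal frames.
    More than `lesion_threshold` abnormal frames in a row -> "lesion";
    any shorter run -> "abnormality"; no abnormal frame -> "normal".
    `frame_flags` is a list of booleans (True = frame flagged abnormal)."""
    longest = current = 0
    for abnormal in frame_flags:
        current = current + 1 if abnormal else 0
        longest = max(longest, current)
    if longest > lesion_threshold:
        return "lesion"
    return "abnormality" if longest > 0 else "normal"
```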
3. The method for identifying cornea hierarchy and locating lesions based on image and depth as recited in claim 1, wherein: further comprises:
Step S8: performing three-dimensional imaging display based on the identified cornea hierarchies and lesions, and displaying the abnormalities and lesions in the corresponding regions.
4. A method for image and depth based cornea hierarchical recognition and lesion localization as set forth in claim 3, wherein: the step S8 specifically includes the following steps:
Step S81: mapping the plurality of second cornea images into a cornea sagittal-section schematic diagram, and scrolling to display the three-dimensional depth and hierarchy positioning of the second cornea images in real time; and
Step S82: constructing a three-dimensional heat map according to the depth coordinates of the abnormalities within the cornea hierarchy, and using color to display the lesion distribution probability of each hierarchy.
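Step S82 turns abnormality depth coordinates into a colourised three-dimensional heat map. The sketch below shows one plausible construction, binning (x, y, depth) detections into a voxel grid and normalising it into a distribution; the grid size, smoothing and normalisation are assumptions, not the patented construction.

```python
import numpy as np

def lesion_heat_volume(abnormal_points, shape=(384, 384), depth_um=600,
                       bins=(32, 32, 24), sigma=1.5):
    """Step S82 sketch: bin abnormality coordinates (x, y, scan depth) into a
    coarse 3-D grid and normalise it so each voxel approximates the lesion
    distribution probability at that location; a viewer can then colourise
    the volume hierarchy by hierarchy. All parameters are illustrative.

    abnormal_points : iterable of (x_px, y_px, depth_um) tuples
    """
    pts = np.asarray(list(abnormal_points), dtype=float)
    if pts.size == 0:
        return np.zeros(bins)
    hist, _ = np.histogramdd(
        pts, bins=bins,
        range=((0, shape[0]), (0, shape[1]), (0, depth_um)),
    )
    # optional smoothing so isolated detections still form a visible hot spot
    try:
        from scipy.ndimage import gaussian_filter
        hist = gaussian_filter(hist, sigma=sigma)
    except ImportError:
        pass
    total = hist.sum()
    return hist / total if total > 0 else hist
```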
5. A method for image and depth based cornea hierarchical recognition and lesion localization as set forth in claim 3, wherein: further comprises:
Step S9: based on the identified cornea hierarchies, exporting a tree-structured organization diagram of the cornea hierarchies, and exporting the original pictures in batches according to category.
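Step S9 amounts to grouping the original pictures by their identified hierarchy and exporting the resulting tree. A minimal sketch, assuming the records are simple (path, hierarchy) pairs and that one folder per hierarchy is an acceptable realisation of the tree-structured diagram:

```python
import shutil
from pathlib import Path

def export_by_hierarchy(image_records, out_dir):
    """Step S9 sketch: copy the original pictures into one folder per
    identified cornea hierarchy and print the resulting tree.
    `image_records` is an iterable of (source_path, hierarchy_name) pairs."""
    root = Path(out_dir)
    root.mkdir(parents=True, exist_ok=True)
    for src, layer in image_records:
        dst = root / layer
        dst.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst / Path(src).name)
    print(f"{root.name}/")
    for layer in sorted(p.name for p in root.iterdir() if p.is_dir()):
        count = sum(1 for _ in (root / layer).iterdir())
        print(f"  {layer}/ ({count} images)")
```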
6. An image and depth-based cornea hierarchy identification and lesion localization system, characterized in that it is applied to the image and depth-based cornea hierarchy identification and lesion localization method according to any one of claims 1 to 5, comprising:
the information and image acquisition unit is used for acquiring patient information and a plurality of corresponding first cornea images;
The image sharpness detection unit is used for performing sharpness detection on the first cornea images and selecting a plurality of second cornea images whose sharpness meets the requirement;
The hierarchy characteristic preliminary screening unit is used for judging whether the cornea hierarchy of the current second cornea image is identifiable or not based on the image characteristics;
The hierarchy characteristic identifying unit is used for identifying the cornea hierarchy of the second cornea image which is judged to be identifiable in the hierarchy characteristic preliminary screening unit;
the lesion recognition unit is used for recognizing whether the second cornea image in the hierarchical feature recognition unit has lesions or not;
the depth identification unit is used for acquiring the depth value of the second cornea image which is judged to be unidentifiable in the hierarchy characteristic preliminary screening unit and judging the cornea hierarchy of the current second cornea image;
an abnormality determination unit for identifying whether the second cornea image in the depth identification unit has a lesion;
the visual reconstruction unit is used for carrying out three-dimensional imaging display based on the identified cornea layers and lesions and displaying the abnormalities and the lesions in the corresponding areas; and
and the preview and export unit is used for exporting a tree-structured organization diagram of the cornea hierarchies based on the identified cornea hierarchies, and for exporting the original pictures in batches according to category.
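For orientation only, the sketch below wires the claimed units into one pipeline object; each callable is a placeholder for the corresponding unit (acquisition, sharpness screening, preliminary screening, hierarchy recognition, depth-based recognition, lesion/abnormality detection), not an implementation of the patented models.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class CorneaPipeline:
    """Sketch of how the claimed units could be composed; all callables are
    placeholder hooks, not the patented algorithms."""
    acquire: Callable[[], List[Tuple[object, float]]]     # (image, scan depth) pairs
    is_sharp: Callable[[object], bool]                    # image sharpness detection unit
    has_hierarchy_features: Callable[[object], bool]      # preliminary screening unit
    classify_hierarchy: Callable[[object], int]           # hierarchy feature recognition unit
    hierarchy_from_depth: Callable[[object, float], int]  # depth recognition unit
    find_lesion: Callable[[object], bool]                 # lesion / abnormality units

    def run(self):
        results = []
        for image, depth in self.acquire():                    # first cornea images
            if not self.is_sharp(image):                       # keep only second cornea images
                continue
            if self.has_hierarchy_features(image):
                level = self.classify_hierarchy(image)         # steps S4-S5
            else:
                level = self.hierarchy_from_depth(image, depth)  # steps S6-S7
            results.append((image, level, self.find_lesion(image)))
        return results   # hand off to visual reconstruction / export units
```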
7. The image and depth based cornea hierarchical recognition and lesion localization system of claim 6, wherein: the depth recognition unit further includes:
The depth information extraction unit is used for acquiring a corresponding depth value in a specific area of the current second cornea image by using a template matching algorithm;
a hierarchy predicting unit for predicting the cornea hierarchy of the current second cornea image based on the depth value; or
The depth recognition unit further includes:
an image sorting unit for sorting the second cornea image having identified the cornea hierarchy and the current second cornea image based on the corresponding depth values;
The confidence calculating unit is used for calculating the confidence of the cornea hierarchy corresponding to the current second cornea image based on the depth values of the ordered second cornea images; and
And the hierarchy correcting unit is used for judging the cornea hierarchy of the current second cornea image based on the confidence.
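Step S61 and the depth information extraction unit obtain the depth value of a frame with a template matching algorithm, which suggests the scan depth is read from a fixed on-screen read-out in the confocal image. The sketch below shows a conventional OpenCV realisation of that idea; the template, the 0.7 threshold and the crop offsets are assumptions, and the returned region would still need a digit recogniser.

```python
import cv2

def read_depth_region(frame_gray, depth_label_template):
    """Step S61 sketch: locate the on-screen depth label by template matching
    and crop the area where the depth digits are expected to sit.
    Returns the cropped region, or None if no confident match is found."""
    result = cv2.matchTemplate(frame_gray, depth_label_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < 0.7:                      # assumed match threshold
        return None                        # read-out not found in this frame
    x, y = max_loc
    h, w = depth_label_template.shape[:2]
    # crop the area just right of the matched label, where the digits sit
    digits_roi = frame_gray[y:y + h, x + w:x + w + 60]
    return digits_roi                      # pass to an OCR / digit classifier
```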
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011498043.9A CN112634221B (en) | 2020-12-17 | 2020-12-17 | Cornea hierarchy identification and lesion positioning method and system based on images and depth |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011498043.9A CN112634221B (en) | 2020-12-17 | 2020-12-17 | Cornea hierarchy identification and lesion positioning method and system based on images and depth |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112634221A CN112634221A (en) | 2021-04-09 |
CN112634221B true CN112634221B (en) | 2024-07-16 |
Family
ID=75316497
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011498043.9A Active CN112634221B (en) | 2020-12-17 | 2020-12-17 | Cornea hierarchy identification and lesion positioning method and system based on images and depth |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112634221B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113962995B (en) * | 2021-12-21 | 2022-04-19 | 北京鹰瞳科技发展股份有限公司 | A cataract model training method and cataract identification method |
CN115690092B (en) * | 2022-12-08 | 2023-03-31 | 中国科学院自动化研究所 | Method and device for identifying and counting amoeba cysts in corneal confocal images |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102567737A (en) * | 2011-12-28 | 2012-07-11 | 华南理工大学 | Method for locating eyeball cornea |
CN102884551A (en) * | 2010-05-06 | 2013-01-16 | 爱尔康研究有限公司 | Devices and methods for assessing changes in corneal health |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA3049256A1 (en) * | 2017-01-11 | 2018-07-19 | University Of Miami | Method and system for three-dimensional thickness mapping of corneal micro-layers and corneal diagnoses |
CN110384582A (en) * | 2019-07-17 | 2019-10-29 | 温州医科大学附属眼视光医院 | A kind of air pocket method Deep liminal keratoplasty device and its application method |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102884551A (en) * | 2010-05-06 | 2013-01-16 | 爱尔康研究有限公司 | Devices and methods for assessing changes in corneal health |
CN102567737A (en) * | 2011-12-28 | 2012-07-11 | 华南理工大学 | Method for locating eyeball cornea |
Also Published As
Publication number | Publication date |
---|---|
CN112634221A (en) | 2021-04-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||