CN118447499A - Respiratory tract inspection result interpretation method and system based on image recognition - Google Patents
- Publication number
- CN118447499A (application CN202410508787.6A)
- Authority
- CN
- China
- Prior art keywords
- layer
- image
- classification
- result
- respiratory tract
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/693—Microscopic objects, e.g. biological cells or cellular parts; Acquisition
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/469—Contour-based spatial representations, e.g. vector-coding
- G06V10/56—Extraction of image or video features relating to colour
- G06V10/82—Recognition using pattern recognition or machine learning using neural networks
- G06V20/695—Microscopic objects; Preprocessing, e.g. image segmentation
- G06V20/698—Microscopic objects; Matching; Classification
- G16H30/20—ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
- G16H50/20—ICT for computer-aided diagnosis, e.g. based on medical expert systems
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
The invention discloses a method and system for interpreting respiratory tract examination results based on image recognition. In the method, a fluorescence image of a specimen for the nine-item respiratory tract panel is preprocessed; the cell contours of the preprocessed fluorescence image are separated from the background to extract a cell-contour image, and a number of small images each containing a cell object are separated out of the fluorescence image; the forward pass of a convolutional neural network is run on each separated small image to obtain a classification result; the classification results are then counted, and a final interpretation result is output according to manually set decision conditions. Because the software is fully trained on a large amount of original data, the invention improves the accuracy and processing efficiency of interpreting the nine respiratory tract test items, while also greatly reducing the labor intensity of medical staff and improving their working efficiency.
Description
This application is a divisional application of the application with application number 2019102566789, filed on April 1, 2019.
Technical Field
The invention relates to medical technology, and in particular to a system and method for interpreting respiratory tract examination results based on image recognition.
Background
The existing nine-item respiratory tract examination depends on an operator observing microscope equipment with the naked eye; results are judged and counted from the examiner's past experience, which suffers from subjectivity in result judgment, high labor intensity, low efficiency and low speed.
The examination process is, on the whole, rule-based recognition: images are recognized and counted to produce a final result, so it is in principle amenable to automated processing. However, with traditional image processing and classification techniques such as the SVM (support vector machine), recognition accuracy on these specific medical images is not high enough for reliable practical application.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention provides a system and method for interpreting respiratory tract examination results based on image recognition. By adopting convolutional neural network artificial intelligence technology with a custom-optimized structure, it performs target recognition, classification and counting on electronic images with higher accuracy, and thereby interprets the respiratory tract examination results.
To achieve the above purpose, the invention provides a respiratory tract examination result interpretation method based on image recognition, comprising the following steps:
preprocessing a fluorescence image of a specimen for the nine-item respiratory tract panel;
separating the cell contours of the preprocessed fluorescence image from the background to extract a cell-contour image, and separating a number of small images containing cell objects from the fluorescence image;
running the forward pass of a convolutional neural network on each separated small image to obtain a classification result;
counting the classification results, and outputting a final interpretation result according to manually set decision conditions;
wherein the convolutional neural network comprises three convolutional layers. The first layer is the input layer, which receives each separated small image through 3 channels corresponding to the R, G and B data components of the input image; the first convolutional layer extracts colour-related image features.
After the first convolution operation, the data is fed into a pooling layer, which connects to the second convolutional layer; the second convolutional layer extracts the distribution of colour features in two-dimensional space.
The data processed by the second convolutional layer also passes through a pooling layer, after which it is fed to the third convolutional layer; the third convolutional layer extracts the features that distinguish fluorescence features from other interfering impurity features in two-dimensional space.
The result of the third convolutional layer is sent to the output layer, which outputs a prediction probability for each of three classes (negative, positive and impurity) as the classification result.
As a further improvement of the present invention, separating the cell contours of the preprocessed fluorescence image from the background specifically includes: converting the preprocessed fluorescence image to greyscale and inverting it, then enhancing its contrast; and then separating a number of small images containing only cell objects from the whole image by a watershed algorithm.
As a further improvement of the invention, the watershed algorithm selects, as growth seed regions, regions of the greyscale image whose brightness is below 20% of the average brightness of the whole picture, and iteratively expands each region until regions collide or the brightness of the area to be annexed exceeds 80% of the whole-picture average, at which point expansion stops; the resulting regions are the regions where the targets are located. Running the watershed algorithm over the whole image separates out a number of small images containing only cell objects.
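The seeded region-growing behaviour described above (seeds where brightness is below 20% of the whole-picture average, expansion stopped at 80% of the average or on collision) can be sketched as follows. This is an illustrative simplification, not the patented implementation; the 4-connectivity and breadth-first growth order are assumptions.

```python
from collections import deque

import numpy as np

def grow_regions(gray, seed_frac=0.2, stop_frac=0.8):
    """Seeded region growing over a greyscale image: pixels darker than
    seed_frac * mean start regions; a region annexes 4-connected
    neighbours no brighter than stop_frac * mean and never crosses
    into an already-labelled region."""
    mean = gray.mean()
    h, w = gray.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for sy, sx in zip(*np.where(gray < seed_frac * mean)):
        if labels[sy, sx]:
            continue  # already swallowed by an earlier region
        next_label += 1
        labels[sy, sx] = next_label
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                        and gray[ny, nx] <= stop_frac * mean):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
    return labels
```

Each labelled region can then be cropped from the full image to give the small per-cell images that are fed to the classifier.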
As a further improvement of the invention, in the convolutional neural network the first convolutional layer takes input images of size 38x38; each separated small image is uniformly scaled to 38x38 while preserving its aspect ratio. The first-layer convolution kernel is 3x3 and the number of feature maps is 6;
the second convolutional layer has 8 feature maps with 3x3 convolution kernels;
the third convolutional layer has 12 feature maps with 3x3 convolution kernels.
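Taken together with the pooling layers described earlier, these sizes can be traced layer by layer. The sketch below assumes unpadded ("valid") 3x3 convolutions and a 2x2 pooling window, which is consistent with, though not stated by, the text; under those assumptions the 38x38 input yields a 6x6 map with 12 features at the third convolutional layer.

```python
def trace_shapes(size=38, convs=((3, 6), (3, 8), (3, 12)), pool=2):
    """Trace (spatial size, feature maps) through the described network:
    three 3x3 convolutions with 6, 8 and 12 feature maps, and a pooling
    layer after each of the first two. Valid convolution and a 2x2
    pooling window are assumptions."""
    shapes = [(size, 3)]  # input layer: 38x38, 3 channels (R, G, B)
    for i, (k, maps) in enumerate(convs):
        size = size - k + 1  # valid convolution shrinks each side by k-1
        shapes.append((size, maps))
        if i < len(convs) - 1:  # pool after conv 1 and conv 2 only
            size //= pool
    return shapes

# trace_shapes() -> [(38, 3), (36, 6), (16, 8), (6, 12)] under these assumptions
```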
As a further improvement of the present invention, counting the classification results specifically includes:
counting according to the three categories negative, positive and impurity;
and, at the same time, analysing and counting the size and fluorescence intensity of each input small image, then outputting the final interpretation result after combining these statistics with the classification result.
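A minimal sketch of this counting stage, assuming each classified small image is reported as a (label, size, intensity) tuple; the positive-count threshold is a hypothetical stand-in for the manually set decision conditions.

```python
from collections import Counter

def interpret(results, positive_min=5):
    """Tally the negative/positive/impurity labels, compute simple size
    and fluorescence-intensity statistics over the cell (non-impurity)
    detections, and apply a decision rule. positive_min is illustrative
    only: the patent leaves the decision conditions to be set manually."""
    counts = Counter(label for label, _, _ in results)
    cells = [(s, i) for label, s, i in results if label != "impurity"]
    mean_size = sum(s for s, _ in cells) / len(cells) if cells else 0.0
    mean_intensity = sum(i for _, i in cells) / len(cells) if cells else 0.0
    verdict = "positive" if counts["positive"] >= positive_min else "negative"
    return counts, mean_size, mean_intensity, verdict
```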
As a further improvement of the present invention, the respiratory tract examination result interpretation method further includes:
feedback training: if manual verification of the classification count results finds classification errors, the corrected information is fed back to the classification training database unit, and the convolutional neural network classification processing unit is triggered to retrain and generate new convolutional neural network classification code.
As a further improvement of the invention, the convolutional neural network is trained on actually acquired medical images, and the image labels used in training are classified and marked by experts with authoritative judgment capability, ensuring the reliability and accuracy of the images used for training.
The invention also provides a respiratory tract examination result interpretation system based on image recognition, which comprises a convolutional neural network classification processing unit, wherein the convolutional neural network has a multilayer structure;
The convolutional neural network comprises three convolutional layers. The first layer is the input layer, which receives each small image separated from the fluorescence image of the nine-item respiratory tract panel specimen through 3 channels corresponding to the R, G and B data components of the input image; the first convolutional layer extracts colour-related image features.
After the first convolution operation, the data is fed into a pooling layer, which connects to the second convolutional layer; the second convolutional layer extracts the distribution of colour features in two-dimensional space.
The data processed by the second convolutional layer also passes through a pooling layer, after which it is fed to the third convolutional layer; the third convolutional layer extracts the features that distinguish fluorescence features from other interfering impurity features in two-dimensional space.
The result of the third convolutional layer is sent to the output layer, which outputs a prediction probability for each of three classes (negative, positive and impurity) as the classification result.
As a further improvement of the present invention, the respiratory tract examination result interpretation system further comprises:
The digital image preprocessing unit is used for preprocessing the digital image acquired by the digital image acquisition unit;
And the image target separation unit is used for separating the cell outline from the background through a watershed algorithm and extracting the cell outline image.
As a further improvement of the present invention, the respiratory tract examination result interpretation system further comprises:
The classification result counting and statistical analysis unit is used for carrying out classification statistics on the identified digital image characteristics and then outputting a final statistical result according to preset conditions;
and the result judgment and data output unit, which summarizes the classification count results and, according to the manually set decision conditions, performs result judgment and formatted data output.
Further, the respiratory tract examination result interpretation system based on image recognition comprises a computer, a microscope and an industrial camera. Software for interpreting respiratory tract examination results is installed on the computer; the microscope receives the specimens for the nine-item respiratory tract panel and magnifies them to acquire cell images; the industrial camera captures the image magnified by the microscope to obtain an electronic cell image; and the computer recognizes and processes the electronic image through the software, finally judging and outputting the respiratory tract examination result.
As a further improvement of the present invention, the software includes:
The digital image acquisition unit is used for acquiring an electronic image of the specimen; the image acquisition device here is an industrial camera with digital imaging capability, or a camera with equivalent function.
The digital image preprocessing unit is used for preprocessing the digital image acquired by the digital image acquisition unit;
the image target separation unit, which separates the cell contours from the background by a watershed algorithm and extracts the cell-contour image;
the convolutional neural network classification processing unit, which extracts the features of the electronic image processed by the image target separation unit and classifies them; this unit has a multilayer convolutional neural network structure and is obtained by means of a classification training database unit built from training on a large amount of data;
the classification result counting and statistical analysis unit is used for carrying out classification statistics on the identified electronic image characteristics and then outputting a final statistical result according to preset conditions;
and the result judgment and data output unit, which summarizes the classification count results and, according to the manually set decision conditions, performs result judgment and formatted data output.
As a further improvement of the invention, the system also includes a classification training database unit for training on existing, accurately labelled electronic images by means of the convolutional neural network classification processing unit.
The invention also discloses a respiratory tract examination result interpretation method based on image recognition, which comprises the following steps:
S1, a specimen to be examined is placed on a slide, the specimen image on the slide is magnified by the microscope, and the magnified digital specimen image is captured by the image acquisition device;
S2, the digital image preprocessing unit filters noise from the image with a Gaussian filter, as required for the subsequent image analysis and feature extraction, and then enhances cell-contour detail through contrast adjustment;
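Step S2 (Gaussian noise filtering, then contrast enhancement) can be sketched with a separable Gaussian blur and a linear contrast stretch; the kernel radius, sigma and the full-range stretch are assumptions, since the patent does not fix them.

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=3):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def denoise_and_stretch(img, sigma=1.0):
    """Separable Gaussian noise filtering (filter rows, then columns)
    followed by a linear contrast stretch to [0, 1], mirroring step S2."""
    k = gaussian_kernel(sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    lo, hi = blurred.min(), blurred.max()
    return (blurred - lo) / (hi - lo + 1e-12)
```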
S3, the image target separation unit first converts the electronic image from the digital image preprocessing unit to greyscale and enhances its contrast, then separates the target image from the background image by a watershed algorithm, extracting a number of small images containing only cell objects from the whole image;
the watershed algorithm selects, as growth seed regions, regions of the greyscale image whose brightness is below 20% of the average brightness of the whole picture, and iteratively expands each region until regions collide or the brightness of the area to be annexed exceeds 80% of the whole-picture average, at which point expansion stops; the resulting regions are the regions where the targets are located, and running the watershed algorithm over the whole image separates out a number of small images containing only cell objects.
Next, the initial parameters of the watershed algorithm are determined automatically by measuring the overall brightness level of the image; finally, under-sized and over-sized targets are discarded automatically according to the typical pixel-size range of targets;
The target size depends on the magnification of the microscope and the resolution of the camera. In an application matching a 40x objective with a 10x electronic eyepiece, if the image sensor resolution is 6.3 megapixels, the target diameter is usually restricted to the range of 30 to 300 pixels.
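The size gate described above is a one-line filter; the region records below (with a diameter_px field) are a hypothetical representation of the separated candidates.

```python
def keep_plausible_targets(regions, min_d=30, max_d=300):
    """Drop candidate regions whose diameter falls outside the usual
    pixel range: 30-300 px for the quoted 40x objective, 10x electronic
    eyepiece and 6.3-megapixel sensor configuration."""
    return [r for r in regions if min_d <= r["diameter_px"] <= max_d]
```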
S4, the forward pass of a convolutional neural network is run on each small image separated in S3, obtaining a classification result;
S6, the classification result counting and statistical analysis unit combines the classification probabilities predicted in S4 by the convolutional neural network classification processing unit with statistical parameters of specific image features, performs a comprehensive threshold comparison, and outputs the final interpretation result;
S7, the result judgment and data output unit summarizes the classification count results and, according to the manually set decision conditions, performs result judgment and formatted data output.
As a further improvement of the invention, S1 also includes the digital image acquisition unit performing fast focus adjustment by measuring the amount of red in the image, exploiting the red appearance of the cell bodies, so that imaging is focused at the red focal point for the best image sharpness; at the same time, the exposure parameters are adjusted to minimize background noise in the image, maximize the contrast between the cell image and the background, and bring out the fluorescence features.
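One plausible focus metric for the red-channel autofocus described above is the mean gradient magnitude of the red channel, which rises as red structures come into focus. The patent does not specify the metric, so this choice is an assumption.

```python
import numpy as np

def red_focus_score(rgb):
    """Sharpness score computed from the red channel only, matching the
    red appearance of the cell bodies: mean gradient magnitude, one
    common autofocus statistic (the actual metric is unstated)."""
    red = rgb[..., 0].astype(float)
    gy, gx = np.gradient(red)
    return float(np.hypot(gx, gy).mean())
```

A focus sweep would then pick the lens position that maximizes this score.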
As a further improvement of the present invention, S2 further includes the digital image preprocessing unit halving the horizontal and vertical dimensions of the original electronic image from the digital image acquisition unit, reducing the image area to 1/4 of the original; a standard linear interpolation algorithm is used for the scaling.
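For an exact factor-of-two reduction, linear interpolation sampled at the new pixel centres reduces to averaging each 2x2 block, so the halving step can be sketched as:

```python
import numpy as np

def halve(img):
    """Halve the horizontal and vertical dimensions (area -> 1/4 of the
    original) by averaging each 2x2 block, which is what standard linear
    interpolation gives for an exact 2x downscale."""
    h, w = img.shape[0], img.shape[1]
    img = img[: h - h % 2, : w - w % 2]  # trim odd edges if present
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
```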
As a further improvement of the present invention, the convolutional neural network classification processing unit in S4 comprises a convolutional neural network with three convolutional layers. The first layer is the input layer, taking images of size 38x38; each small image separated in S3 is uniformly scaled to 38x38 while preserving its aspect ratio, and the layer has 3 channels to receive the R, G and B data components of the input image. The first-layer convolution kernel is 3x3 with 6 feature maps, and extracts colour-related image features.
After the first convolution operation, the data is fed into a pooling layer, which connects to the second convolutional layer; the second convolutional layer has 8 feature maps with 3x3 convolution kernels and extracts the distribution of colour features in two-dimensional space.
The data processed by the second convolutional layer also passes through a pooling layer, after which it is fed to the third convolutional layer; the third convolutional layer has 12 feature maps with 3x3 convolution kernels and extracts the features that distinguish fluorescence features from other interfering impurity features in two-dimensional space.
The result of the third convolutional layer is sent to the output layer, which outputs a prediction probability for each of three classes (negative, positive and impurity) as the classification result.
As a further improvement of the invention, the method further comprises S5, training the classification model library unit: the classification training model library unit is obtained by training a convolutional neural network with the same structure as that of the convolutional neural network classification processing unit on a large number of actually acquired medical images; the image labels used in training are classified and marked by experts with authoritative judgment capability, ensuring the reliability and accuracy of the images used for training.
As a further improvement of the present invention, S6 further includes counting according to the three categories negative, positive and impurity, while also analysing and counting the size and fluorescence intensity parameters of each input small image.
As a further improvement of the invention, the method also comprises S8, feedback training: if manual checking of the classification count results finds classification errors, the corrected information and the error information are fed back to the classification training database unit, and the convolutional neural network classification processing unit is triggered to retrain and generate new convolutional neural network classification code.
The beneficial effects of the invention are as follows: because the software is fully trained on a large amount of original data, the invention improves the accuracy and processing efficiency of interpreting the nine respiratory tract test items, and at the same time greatly reduces the labor intensity of medical staff and improves their working efficiency.
Drawings
Fig. 1 is a schematic structural view of the present invention.
Fig. 2 is a schematic diagram of the operational flow of the present invention.
Fig. 3 is a flowchart of the image preprocessing and object separation processing of the present invention.
Fig. 4 is a structural and process flow diagram of a convolutional neural network element of the present invention.
FIG. 5 is a flow chart of the result judging and data outputting unit processing of the present invention.
Detailed Description
The invention is further illustrated by the following embodiments in conjunction with the accompanying drawings:
Referring to figs. 1-2, the respiratory tract examination result interpretation system of this embodiment consists of a computer 100, a microscope 200 and an industrial camera 300. Software for interpreting the nine-item respiratory tract examination results is installed on the computer 100; the microscope 200 receives the specimens placed for the nine-item respiratory tract panel and magnifies them to acquire cell images; the industrial camera 300 captures the image magnified by the microscope 200 to obtain an electronic cell image; and the computer recognizes and processes the electronic image through the software, finally judging the nine respiratory tract test results. The respiratory tract examination in this embodiment is the nine-item respiratory tract panel.
The software comprises:
A digital image acquisition unit 110 for acquiring an electronic image of a specimen, in this embodiment, the digital image acquisition unit acquires the electronic image of the specimen through an industrial camera and a microscope;
a digital image preprocessing unit 120, configured to perform preprocessing on the digital image acquired by the digital image acquisition unit 110, where the preprocessing includes performing gaussian filtering, enhancing a cell contour in the electronic image, and cropping the electronic image;
the image target separation unit 130 separates the cell outline from the background by a watershed algorithm and also extracts the cell outline image;
the convolutional neural network classification processing unit 140 extracts the electronic image characteristics processed by the image target separation unit and classifies the characteristics;
the classification training database unit 150 is configured to train the convolutional neural network classification processing unit on existing, accurately labeled electronic images, so as to improve the recognition efficiency of the convolutional neural network classification processing unit;
the classification result counting and statistical analysis unit 160 is used for performing classification statistics on the identified electronic image features, and then outputting a final statistical result according to preset conditions;
the result judgment and data output unit 170 gathers the classification count results, and performs result judgment and data formatting output according to the manually set judgment conditions.
The software can be installed on computer equipment such as a personal computer. It connects to the industrial camera through a data interface such as USB or a network interface, and to a microscope used for medical examinations through a standard microscope interface. The software collects the high-definition digital images captured by the camera and processes them in real time.
Referring to fig. 3-5, the operation method of the respiratory tract examination result interpretation system according to the present embodiment includes the following steps:
S1, placing a specimen to be examined on a slide, magnifying the specimen image on the slide with a microscope, and acquiring the magnified digital specimen image with digital image acquisition equipment such as an industrial camera with digital imaging capability, or a camera with equivalent function;
after the specimen slide for the nine respiratory tract examinations is imaged under a fluorescence microscope, a positive specimen shows green fluorescence around the cells produced by fluorescence excitation, while a negative specimen shows no green fluorescence around the cells;
The digital image acquisition unit first performs rapid focal adjustment by counting the amount of red data in the image, exploiting the red appearance of the cell bodies, so that imaging is correctly focused where the red signal concentrates and optimal image sharpness is obtained. At the same time, by adjusting the exposure parameters, the unit minimizes background noise, maximizes the contrast between the cell image and the background, and highlights the fluorescent features.
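The red-statistics autofocus described above can be sketched as follows. This is an illustrative focus metric built on the assumption that sharper focus concentrates more red signal in the frame; the function and parameter names are not taken from the patent.

```python
def red_focus_score(pixels):
    """Sum of red components across a frame of (R, G, B) tuples.
    Illustrative stand-in for the patent's 'red data volume' statistic."""
    return sum(r for r, g, b in pixels)

def best_focus(frames):
    """Pick the focal position whose frame maximizes the red score."""
    return max(range(len(frames)), key=lambda i: red_focus_score(frames[i]))

# Three hypothetical frames captured at different focal positions
frames = [
    [(10, 5, 5), (12, 6, 6)],      # defocused: little red signal
    [(80, 10, 10), (90, 12, 12)],  # in focus: red concentrated
    [(30, 8, 8), (25, 7, 7)],      # defocused
]
print(best_focus(frames))  # → 1
```

The camera would be stepped through focal positions and the frame with the highest score kept, which matches the "focus at the red gathering place" behavior described above.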
The digital image acquisition unit in this embodiment uses a high-speed data interface such as USB 3.0, which guarantees a high acquisition frame rate in high-resolution mode and helps the system focus and adjust parameters quickly.
S2, the digital image preprocessing unit filters noise from the image with a Gaussian filtering algorithm, as required for further image analysis and feature extraction; it then enhances cell contour detail through contrast adjustment.
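The Gaussian filtering step in S2 relies on a normalized Gaussian kernel; a minimal sketch of constructing one is shown below. The kernel size and sigma are assumed values, since the patent does not specify them.

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2D Gaussian kernel for the S2 noise-filtering
    step. size and sigma are illustrative assumptions."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)]
         for y in range(size)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]  # weights sum to 1

k = gaussian_kernel(3, 1.0)
print(round(sum(sum(row) for row in k), 6))  # → 1.0 (normalized)
```

Convolving the image with this kernel averages each pixel with its neighbors, weighted by distance, which suppresses sensor noise before contrast adjustment.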
To reduce the amount of data to be computed in the subsequent target separation, this embodiment halves the original image from the digital image acquisition unit in both the horizontal and vertical directions, reducing the image area to 1/4 of the original; a standard linear interpolation algorithm is used for the scaling.
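For an exact 2x reduction, linear interpolation reduces to averaging each 2x2 block. A minimal grayscale sketch (illustrative, not the patented implementation):

```python
def downscale_half(img):
    """Halve each dimension of a grayscale image (list of rows) by
    averaging 2x2 blocks; the output area is 1/4 of the input, as in S2."""
    h, w = len(img), len(img[0])
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) // 4
             for x in range(w // 2)]
            for y in range(h // 2)]

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [4, 4, 12, 12],
       [4, 4, 12, 12]]
print(downscale_half(img))  # → [[0, 8], [4, 12]]
```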
S3, the image target separation unit first performs grayscale conversion on the electronic image from the digital image preprocessing unit and then enhances contrast to strengthen the cell-edge features;
it then separates the target image from the background with a watershed algorithm, extracting a plurality of small-size images containing only cell objects from the whole image;
Next, by measuring the overall brightness level of the image, the initial parameters of the watershed algorithm are determined automatically so that a reasonable number of targets is separated; objects that fall outside the usual pixel-size range of valid targets, whether too small or too large, are discarded automatically.
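The brightness-driven seed selection and the size filtering can be sketched as follows. The 20% seed threshold follows the recitation in claim 3, while the pixel-size bounds are assumed values for illustration only.

```python
def seed_mask(gray, low_frac=0.2):
    """Mark pixels darker than low_frac of the mean brightness as watershed
    growth seeds, following the 20%-of-mean threshold recited in claim 3."""
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return [[p < low_frac * mean for p in row] for row in gray]

def size_filter(regions, min_px=30, max_px=5000):
    """Discard under- and over-sized separated objects, as in S3.
    The pixel bounds are assumed, not taken from the patent."""
    return [r for r in regions if min_px <= len(r) <= max_px]

gray = [[200, 200, 10], [200, 5, 200], [200, 200, 200]]
mask = seed_mask(gray)
print(sum(cell for row in mask for cell in row))  # → 2 seed pixels

regions = [[0] * 10, [0] * 100]   # pixel lists of two separated objects
print(len(size_filter(regions)))  # → 1 (the 10-pixel object is discarded)
```

A full implementation would grow the seed regions iteratively, stopping on overlap or when the expansion front exceeds 80% of mean brightness, as claim 3 describes.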
S4, the convolutional neural network classification processing unit is provided with a multi-layer convolutional neural network structure and uses a classification training database unit obtained after training on a large amount of data;
it first performs the forward operation of the convolutional neural network on the small-size images separated in S3 and obtains the classification results;
The structure of the convolutional neural network is layered according to the size, color, and other characteristics of the target images to be classified. The convolutional neural network in this embodiment includes 3 convolutional layers. The first layer is an input layer 141 for inputting an image of size 38x38; each small-size image separated in S3 is uniformly scaled to 38x38 while maintaining its aspect ratio. The input layer has 3 channels for receiving the R, G, and B data components of the input image, respectively. The first-layer convolution kernel is 3x3 with 6 feature maps, and it mainly extracts color-related image features;
after the first convolution operation is completed, the data are fed into a pooling layer, whose output is connected to the second convolution layer 142. The second convolution layer has 8 feature maps with 3x3 convolution kernels and extracts the spatial distribution of the color features, such as the attachment pattern between a fluorescent area and a cell body;
the data processed by the second convolution layer are likewise fed into a pooling layer; after pooling, they are passed to the third convolution layer 143, which has 12 feature maps with 3x3 convolution kernels and extracts features that distinguish fluorescent features from interference such as impurities in two-dimensional space.
The processing result of the third convolution layer is sent to the output layer 144, which outputs the predicted likelihood for each of the three classes negative, positive, and impurity, i.e., the classification result.
In this embodiment, the convolutional neural network also employs techniques such as training data augmentation and dropout to avoid overfitting and to improve overall classification accuracy.
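Assuming valid (unpadded) 3x3 convolutions and 2x2 pooling, details the patent does not state, the feature-map sizes through the three convolution layers described in S4 can be traced as follows:

```python
def shape_trace(size=38, convs=(6, 8, 12), k=3):
    """Trace (spatial size, channel count) through the three 3x3 convolution
    layers (6, 8, 12 feature maps) and the two pooling layers of S4.
    Valid convolution and 2x2 pooling are assumptions, not patent text."""
    shapes = [(size, 3)]            # 38x38 RGB input
    for i, maps in enumerate(convs):
        size = size - (k - 1)       # valid 3x3 convolution shrinks by 2
        shapes.append((size, maps))
        if i < len(convs) - 1:      # a pooling layer follows conv1 and conv2
            size //= 2
            shapes.append((size, maps))
    return shapes

print(shape_trace())
# final entry under these assumptions: (6, 12), i.e. 6x6 maps, 12 channels
```

The 6x6x12 result would then be flattened into the output layer 144, which produces the three class likelihoods.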
S5, the classification training model library unit is obtained by training, on a large number of actually acquired medical images, a convolutional neural network with the same structure as that of the classification processing unit; the image labels used in training are classified and annotated by experts with authoritative judgment capability. The training model library can be continuously updated, adjusted, and optimized during application deployment to keep improving the accuracy of the classification counting results.
S6, the classification result counting and statistical analysis unit counts the classification results output by the convolutional neural network classification processing unit in the three categories negative, positive, and impurity, and simultaneously analyzes and records parameters of each input small-size image, such as size and fluorescence intensity, providing the user with further data analysis and with result filtering and correction;
a comprehensive threshold comparison combining the classification likelihood predicted in S4 with the statistical parameters of the specific image features is then performed, and the final interpretation result is output.
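A minimal sketch of this comprehensive threshold comparison: the CNN's positive-class likelihood is combined with the per-image statistics before the final call. All threshold values below are illustrative assumptions, not values disclosed in the patent.

```python
def interpret(p_pos, fluor_intensity, size_px,
              p_thresh=0.5, min_intensity=40, size_range=(30, 5000)):
    """Fuse the CNN likelihood (S4) with statistical parameters (S6) into
    a final per-object call. Thresholds are assumed for illustration."""
    lo, hi = size_range
    if not (lo <= size_px <= hi):
        return "impurity"            # outside the plausible cell-size range
    if p_pos >= p_thresh and fluor_intensity >= min_intensity:
        return "positive"
    return "negative"

print(interpret(0.9, 80, 400))   # → positive
print(interpret(0.9, 80, 9000))  # → impurity (size out of range)
print(interpret(0.2, 80, 400))   # → negative (low CNN likelihood)
```

Requiring both a high likelihood and a sufficient fluorescence intensity reduces false positives from bright impurities that the CNN alone might misclassify.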
S7, the result judging and data output unit summarizes the classification counting results and performs result judgment and formatted data output according to manually set judgment conditions, e.g. negative, positive, or quantitative results, and graphical data output.
The relevant results are provided to the examiner of the nine respiratory tract examination items as reference for the final judgment and for report generation.
If a classification error is found after the classification counting results are manually checked, it can be fed back to the classification training database unit, triggering the convolutional neural network classification processing unit 140 to retrain and generate a new convolutional neural network classification code; through this process the classification counting accuracy is gradually improved.
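The feedback path can be sketched as a loop that accumulates manually corrected samples and triggers retraining once enough corrections arrive. The batching threshold is an assumed parameter, and the retraining call is a stand-in for the actual CNN training.

```python
class FeedbackLoop:
    """Sketch of the S8 feedback path: corrected labels accumulate and
    retraining is triggered per batch. retrain_after is an assumption."""
    def __init__(self, retrain_after=10):
        self.corrections = []
        self.retrain_after = retrain_after
        self.retrain_count = 0

    def report_error(self, image_id, correct_label):
        """Record a manually corrected sample; retrain when a batch is full."""
        self.corrections.append((image_id, correct_label))
        if len(self.corrections) >= self.retrain_after:
            self.retrain_count += 1   # stand-in for CNN retraining
            self.corrections.clear()  # corrections folded into training set

loop = FeedbackLoop(retrain_after=2)
loop.report_error("img-01", "negative")
loop.report_error("img-02", "impurity")
print(loop.retrain_count)  # → 1
```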
Interpretation of some of the terms used in this application:
Convolutional neural network: the convolutional neural network (Convolutional Neural Network, CNN) is a feed-forward neural network whose artificial neurons can respond to surrounding cells within a portion of the coverage area, and consists of one or more convolutional layers and top fully connected layers (corresponding to classical neural networks) while also including associated weights and pooling layers (pooling layer). This structure enables the convolutional neural network to take advantage of the two-dimensional structure of the input data. Convolutional neural networks can give better results in terms of image and speech recognition than other deep learning structures. This model may also be trained using a back propagation algorithm. Compared with other deep and feedforward neural networks, the convolutional neural network has fewer parameters to be considered, so that the convolutional neural network becomes an attractive deep learning structure
Artificial intelligence technology: artificial intelligence (English: Artificial Intelligence, abbreviated AI), also known as machine intelligence, refers to intelligence exhibited by machines manufactured by humans. The term usually refers to techniques for realizing human-like intelligence by means of ordinary computer programs.
SVM: in machine learning, a support vector machine (SVM) is a supervised learning model with associated learning algorithms that analyzes data for classification and regression analysis. Given a set of training instances, each marked as belonging to one of two classes, an SVM training algorithm builds a model that assigns new instances to one of the two classes, making it a non-probabilistic binary linear classifier. The SVM model represents instances as points in space, mapped so that instances of the separate classes are divided by a gap that is as wide as possible. New instances are then mapped into the same space and predicted to belong to a class based on which side of the gap they fall on.
Matters not described in detail in this application are well known to those skilled in the art.
The foregoing describes preferred embodiments of the present invention in detail. It should be understood that a person of ordinary skill in the art could make numerous modifications and variations according to the concepts of the invention without creative effort. Therefore, all technical solutions that a person skilled in the art can obtain on the basis of the prior art through logical analysis, reasoning, or limited experimentation in accordance with the inventive concept shall fall within the scope of protection defined by the claims.
Claims (10)
1. A respiratory tract examination result interpretation method based on image recognition, characterized by comprising the following steps:
preprocessing a fluorescence image of a specimen for nine examinations of the respiratory tract;
Separating the cell outline and the background of the preprocessed fluorescent image to extract the cell outline image, and separating a plurality of small-size images containing cell objects from the fluorescent image;
Performing forward operation processing of a convolutional neural network on the separated small-size image, and obtaining a classification result;
Counting the classification results respectively, and outputting a final interpretation result according to manually set judgment conditions;
wherein the convolutional neural network comprises 3 convolutional layers: the first layer is an input layer used for inputting each separated small-size image; the first layer has 3 channels for receiving the R, G, and B data components of the input image, respectively; the first convolution layer is used for extracting color-related image features;
After the first layer convolution operation is finished, inputting data into a pooling layer, wherein the pooling layer is connected with a second layer convolution layer, and the second layer convolution layer is used for extracting the distribution relation of color features in a two-dimensional space;
the data processed by the second layer of convolution layer is also accessed to the pooling layer, after pooling processing, the data is accessed to the third layer of convolution layer,
The third convolution layer is used for extracting distinguishing features of fluorescent features and other interference impurity features in a two-dimensional space;
and the processing result of the third convolution layer is sent to an output layer, which outputs the predicted likelihood for the three classes negative, positive, and impurity, namely the classification result.
2. The method for interpreting respiratory tract examination results according to claim 1, wherein said separating cell contours and backgrounds of said preprocessed fluorescence images comprises: carrying out grayscale conversion on the preprocessed fluorescence image and then carrying out contrast enhancement; and then separating a plurality of small-size images containing only cell objects from the whole image by a watershed algorithm.
3. The method according to claim 2, characterized in that, in the watershed algorithm, regions of the grayscale image whose brightness is lower than 20% of the average brightness of the whole image are selected as growth seed regions; the region size is expanded by an iterative algorithm until regions overlap or the brightness of the area to be expanded exceeds 80% of the average brightness of the whole image, at which point expansion stops and the resulting region is the region where a target is located; by running the watershed algorithm over the whole image, a plurality of small-size images containing only cell objects are separated from the whole image.
4. The method for interpreting a respiratory tract examination result as claimed in claim 1, wherein, in said convolutional neural network,
The first layer of convolution layer is used for inputting images with the size of 38x38, the separated small-size images are uniformly scaled to the size of 38x38 by keeping the aspect ratio, the size of the first layer of convolution kernel is 3x3, and the number of feature images is 6;
the second convolution layer has 8 feature maps with 3x3 convolution kernels;
The third layer of convolution layer has 12 feature maps with a 3x3 convolution kernel.
5. The method according to claim 1, wherein the counting the classification results respectively comprises:
counting according to three categories of negative, positive and impurity;
And meanwhile, analyzing and counting the size and fluorescence intensity of the input small-size image, and outputting a final interpretation result after integrating the small-size image with the classification result.
6. The method for interpreting respiratory tract examination results according to claim 1, wherein said method for interpreting respiratory tract examination results further comprises:
feedback training: if a classification error is found after the classification counting results are manually verified, the corrected information is fed back to the classification training database unit, and the convolutional neural network classification processing unit is triggered to retrain so as to generate a new convolutional neural network classification code.
7. The method for interpreting respiratory tract examination results according to claim 1, wherein the convolutional neural network is obtained by training on actually acquired medical images, and the image labels used during training are classified and annotated by experts with authoritative judgment capability, thereby guaranteeing the reliability and accuracy of the images used during training.
8. A respiratory tract examination result interpretation system based on image recognition, characterized by comprising a convolutional neural network classification processing unit, wherein the convolutional neural network classification processing unit is provided with a convolutional neural network with a multilayer structure;
The convolutional neural network includes a 3-layer convolutional layer: the first layer is an input layer for inputting each small-sized image separated from the fluorescence image of the specimen of the nine examinations of the respiratory tract, the first layer having 3 channels for receiving R, G, B three channels of data components of the input image, respectively; a first convolution layer for extracting color-related image features;
After the first layer convolution operation is finished, inputting data into a pooling layer, wherein the pooling layer is connected with a second layer convolution layer, and the second layer convolution layer is used for extracting the distribution relation of color features in a two-dimensional space;
The data processed by the second layer of convolution layer is also accessed to a pooling layer, after pooling processing, the data is accessed to a third layer of convolution layer, and the third layer of convolution layer is used for extracting distinguishing features of fluorescence features and other interference impurity features in a two-dimensional space;
and the processing result of the third convolution layer is sent to an output layer, which outputs the predicted likelihood for the three classes negative, positive, and impurity, namely the classification result.
9. The respiratory tract examination result interpretation system of claim 8, further comprising:
The digital image preprocessing unit is used for preprocessing the digital image acquired by the digital image acquisition unit;
And the image target separation unit is used for separating the cell outline from the background through a watershed algorithm and extracting the cell outline image.
10. The respiratory tract examination result interpretation system of claim 8, further comprising:
The classification result counting and statistical analysis unit is used for carrying out classification statistics on the identified digital image characteristics and then outputting a final statistical result according to preset conditions;
and the result judging and data outputting unit is used for summarizing the classified counting results and outputting the result judging and data formatting according to the manually set judging conditions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410508787.6A CN118447499A (en) | 2019-04-01 | 2019-04-01 | Respiratory tract inspection result interpretation method and system based on image recognition |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410508787.6A CN118447499A (en) | 2019-04-01 | 2019-04-01 | Respiratory tract inspection result interpretation method and system based on image recognition |
CN201910256678.9A CN109815945B (en) | 2019-04-01 | 2019-04-01 | Respiratory tract examination result interpretation system and method based on image recognition |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910256678.9A Division CN109815945B (en) | 2019-04-01 | 2019-04-01 | Respiratory tract examination result interpretation system and method based on image recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118447499A true CN118447499A (en) | 2024-08-06 |
Family
ID=66611140
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910256678.9A Active CN109815945B (en) | 2019-04-01 | 2019-04-01 | Respiratory tract examination result interpretation system and method based on image recognition |
CN202410508787.6A Pending CN118447499A (en) | 2019-04-01 | 2019-04-01 | Respiratory tract inspection result interpretation method and system based on image recognition |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910256678.9A Active CN109815945B (en) | 2019-04-01 | 2019-04-01 | Respiratory tract examination result interpretation system and method based on image recognition |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN109815945B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111126162A (en) * | 2019-11-28 | 2020-05-08 | 东软集团股份有限公司 | Method, device and storage medium for identifying inflammatory cells in image |
CN111134735A (en) * | 2019-12-19 | 2020-05-12 | 复旦大学附属中山医院 | A system, method and computer-readable storage medium for rapid on-site assessment of pulmonary cytopathology |
CN111489833A (en) * | 2019-12-19 | 2020-08-04 | 上海杏脉信息科技有限公司 | Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium |
CN112232327B (en) * | 2020-12-16 | 2021-04-16 | 南京金域医学检验所有限公司 | Anti-nuclear antibody karyotype interpretation method and device based on deep learning |
CN112992336A (en) * | 2021-02-05 | 2021-06-18 | 山西医科大学 | Intelligent pathological diagnosis system |
CN113205709A (en) * | 2021-04-13 | 2021-08-03 | 广东小天才科技有限公司 | Learning content output method and device, electronic equipment and storage medium |
CN113591961A (en) * | 2021-07-22 | 2021-11-02 | 深圳市永吉星光电有限公司 | Minimally invasive medical camera image identification method based on neural network |
CN115326501A (en) * | 2022-07-19 | 2022-11-11 | 广州水石基因科技有限公司 | Method, device, electronic device and readable storage medium for detecting Cryptococcus neoformans |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100483432C (en) * | 2003-08-26 | 2009-04-29 | 深圳市迈科龙医疗设备有限公司 | Genital-infection inteeligent identification system and method |
CN102288607A (en) * | 2011-06-20 | 2011-12-21 | 江南大学 | Woven fabric count detector based on digital microscope |
CN106709421B (en) * | 2016-11-16 | 2020-03-31 | 广西师范大学 | Cell image identification and classification method based on transform domain features and CNN |
CA2948499C (en) * | 2016-11-16 | 2020-04-21 | The Governing Council Of The University Of Toronto | System and method for classifying and segmenting microscopy images with deep multiple instance learning |
CN107064019B (en) * | 2017-05-18 | 2019-11-26 | 西安交通大学 | The device and method for acquiring and dividing for dye-free pathological section high spectrum image |
US11010610B2 (en) * | 2017-06-13 | 2021-05-18 | Google Llc | Augmented reality microscope for pathology |
US10282589B2 (en) * | 2017-08-29 | 2019-05-07 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for detection and classification of cells using convolutional neural networks |
CN107609503A (en) * | 2017-09-05 | 2018-01-19 | 刘宇红 | Intelligent cancerous tumor cell identifying system and method, cloud platform, server, computer |
CN108447047A (en) * | 2018-02-11 | 2018-08-24 | 深圳市恒扬数据股份有限公司 | Acid-fast bacilli detection method and device |
CN109190567A (en) * | 2018-09-10 | 2019-01-11 | 哈尔滨理工大学 | Abnormal cervical cells automatic testing method based on depth convolutional neural networks |
CN109359569B (en) * | 2018-09-30 | 2022-05-13 | 桂林优利特医疗电子有限公司 | Erythrocyte image sub-classification method based on CNN |
CN109389557B (en) * | 2018-10-20 | 2023-01-06 | 南京大学 | A cell image super-resolution method and device based on image prior |
CN109350014A (en) * | 2018-12-10 | 2019-02-19 | 苏州小蓝医疗科技有限公司 | A kind of sound of snoring recognition methods and system |
Also Published As
Publication number | Publication date |
---|---|
CN109815945B (en) | 2024-04-30 |
CN109815945A (en) | 2019-05-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||