
CN114764855A - Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning - Google Patents


Info

Publication number
CN114764855A
CN114764855A (application CN202110037412.2A)
Authority
CN
China
Prior art keywords
deep learning
tumor
bladder
cystoscope
picture
Prior art date
Legal status
Pending
Application number
CN202110037412.2A
Other languages
Chinese (zh)
Inventor
张琦
金新广
楚天广
毕海
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202110037412.2A
Publication of CN114764855A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the intersection of medicine and artificial intelligence, and in particular to a method, device, and equipment for intelligent segmentation of cystoscopic tumors based on deep learning. The method comprises the following steps: acquiring a bladder picture under a cystoscope; inputting the picture into a pre-trained deep learning model to identify the tumor pattern in the bladder picture; and outputting the identification result for the reference of related personnel. On the basis of deep learning, end-to-end image processing and intelligent diagnosis are realized by constructing a deep neural network model, so that doctors are assisted in making diagnoses and the clinical rates of misdiagnosis and missed diagnosis of bladder cancer are reduced.

Description

Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning
Technical Field
The invention relates to the intersection of medicine and artificial intelligence, and in particular to a method, device, and equipment for intelligent segmentation of cystoscopic tumors based on deep learning.
Background
Bladder cancer is the most common malignant tumor of the urinary system: its incidence ranks 7th among all malignant tumors and 4th among men, and about 61,700 bladder cancer patients are diagnosed every year. Statistically, the survival rate for early-stage bladder cancer can exceed 90%, yet clinically many patients still suffer delayed treatment because of misdiagnosis, missed diagnosis, or cumbersome examinations. At present, cystoscopy remains the gold standard for diagnosing bladder cancer and is also the standard procedure for postoperative review. However, bladder tumors take varied forms (villous, follicular, or flat erythema-like) and often resemble various inflammatory lesions, so a doctor may mistake a tumor lesion for a benign lesion, skip the biopsy, and miss the diagnosis.
In addition, the treatment of bladder cancer varies with the type of bladder tumor. For example, non-muscle-invasive bladder cancer is typically treated by transurethral electroresection, while muscle-invasive bladder cancer is typically treated by total cystectomy. At present, the cystoscope can only observe the intravesical appearance of a bladder tumor, and a biopsy, being a superficial sampling, can only tell whether tissue is cancerous, not the specific type or the full extent of the tumor. An experienced urologist can roughly judge the tumor type from its morphology, but not every urologist can do this, and such empirical judgments cannot serve as a gold standard. The patient therefore must undergo a diagnostic electroresection after cystoscopy to completely remove the tumor tissue, determine the type and degree of invasion through histopathology, and only then settle on the next treatment plan. This electroresection greatly burdens the patient and is not an optimal treatment regimen.
Disclosure of Invention
In view of this, a method, device, and equipment for intelligent segmentation of cystoscopic tumors based on deep learning are provided to solve the above problems in the related art.
The invention adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a method for intelligently segmenting a cystoscope tumor based on deep learning, where the method includes:
acquiring a bladder picture under a cystoscope;
inputting the picture into a pre-trained deep learning model to identify a tumor pattern in the bladder picture;
and outputting the identification result for the reference of related personnel.
Optionally, the deep learning model includes: a benign and malignant diagnosis submodel and a bladder tumor segmentation submodel;
the benign and malignant diagnosis submodel is used for identifying the picture as benign lesion or malignant tumor;
the bladder tumor segmentation sub-model is used for identifying the position with the tumor in the picture and segmenting.
Optionally, the training process of the pre-trained deep learning model includes:
obtaining a first training sample;
training a preset benign and malignant diagnosis deep learning model based on the first training sample to obtain a benign and malignant diagnosis sub-model;
acquiring a second training sample;
and training a preset bladder tumor segmentation deep learning model based on the second training sample to obtain a bladder tumor segmentation sub-model.
Optionally, the obtaining a first training sample includes:
constructing a cystoscope image database and acquiring cystoscope images;
classifying and labeling the cystoscope images; the malignant tumor image is labeled as 1 and the benign lesion is labeled as 0 using a 0-1 label.
Optionally, the obtaining a second training sample includes:
constructing a cystoscope image database and acquiring cystoscope images;
and marking the cystoscope image, and marking the tumor position in the cystoscope image.
Optionally, the inputting the picture into a pre-trained deep learning model to obtain the identification of the tumor pattern in the bladder picture includes:
extracting a network based on the backbone features to obtain a feature map of the picture;
inputting the characteristic diagram into a benign and malignant diagnosis submodel, and judging whether the bladder in the image has benign lesions or malignant tumors;
and inputting the characteristic map into a bladder tumor segmentation sub-model, and identifying and segmenting the position of the bladder with tumor in the picture.
Optionally, in the task of segmenting the bladder tumor focus tissue, the bladder tumor segmentation submodel adopts a spatial multi-scale pooling module and an upsampling module, the spatial multi-scale pooling module extracts context information of different regions of the image, and the upsampling module aggregates information of each scale, so that the capability of acquiring global feature information is improved.
In another aspect, an intelligent segmentation apparatus for cystoscopic tumor based on deep learning is characterized by comprising:
the acquisition module is used for acquiring a cystoscope bladder picture;
the deep learning module is used for inputting the picture into a pre-trained deep learning model to identify a tumor pattern in the bladder picture;
and the output module is used for outputting the identification result for the reference of related personnel.
In yet another aspect, a deep learning-based intelligent segmentation apparatus for cystoscopic tumors comprises:
a processor, and a memory coupled to the processor;
the memory is configured to store a computer program for performing at least the deep learning-based cystoscopic intelligent segmentation method of the present application;
the processor is used for calling and executing the computer program in the memory.
In yet another aspect, a storage medium stores a computer program which, when executed by a processor, implements the steps of the method for intelligent segmentation of cystoscopic tumor based on deep learning according to the present application.
By adopting the above technical scheme, a cystoscope bladder picture is first obtained; the picture is input into a pre-trained deep learning model to identify the tumor pattern in the bladder picture; and the identification result is output for the reference of related personnel. Thus, based on deep learning, end-to-end image processing and intelligent diagnosis are realized by constructing a deep neural network model, so that doctors are assisted in making diagnoses, the clinical rates of misdiagnosis and missed diagnosis of bladder cancer are reduced, medical diagnosis efficiency is improved, and patients' diagnosis and treatment costs are helped to fall.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for intelligently segmenting cystoscopic tumors based on deep learning according to an embodiment of the present invention;
FIG. 2 is a diagram of a deep learning model in an embodiment provided by the invention;
fig. 3 is a schematic structural diagram of an intelligent segmentation device for a cystoscope tumor based on deep learning according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an intelligent segmentation apparatus for a cystoscope tumor based on deep learning according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
First, the application scenario of the embodiments of the invention is explained. Bladder cancer is the most common malignant tumor of the urinary system: its incidence ranks 7th among all malignant tumors and 4th among men, and about 61,700 bladder cancer patients are diagnosed every year. Statistically, the survival rate for early-stage bladder cancer can exceed 90%, yet clinically many patients still suffer delayed treatment because of misdiagnosis, missed diagnosis, or cumbersome examinations. Moreover, to determine the type and extent of a tumor, patients clinically must undergo a diagnostic electroresection after cystoscopy to completely excise the tumor tissue, determine the type and degree of invasion through histopathology, and only then settle on the next treatment plan. This electroresection greatly increases the burden on the patient. For this problem, the present application provides an auxiliary diagnosis solution based on artificial intelligence, which helps improve diagnostic efficiency, reduces the rates of misdiagnosis and missed diagnosis, and to some extent spares the patient an unnecessary diagnostic electroresection, thereby reducing the cost and pressure of diagnosis and treatment.
Examples
FIG. 1 is a flowchart of a deep learning-based intelligent segmentation method for cystoscopic tumors according to an embodiment of the present invention; the method can be executed by the intelligent segmentation device for the cystoscope tumor based on deep learning provided by the embodiment of the invention. Referring to fig. 1, the method may specifically include the following steps:
S101, acquiring a cystoscope bladder picture;
it should be noted that, at present, cystoscopy in combination with histopathology is still the gold standard for bladder cancer diagnosis. However, bladder tumors have various forms, are often similar to various inflammatory diseases, and are easily confused, so that misdiagnosis or missed diagnosis in the clinical diagnosis process is caused. Identification of cystoscopy results has remained challenging for young and inexperienced physicians, depending largely on the experience and skill of the examiner. In the field of artificial intelligence technology-assisted cystoscope diagnosis, few achievements exist at home and abroad, O. Eminaga et al propose a deep convolutional neural network model in 2018, evaluate the deep convolutional neural network model by using F1-score, and finally find that the score based on an Xception model is the highest. Ikeda et al in their paper aimed to support CNN-based artificial intelligence cystoscopy in the diagnosis of bladder cancer, validating the likelihood of classification of cystoscopic images, including neoplastic lesions and normal tissue. The method provides experimental feasibility basis for intelligent diagnosis of cystoscopy tumors. The invention provides an intelligent cystoscope tumor segmentation system based on an MPNet deep learning framework by combining a deep learning theory, further realizes the abnormal tissue segmentation facing the cystoscope influence on the basis of tumor classification, and provides guidance and prompt for the tumor diagnosis of a clinician.
S102, inputting the picture into a pre-trained deep learning model to identify a tumor pattern in the bladder picture;
specifically, the deep learning model includes: a benign and malignant diagnosis submodel and a bladder tumor segmentation submodel;
the benign and malignant diagnosis submodel is used for identifying the picture as benign lesion or malignant tumor;
the bladder tumor segmentation submodel is used for identifying the position with the tumor in the picture and cutting.
Wherein the training process of the pre-trained deep learning model comprises the following steps:
obtaining a first training sample; specifically, the obtaining of the first training sample includes:
constructing a cystoscope image database and acquiring cystoscope images; classifying and labeling the cystoscope images; the malignant tumor image is labeled as 1 and the benign lesion is labeled as 0 using a 0-1 label.
Training a preset benign and malignant diagnosis deep learning model based on the first training sample to obtain a benign and malignant diagnosis sub-model;
obtaining a second training sample; specifically, the method comprises the following steps: constructing a cystoscope image database and acquiring cystoscope images; and marking the cystoscope image, and marking the tumor position in the cystoscope image.
And training a preset bladder tumor segmentation deep learning model based on the second training sample to obtain a bladder tumor segmentation sub-model.
And S103, outputting the identification result for the reference of related personnel.
By adopting the above technical scheme, a cystoscope bladder picture is first obtained; the picture is input into a pre-trained deep learning model to identify the tumor pattern in the bladder picture; and the identification result is output for the reference of related personnel. Thus, based on deep learning, end-to-end image processing and intelligent diagnosis are realized by constructing a deep neural network model, so that doctors are assisted in making diagnoses, the clinical rates of misdiagnosis and missed diagnosis of bladder cancer are reduced, medical diagnosis efficiency is improved, unnecessary confirmatory electroresection is avoided to a certain extent, and the cost and pressure of diagnosis and treatment are reduced.
Specifically, the step of inputting the picture into a pre-trained deep learning model to identify the tumor pattern in the bladder picture includes:
extracting a network based on the backbone features to obtain a feature map of the picture;
inputting the characteristic diagram into a benign and malignant diagnosis submodel, and judging whether the bladder in the image has benign lesions or malignant tumors;
And inputting the characteristic map into a bladder tumor segmentation sub-model, and identifying and segmenting the position of the bladder with tumor in the picture.
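As an illustration only, the three-step flow above (a shared backbone feature extractor feeding two parallel heads) can be sketched with stand-in linear layers; every function, weight, and shape here is a hypothetical toy, not the patent's MPNet:

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_features(image):
    # Stand-in for the backbone: project the flattened image to a
    # 16-dimensional feature vector (a real backbone would be a CNN).
    w = rng.standard_normal((image.size, 16)) * 0.01
    return image.reshape(-1) @ w

def diagnosis_head(features):
    # Benign/malignant branch: one linear layer plus softmax.
    w = rng.standard_normal((features.size, 2)) * 0.1
    logits = features @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

def segmentation_head(features, out_shape=(8, 8)):
    # Segmentation branch: per-pixel score, thresholded to a binary mask.
    w = rng.standard_normal((features.size, out_shape[0] * out_shape[1])) * 0.1
    scores = (features @ w).reshape(out_shape)
    return (scores > 0).astype(np.uint8)  # 1 = predicted tumor pixel

image = rng.random((32, 32, 3))     # a normalized cystoscope frame
feats = backbone_features(image)    # shared features for both branches
probs = diagnosis_head(feats)       # [P(benign), P(malignant)]
mask = segmentation_head(feats)     # binary tumor mask
```

The point is structural: both heads consume the same feature representation, which is what makes the two tasks run in parallel.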
The specific training process of the deep learning model is as follows:
First, the data are processed to obtain training data. The system first constructs a cystoscope image database and performs data preprocessing. Bladder tumors in the original images take varied forms (villous, follicular, flat erythema-like, and so on), resemble various inflammatory lesions, and are easily confused with them. Malignant and benign lesions are difficult to distinguish accurately by eye; even an experienced specialist is not perfectly accurate when diagnosing malignant tumors from cystoscopy, and missed diagnoses sometimes occur. The system therefore constructs a deep learning model that learns from a large amount of cystoscope image data, letting the machine learn and extract the intrinsic features of malignant tumors and benign lesions and produce a predicted classification. The machine's conclusion is then referred to by the diagnosing doctor, assisting further diagnostic analysis.
The first step of data preprocessing is to classify and label the original image dataset. The classification task module studies the binary problem of bladder cancer versus non-cancer, so the cystoscope images are labeled as malignant tumor or benign lesion using 0-1 labels: a cancer image is labeled 1 and a non-cancer image 0, and the class labels are then one-hot encoded, which effectively removes any artificial ordering between the classes. The second step is pixel normalization of the images. Before normalization, the raw pixel values range from 0 to 255, and each image is an RGB three-channel color image, i.e. a high-dimensional tensor. Feeding this high-dimensional pixel tensor into the model directly would impose a huge computational overhead on the hardware and make training time-consuming, so the raw pixel values are first normalized, mapping them into the interval 0-1. The remaining preprocessing task is to produce the semantic segmentation dataset, i.e. the label files for the original input images. The semantic segmentation task here is to distinguish normal tissue from abnormal tissue, so the abnormal tissue in each image is annotated during preprocessing for training the semantic segmentation network; this is an important part of data preprocessing and supplies the training process of the segmentation task.
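The two preprocessing operations named above, 0-1 labels with one-hot encoding and 0-255 to 0-1 pixel normalization, are standard. A minimal NumPy sketch (function names are illustrative, not from the patent):

```python
import numpy as np

def normalize_pixels(img_uint8):
    # Map raw 0-255 pixel values into the interval [0, 1].
    return img_uint8.astype(np.float32) / 255.0

def one_hot(labels, num_classes=2):
    # One-hot encode 0/1 class labels (1 = cancer, 0 = non-cancer).
    return np.eye(num_classes, dtype=np.float32)[labels]

raw = np.array([[0, 128, 255]], dtype=np.uint8)  # one toy pixel row
norm = normalize_pixels(raw)                     # [[0.0, ~0.502, 1.0]]
codes = one_hot(np.array([1, 0, 1]))             # rows: [0,1], [1,0], [0,1]
```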
Using the labelme tool, the system manually outlines the nodular lesion tissue under the cystoscope, producing a semantic segmentation label set for learning and training the semantic segmentation deep network. (Note: semantic segmentation is segmentation at the pixel level; pixels belonging to the same class receive the same label.) The labeling of tumor lesion tissue here is done under the direction of specialists: the definition of a tumor and the separation of normal from abnormal tissue are guided by professional clinicians. After the contour of the tumor-like lesion tissue in each cystoscope image is accurately marked, the non-lesion tissue and background noise are removed, yielding a label image corresponding to each bladder tumor image. In essence, semantic segmentation of an image is a per-pixel classification problem: the data annotation amounts to assigning a class to every pixel in the image, and the total loss is the sum of all per-pixel classification losses.
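The closing statement, that the total segmentation loss is the sum of the per-pixel classification losses, can be written out directly as per-pixel cross-entropy; the function name and shapes are illustrative, not from the patent:

```python
import math
import numpy as np

def total_segmentation_loss(probs, labels):
    # probs: (H, W, C) per-pixel class probabilities;
    # labels: (H, W) integer class ids (0 = normal, 1 = abnormal tissue).
    # Total loss = sum over pixels of -log P(correct class at that pixel).
    h, w, _ = probs.shape
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(picked + 1e-12).sum())

probs = np.full((2, 2, 2), 0.5)        # uniform guess over 2 classes
labels = np.zeros((2, 2), dtype=int)   # all four pixels labelled "normal"
loss = total_segmentation_loss(probs, labels)  # 4 pixels x ln 2
```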
Further, referring to fig. 2, the MPNet deep learning network constructed by the system is a two-path, multi-branch deep network structure arranged from top to bottom: a deep learning model that processes two tasks in parallel, the two paths being the benign and malignant diagnosis submodel and the bladder tumor segmentation submodel. An input image first passes through the backbone feature extraction network to obtain a feature map, on the basis of which the two paths realize the classification and segmentation of tumor lesion tissue.
The backbone feature extraction network is a deep convolutional neural network (DCNN) based on the MobileNetV2 model. This model is an algorithmic improvement over ResNet50/101: it reduces ResNet, a laboratory-scale model with heavy demands on computing resources and hardware, to a lightweight deep learning model suited to mobile or embedded devices, while keeping prediction accuracy essentially unchanged. This provides theoretical support for embedding the algorithm into medical examination apparatus in the future.
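The efficiency gain behind MobileNet-style lightweight backbones comes from factoring a standard convolution into a depthwise convolution plus a 1x1 pointwise convolution. The multiplication counts can be checked with a small arithmetic sketch (the layer sizes below are arbitrary examples, not taken from the patent):

```python
def standard_conv_mults(k, c_in, c_out, h, w):
    # Multiplications for a k x k standard convolution on an h x w map.
    return k * k * c_in * c_out * h * w

def separable_conv_mults(k, c_in, c_out, h, w):
    # Depthwise k x k convolution followed by a 1 x 1 pointwise
    # convolution: the factorization MobileNet backbones rely on.
    return k * k * c_in * h * w + c_in * c_out * h * w

std = standard_conv_mults(3, 32, 64, 112, 112)
sep = separable_conv_mults(3, 32, 64, 112, 112)
ratio = sep / std   # exactly 1/c_out + 1/k**2 for this factorization
```

For this example layer the separable form needs roughly one eighth of the multiplications, which is the kind of saving that makes mobile or embedded deployment plausible.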
The left branch (the benign and malignant diagnosis submodel) performs the benign/malignant diagnosis task for bladder tumors. On the basis of the feature map extracted by the first stage of the network, the feature layer is flattened (Flatten) into a one-dimensional vector, a fully connected layer (Dense) is attached, and a classifier yields the benign/malignant diagnosis of the tumor lesion. For malignant tumors, cancer subtypes can be classified further.
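The Flatten, Dense, classifier chain just described amounts to a single linear map followed by softmax. A toy NumPy version, with invented weights and shapes purely for illustration:

```python
import numpy as np

def diagnose(feature_map, weights, bias):
    # Flatten -> Dense -> softmax over {benign, malignant}.
    x = feature_map.reshape(-1)           # Flatten to a 1-D vector
    logits = x @ weights + bias           # fully connected (Dense) layer
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(42)
fmap = rng.random((4, 4, 8))                    # toy backbone feature map
w = rng.standard_normal((fmap.size, 2)) * 0.01  # Dense weights
probs = diagnose(fmap, w, np.zeros(2))          # [P(benign), P(malignant)]
```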
The right branch (the bladder tumor segmentation submodel) performs the segmentation task for bladder tumors. For segmenting bladder tumor lesion tissue, we constructed a deep convolutional neural network based on MobileNetV2 and spatial multi-scale pooling (also known as pyramid pooling). The model realizes end-to-end bladder tumor segmentation and intelligent diagnosis (tumor grading). It combines a spatial multi-scale pooling module, which extracts context information from different regions of the image, with an upsampling module, which aggregates the information from each scale, improving the model's ability to capture global feature information. By fusing hierarchical features at multiple scales, this module combination can effectively distinguish high-grade from low-grade urothelial carcinoma. Tumor grade is an important prognostic factor for bladder cancer and is of great significance in clinical diagnosis.
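A pyramid pooling module of the kind described, pool at several scales, upsample each level back to the input resolution, and concatenate with the input, can be sketched in NumPy. The scale set (1, 2, 3, 6) follows common pyramid-pooling practice and is an assumption, since the patent does not state its scales:

```python
import numpy as np

def adaptive_avg_pool(fmap, bins):
    # Average-pool an (H, W) map into a bins x bins grid.
    rows = np.array_split(np.arange(fmap.shape[0]), bins)
    cols = np.array_split(np.arange(fmap.shape[1]), bins)
    return np.array([[fmap[np.ix_(r, c)].mean() for c in cols] for r in rows])

def upsample_nearest(fmap, out_h, out_w):
    # Nearest-neighbour upsampling back to the input resolution.
    ri = np.arange(out_h) * fmap.shape[0] // out_h
    ci = np.arange(out_w) * fmap.shape[1] // out_w
    return fmap[np.ix_(ri, ci)]

def pyramid_pooling(fmap, scales=(1, 2, 3, 6)):
    # Pool at several scales, upsample each level, and stack the
    # levels with the original map (channel-wise concatenation).
    h, w = fmap.shape
    levels = [upsample_nearest(adaptive_avg_pool(fmap, s), h, w)
              for s in scales]
    return np.stack([fmap] + levels, axis=0)

fmap = np.arange(36, dtype=float).reshape(6, 6)
out = pyramid_pooling(fmap)   # shape (5, 6, 6): input + 4 pooled levels
```

The 1x1 level carries a global summary of the whole map while the finer levels keep regional context, which is how the module mixes local and global information.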
Under the MPNet architecture, the system takes the original cystoscope image as input at the top, passes it through the backbone feature extraction network to obtain a feature map at the middle stage, and then feeds that map both into the bladder tumor segmentation submodel to obtain a segmentation prediction map and into the benign and malignant diagnosis submodel to obtain a classification prediction, completing the two-path multi-task of cystoscopic tumor classification and segmentation in parallel. On the basis of the MPNet network, the system has been implemented in code. Experimental results show that the system can classify benign and malignant tumors under the cystoscope and can segment and label abnormal tissue of high-grade and low-grade bladder tumors; classification accuracy on the test set can reach more than 95%, and class-average pixel accuracy in the segmentation experiments can reach more than 90%.
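The class-average pixel accuracy quoted for the segmentation experiments is, under its usual definition, the per-class pixel accuracy averaged over classes. The patent does not spell out its exact formula, so the sketch below is the standard reading:

```python
import numpy as np

def class_average_pixel_accuracy(pred, target, num_classes=2):
    # Mean over classes of: correctly predicted pixels of class c
    # divided by the number of pixels labelled c.
    accs = []
    for c in range(num_classes):
        mask = target == c
        if mask.any():
            accs.append((pred[mask] == c).mean())
    return float(np.mean(accs))

target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])   # ground-truth mask (1 = tumor)
pred   = np.array([[0, 0, 1, 0],
                   [0, 1, 1, 1]])   # predicted mask
acc = class_average_pixel_accuracy(pred, target)  # 3/4 per class -> 0.75
```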
Inputting the picture into a pre-trained MPNet deep learning model to identify the tumor pattern in the bladder picture, wherein the identification comprises the following steps:
extracting a network based on the backbone features to obtain a feature map of the picture;
inputting the characteristic diagram into a benign and malignant diagnosis submodel, and judging whether benign lesions or malignant tumors occur in the bladder in the image;
and inputting the characteristic map into a bladder tumor segmentation sub-model, and identifying and segmenting the position of the bladder with tumor in the picture.
Fig. 3 is a schematic structural diagram of an intelligent segmentation device for cystoscopic tumors based on deep learning according to an embodiment of the present invention. The device provided by the application includes:
an acquiring module 31, configured to acquire a cystoscope bladder picture;
the deep learning module 32 is configured to input the picture into a pre-trained deep learning model to obtain a result of identifying a tumor pattern in the bladder picture;
and the output module 33 is used for outputting the identification result for reference of related personnel.
Fig. 4 is a schematic structural diagram of an intelligent segmentation device for a cystoscope tumor based on deep learning according to an embodiment of the present invention. Referring to fig. 4, an intelligent segmentation apparatus for cystoscopic tumor based on deep learning includes:
A processor 41, and a memory 42 connected to the processor;
the memory 42 is configured to store a computer program for performing at least the deep learning based cystoscopic intelligent segmentation method of the present application;
the processor 41 is configured to call and execute the computer program in the memory.
The present application also provides a storage medium storing a computer program, which when executed by a processor, implements the steps of the intelligent segmentation method for cystoscopic tumors based on deep learning according to the present application.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Alternate implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
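Claim 7 below describes a spatial multi-scale pooling module that extracts context from regions of different extent, and an upsampling module that aggregates the scales. This scheme resembles the pyramid pooling of PSPNet; the sketch below follows that interpretation, with pool sizes and channel counts chosen as illustrative assumptions rather than the patent's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialMultiScalePooling(nn.Module):
    """Sketch of a spatial multi-scale pooling module: pool the feature
    map at several grid sizes to capture context from regions of
    different extent, project each pooled map with a 1x1 conv, upsample
    it back to the input resolution, and concatenate everything so the
    segmentation head sees aggregated multi-scale context."""

    def __init__(self, in_ch=64, pool_sizes=(1, 2, 3, 6)):
        super().__init__()
        branch_ch = in_ch // len(pool_sizes)
        self.branches = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(s),
                          nn.Conv2d(in_ch, branch_ch, 1, bias=False),
                          nn.ReLU())
            for s in pool_sizes)

    def forward(self, x):
        h, w = x.shape[-2:]
        # Upsampling step: resize each pooled branch back to the feature
        # map resolution, then aggregate with the original features.
        ups = [F.interpolate(b(x), size=(h, w), mode="bilinear",
                             align_corners=False) for b in self.branches]
        return torch.cat([x] + ups, dim=1)

ppm = SpatialMultiScalePooling(in_ch=64)
out = ppm(torch.randn(1, 64, 32, 32))
print(out.shape)
```

Concatenating the original features with the upsampled pooled branches doubles the channel count here (64 original plus 4 branches of 16 channels each), giving the segmentation head both local detail and global context.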

Claims (10)

1. An intelligent cystoscope tumor segmentation method based on deep learning is characterized by comprising the following steps:
acquiring a bladder picture under a cystoscope;
inputting the picture into a pre-trained deep learning model to identify a tumor pattern in the bladder picture;
and outputting the identification result for the reference of related personnel.
2. The method of intelligent segmentation of cystoscopic tumors based on deep learning of claim 1, wherein the deep learning model comprises: a benign and malignant diagnosis submodel and a bladder tumor segmentation submodel;
the benign and malignant diagnosis submodel is used for identifying the picture as benign lesion or malignant tumor;
the bladder tumor segmentation sub-model is used for identifying and segmenting the tumor position in the picture.
3. The method for intelligently segmenting the cystoscopic tumor based on deep learning of claim 2, wherein the training process of the pre-trained deep learning model comprises:
acquiring a first training sample;
training a preset benign and malignant diagnosis deep learning model based on the first training sample to obtain a benign and malignant diagnosis submodel;
obtaining a second training sample;
and training a preset bladder tumor segmentation deep learning model based on the second training sample to obtain a bladder tumor segmentation sub-model.
4. The method for intelligently segmenting a cystoscopic tumor based on deep learning of claim 3, wherein the obtaining of the first training sample comprises:
constructing a cystoscope image database and acquiring cystoscope images;
and classifying and labeling the cystoscope images with a 0-1 label, wherein a malignant tumor image is labeled 1 and a benign lesion image is labeled 0.
5. The deep learning-based intelligent segmentation method for cystoscopic tumors according to claim 3, wherein the obtaining of the second training sample comprises:
constructing a cystoscope image database and acquiring cystoscope images;
and labeling the cystoscope images by marking the tumor position in each cystoscope image.
6. The method for intelligently segmenting the cystoscopic tumor based on deep learning of claim 1, wherein the inputting the picture into a pre-trained deep learning model to identify the tumor pattern in the bladder picture comprises:
obtaining a feature map of the picture based on a backbone feature extraction network;
inputting the feature map into a benign and malignant diagnosis sub-model, and judging whether the bladder in the image shows a benign lesion or a malignant tumor;
and inputting the feature map into a bladder tumor segmentation sub-model, and identifying and segmenting the tumor position in the bladder picture.
7. The intelligent segmentation method for cystoscopic tumors based on deep learning of claim 1, wherein the bladder tumor segmentation sub-model adopts a spatial multi-scale pooling module and an upsampling module in the task of segmenting bladder tumor focal tissue; the spatial multi-scale pooling module extracts context information from different areas of the image, and the upsampling module aggregates information at various scales to improve the ability to acquire global feature information.
8. An intelligent segmentation device for cystoscopic tumors based on deep learning, comprising:
the acquisition module is used for acquiring a bladder picture under a cystoscope;
the deep learning module is used for inputting the picture into a pre-trained deep learning model to identify a tumor pattern in the bladder picture;
and the output module is used for outputting the identification result for the reference of related personnel.
9. An intelligent segmentation device for cystoscopic tumors based on deep learning, comprising:
a processor, and a memory coupled to the processor;
the memory is configured to store a computer program for performing at least the deep learning-based intelligent segmentation method for cystoscopic tumors according to any one of claims 1 to 7;
the processor is used for calling and executing the computer program in the memory.
10. A storage medium storing a computer program which, when executed by a processor, implements the steps of the method for intelligent segmentation of cystoscopic tumors based on deep learning according to any one of claims 1 to 7.
CN202110037412.2A 2021-01-12 2021-01-12 Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning Pending CN114764855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110037412.2A CN114764855A (en) 2021-01-12 2021-01-12 Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110037412.2A CN114764855A (en) 2021-01-12 2021-01-12 Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning

Publications (1)

Publication Number Publication Date
CN114764855A true CN114764855A (en) 2022-07-19

Family

ID=82363286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110037412.2A Pending CN114764855A (en) 2021-01-12 2021-01-12 Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning

Country Status (1)

Country Link
CN (1) CN114764855A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188436A (en) * 2023-03-03 2023-05-30 合肥工业大学 Cystoscope image classification method based on fusion of local features and global features
CN116188436B (en) * 2023-03-03 2023-11-10 合肥工业大学 Cystoscope image classification method based on fusion of local features and global features
CN117893792A (en) * 2023-12-14 2024-04-16 中山大学附属第一医院 A bladder tumor classification method based on MR signals and related devices

Similar Documents

Publication Publication Date Title
US11937973B2 (en) Systems and media for automatically diagnosing thyroid nodules
Dundar et al. Computerized classification of intraductal breast lesions using histopathological images
CN114782307B (en) Enhanced CT imaging rectal cancer staging auxiliary diagnosis system based on deep learning
CN112529894B (en) Thyroid nodule diagnosis method based on deep learning network
CN107582097A (en) Intelligent aid decision-making system based on multi-mode ultrasound omics
CN108388841A (en) Cervical biopsy area recognizing method and device based on multiple features deep neural network
CN110390678B (en) A tissue type segmentation method for colorectal cancer IHC stained images
CN112263217B (en) A Lesion Area Detection Method Based on Improved Convolutional Neural Network in Pathological Images of Non-melanoma Skin Cancer
CN110728239B (en) An automatic recognition system for gastric cancer enhanced CT images using deep learning
CN113764101B (en) Novel auxiliary chemotherapy multi-mode ultrasonic diagnosis system for breast cancer based on CNN
CN116630680B (en) Dual-mode image classification method and system combining X-ray photography and ultrasound
Songsaeng et al. Multi-scale convolutional neural networks for classification of digital mammograms with breast calcifications
CN114764855A (en) Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning
CN112071418B (en) Gastric cancer peritoneal metastasis prediction system and method based on enhanced CT image histology
KR102620046B1 (en) Method and system for breast ultrasonic image diagnosis using weakly-supervised deep learning artificial intelligence
Valente et al. A comparative study of deep learning methods for multi-class semantic segmentation of 2d kidney ultrasound images
Sun et al. Liver tumor segmentation and subsequent risk prediction based on Deeplabv3+
CN118736292A (en) A method, device and program product for auxiliary diagnosis of ovarian adnexal masses based on ultrasound
Ibrahim et al. Liver Multi-class Tumour Segmentation and Detection Based on Hyperion Pre-trained Models.
Al-Louzi et al. Progressive multifocal leukoencephalopathy lesion and brain parenchymal segmentation from MRI using serial deep convolutional neural networks
CN116934683B (en) Artificial intelligence-assisted ultrasound diagnosis of spleen trauma
CN118505726A (en) CT image liver based on deep learning and tumor segmentation method thereof
CN117974588A (en) A deep learning model, construction method and system based on multimodal ultrasound images
CN116705295A (en) Diagnosis and evaluation method after endometrial cancer conservation treatment based on pathology deep learning
Jia Polyps auto-detection in wireless capsule endoscopy images using improved method based on image segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination