CN113313699B - X-ray chest disease classification and positioning method based on weak supervision learning and electronic equipment - Google Patents
- Publication number: CN113313699B
- Application number: CN202110640995.8A
- Authority
- CN
- China
- Prior art keywords
- chest
- ray
- features
- feature map
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention provides an X-ray chest disease classification and positioning method based on weakly supervised learning, and an electronic device. The method comprises the following steps: collecting chest X-ray films; extracting features from the collected chest X-ray films; and encoding the extracted features and tagging image patches to serve as an input sequence for extracting global semantic information. The encoded features are upsampled by a decoder, the upsampled feature map is combined with the high-resolution CNN feature map, a feature map of the same size as the input image is output, and the lesions shown in the input chest X-ray film are classified and localized. The invention can localize positive cases more accurately while maintaining classification accuracy.
Description
Technical Field
The disclosure relates to the technical field of medical imaging, and in particular to an X-ray chest disease classification and positioning method based on weakly supervised learning, and an electronic device.
Background
Chest X-ray examination is the most common imaging examination worldwide and plays a decisive role in the screening, diagnosis and management of many life-threatening diseases. With the development of the field of computer vision, an automatic chest diagnostic system can be developed using techniques such as deep learning; such a system can improve the workflow of radiology departments, support clinical decision-making for doctors, and also play a role in medical scenarios such as large-scale disease screening. However, deploying such a system in medical settings remains a challenge, because the system is required not only to classify a disease as positive or negative, but also to localize the disease in order to interpret the classification results. Moreover, since most of the data used to develop such systems contains only global, image-level annotations, the localization model can only be developed using weakly supervised techniques.
The field of image analysis is increasingly developing systems that can classify and localize diseases from a weakly annotated training set. It has been shown that, even when no position information for the detection target is provided, the convolution units of the layers of a convolutional neural network (CNN) can act as target detectors, so that target detection can still be achieved. However, common classification networks are generally composed of multiple convolutional layers followed by a fully connected layer, through which the output is obtained to optimize the network parameters. The fully connected layer loses the localization capability of the convolutional layers, which affects both the localization of positive cases and the classification accuracy.
Disclosure of Invention
In view of the above, the embodiments of the present disclosure provide an X-ray chest disease classification and positioning method based on weakly supervised learning, and an electronic device. The method combines automatic chest diagnosis with disease classification and localization, and can accurately localize positive cases to support or interpret the classification results.
In order to achieve the above object, the present invention provides the following technical solutions:
An X-ray chest disease classification and positioning method based on weakly supervised learning comprises the following steps:
Collecting chest X-ray films, extracting features from the collected chest X-ray films, and encoding the extracted features and tagging image patches to serve as an input sequence for extracting global semantic information;
Upsampling the encoded features with a decoder, and combining the upsampled feature map with the high-resolution CNN feature map to output a feature map of the same size as the input image.
Further, a DenseNet network is used to perform feature extraction on the collected chest radiographs.
Further, the extracted features are encoded and image patches are tagged by a Transformer model, to serve as an input sequence for extracting global semantic information.
Further, the Transformer model comprises Multihead Self-Attention and Multi-Layer Perceptron modules.
Further, the process of upsampling the encoded features by the decoder specifically includes: restoring the output of the Transformer model to spatial features, and introducing a cascade up-sampler to decode the spatial features.
Further, the method also includes, prior to the final output layer, performing global average pooling on the upsampled feature map and treating the pooled features as the input features of the fully connected layer that produces the classification.
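The cascade up-sampler with skip connections and the global-average-pooling classification head described above can be sketched roughly as follows. This is a minimal NumPy illustration under assumed toy shapes and nearest-neighbour upsampling; it is not the patented implementation.

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def decode_with_skip(encoded, skip):
    """One cascade stage: upsample, then fuse with the high-resolution
    CNN (skip-connection) feature map by channel concatenation."""
    up = upsample2x(encoded)
    return np.concatenate([up, skip], axis=0)

def classify_by_gap(fmap, fc_weights):
    """Global average pooling over the spatial dims, followed by a fully
    connected layer that produces per-class scores."""
    pooled = fmap.mean(axis=(1, 2))          # (C,)
    return fc_weights @ pooled               # (num_classes,)

# toy shapes: 4-channel encoded map at 8x8, 4-channel skip map at 16x16
encoded = np.random.rand(4, 8, 8)
skip = np.random.rand(4, 16, 16)
fused = decode_with_skip(encoded, skip)      # (8, 16, 16)
scores = classify_by_gap(fused, np.random.rand(5, 8))  # 5 disease classes
```

Because classification runs on the pooled upsampled map rather than through a flattening fully connected layer, spatial information survives to the localization stage.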
The invention also provides an electronic device comprising a memory and a processor; the processor runs a program corresponding to executable program code stored in the memory by reading the code, so as to implement the above weakly supervised learning-based X-ray chest disease classification and positioning method.
The chest X-ray based disease classification and weakly supervised positioning method and the electronic device can localize positive cases more accurately while maintaining classification accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
FIG. 1 is a diagram of the network architecture used in the method of the present invention;
FIG. 2 shows classification and detection results for cardiomegaly in an embodiment of the present invention;
FIG. 3 shows classification and detection results for pulmonary edema in an embodiment of the present invention;
FIG. 4 shows classification and detection results for lung consolidation in an embodiment of the present invention;
FIG. 5 shows classification and detection results for atelectasis in an embodiment of the present invention;
FIG. 6 shows classification and detection results for pleural effusion in an embodiment of the present invention.
Detailed Description
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Other advantages and effects of the present disclosure will become readily apparent to those skilled in the art from the following disclosure, which describes embodiments of the present disclosure by way of specific examples. It will be apparent that the described embodiments are merely some, but not all embodiments of the present disclosure. The disclosure may be embodied or practiced in other different specific embodiments, and details within the subject specification may be modified or changed from various points of view and applications without departing from the spirit of the disclosure. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concepts of the disclosure by way of illustration, and only the components related to the disclosure are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided in order to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
As shown in fig. 1, an embodiment of the present disclosure provides a chest X-ray based disease classification and weakly supervised positioning method, wherein DenseNet is used to extract features; a Transformer encodes the extracted features and tags image patches, which are used as an input sequence for extracting global semantic information; the input sequence is mapped into a D-dimensional embedding space by linear projection; and the Transformer encoding is implemented with a module containing L layers of Multihead Self-Attention and Multi-Layer Perceptron. For localization purposes, the invention restores the output of the Transformer to spatial features and introduces a cascade up-sampler consisting of up-sampling stages that decode the spatial features, outputting a feature map of the same size as the input image by aggregating, via skip connections, the features of the DenseNet encoder at different resolution levels. Prior to the final output layer, the invention performs global average pooling on the up-sampled feature map and treats the pooled features as the input of the fully connected layer that produces the classification. In the localization stage, the importance of each image region — the so-called class activation map — can be identified by projecting the weights of the output layer back onto the up-sampled feature map, resulting in accurate localization of each class.
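The class activation map step just described — projecting the output-layer weights back onto the up-sampled feature map — can be sketched as follows. The shapes, random values and the 0.5 threshold are illustrative assumptions only, not figures from the patent.

```python
import numpy as np

def class_activation_map(fmap, fc_weights, class_idx):
    """Weight each channel of the (C, H, W) feature map by the output-layer
    weight for the chosen class and sum over channels, yielding an (H, W)
    map whose large values mark the regions driving that class score."""
    w = fc_weights[class_idx]                      # (C,)
    return np.tensordot(w, fmap, axes=([0], [0]))  # (H, W)

fmap = np.random.rand(8, 32, 32)   # up-sampled feature map
fc_w = np.random.rand(5, 8)        # output-layer weights, 5 disease classes
cam = class_activation_map(fmap, fc_w, class_idx=2)

# thresholding the min-max-normalized map gives a coarse lesion mask
cam_norm = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
mask = cam_norm > 0.5
```

Because the fully connected layer acts on globally pooled features, the same weights apply at every spatial position, which is what makes this back-projection valid.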
The invention provides a detection model that combines the advantages of the Transformer and U-Net, serving as a powerful auxiliary technology for automatic chest diagnosis. On the one hand, the Transformer encodes tagged image patches from a convolutional neural network (CNN) feature map as an input sequence to extract global context. On the other hand, the Transformer-encoded features are upsampled with a decoder and then combined with the high-resolution CNN feature map to achieve accurate localization.
The method of the invention was tested on the CheXpert dataset with the following selected pathologies: atelectasis, cardiomegaly, consolidation, edema and pleural effusion. The published training set is used for training and for model-selection validation, and the published validation set, after being annotated with bounding boxes by a doctor, is used for model testing. To evaluate the performance of the proposed algorithm, the invention computes the AUC for each pathology and the average AUC to evaluate the classification accuracy of the model, and computes the IoU (intersection over union) for each pathology and the average IoU to evaluate the localization accuracy of the model.
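The IoU between a predicted and a ground-truth bounding box — the localization metric used above — can be computed as follows. This is the standard formulation for axis-aligned boxes, not code from the patent.

```python
def bbox_iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(bbox_iou((0, 0, 10, 10), (0, 0, 10, 10)))    # identical boxes -> 1.0
print(bbox_iou((0, 0, 10, 10), (20, 20, 30, 30)))  # disjoint boxes -> 0.0
```

The per-pathology IoU in Table 2 is then the average of this score over the test images for each disease.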
Table 1 compares the AUC classification results of the method of the invention with those of two baseline models, Inception-v3 and DenseNet121. Table 2 compares the localization results of the method of the invention with those of other methods that use a saliency map. The results show that the method can localize positive cases more accurately while maintaining classification accuracy.
Table 1:

| Model | Cardiomegaly | Pulmonary edema | Consolidation | Atelectasis | Pleural effusion | Average AUC (%) |
|---|---|---|---|---|---|---|
| Inception-v3 | 0.841 | 0.876 | 0.891 | 0.833 | 0.921 | 87.24 |
| DenseNet121 | 0.818 | 0.828 | 0.938 | 0.934 | 0.928 | 88.92 |
| Method of the invention | 0.841 | 0.914 | 0.911 | 0.879 | 0.931 | 90.06 |
Table 2:

| Method | Cardiomegaly | Pulmonary edema | Consolidation | Atelectasis | Pleural effusion | Average IoU |
|---|---|---|---|---|---|---|
| ChestX-ray8 | 0.32 | 0.24 | 0.12 | 0.27 | 0.19 | 0.23 |
| RpSalWeaklyDet | 0.41 | 0.24 | 0.17 | 0.20 | 0.25 | 0.25 |
| Method of the invention | 0.44 | 0.34 | 0.45 | 0.45 | 0.27 | 0.39 |
Figs. 2-6 are visual examples of the classification and detection results of the present invention, in which white boxes denote ground-truth bounding boxes and dark boxes denote predicted bounding boxes. The visual examples show the predicted probability and localization effect of the method on diseases in X-ray films under weak supervision; it can be seen that the method can localize positive cases more accurately on the premise of accurate classification, thereby providing visual support for the classification results.
The method can be used in clinical scenarios such as large-scale X-ray film screening and radiologist-assisted diagnosis. Implementation details are described here using assisted diagnosis as an example. When a radiologist checks an X-ray film for the five diseases of atelectasis, cardiomegaly, consolidation, edema and pleural effusion, the method can predict the probability of each disease and frame the disease location, providing auxiliary diagnosis for the radiologist. The feature extraction network used in the method of the invention is DenseNet121, a densely connected convolutional neural network with outstanding performance in computer vision tasks. The Transformer encoding is implemented here with a module containing L layers of Multihead Self-Attention and Multi-Layer Perceptron. The Transformer network is designed for sequence-to-sequence prediction; it dispenses with convolution operators and relies entirely on attention mechanisms, and the Multihead Self-Attention and Multi-Layer Perceptron modules form the main structure of the Transformer network, responsible for attention modeling and perception respectively. The Transformer network introduces a self-attention mechanism in the encoder, models global semantic information, and strengthens the classification of related image regions. In the decoding stage, the U-Net architecture is used to decode and upsample the features, which are combined with the high-resolution DenseNet121 feature map, thereby providing a high-resolution feature map for the global pooling layer. Finally, a saliency map is generated using the global pooling layer for target detection.
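The patch tagging and linear projection into a D-dimensional embedding space that feed the Transformer can be sketched roughly as follows. The patch size, embedding dimension D=32 and feature-map shape are illustrative assumptions, not values from the patent.

```python
import numpy as np

def patchify(fmap, p=2):
    """Split a (C, H, W) CNN feature map into non-overlapping p x p patches
    and flatten each patch into a token vector of length C*p*p."""
    c, h, w = fmap.shape
    tokens = []
    for i in range(0, h, p):
        for j in range(0, w, p):
            tokens.append(fmap[:, i:i + p, j:j + p].reshape(-1))
    return np.stack(tokens)                  # (num_patches, C*p*p)

rng = np.random.default_rng(0)
fmap = rng.random((16, 8, 8))                # DenseNet-style feature map
tokens = patchify(fmap, p=2)                 # (16, 64): 16 patch tokens
proj = rng.random((64, 32))                  # learned linear projection to D=32
embedded = tokens @ proj                     # input sequence for the Transformer
```

Each row of `embedded` then plays the role of one element of the input sequence over which the multi-head self-attention layers model global semantic relations.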
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the disclosure are intended to be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (3)
1. An X-ray chest disease classification and positioning method based on weakly supervised learning, characterized by comprising the following steps:
Collecting chest X-ray films, extracting features from the collected chest X-ray films, and tagging image patches for the extracted features by using a Transformer model, to serve as an input sequence for extracting global semantic information; wherein a DenseNet network is used to extract features from the collected chest X-ray films, and the chest X-ray films are positive-case films;
Upsampling the encoded features with a decoder, combining the upsampled feature map with a high-resolution CNN feature map, outputting a feature map of the same size as the input image, and classifying and localizing the lesions shown in the input chest X-ray film;
wherein the process of upsampling the encoded features by the decoder specifically includes: restoring the output of the Transformer model to spatial features, and introducing a cascade up-sampler to decode the spatial features;
and wherein the lesions are classified and localized as follows: prior to the final output layer, performing global average pooling on the upsampled feature map to generate a saliency map as the features of the fully connected layer that produces the classification, wherein the features of the DenseNet encoder are aggregated at different resolution levels via skip connections to provide a high-resolution feature map for the global pooling layer; and identifying the importance of image regions by projecting the weights of the output layer back onto the upsampled feature map, thereby producing accurate localization of the lesion class on the basis of accurate classification of the lesion class.
2. The weakly supervised learning based X-ray chest disease classification and positioning method as claimed in claim 1, wherein the Transformer model comprises Multihead Self-Attention and Multi-Layer Perceptron modules.
3. An electronic device, comprising a memory and a processor; wherein the processor runs a program corresponding to executable program code stored in the memory by reading the code, for implementing the weakly supervised learning based X-ray chest disease classification and positioning method as claimed in any one of claims 1-2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110640995.8A CN113313699B (en) | 2021-06-09 | 2021-06-09 | X-ray chest disease classification and positioning method based on weak supervision learning and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113313699A CN113313699A (en) | 2021-08-27 |
CN113313699B true CN113313699B (en) | 2024-08-02 |
Family
ID=77377854
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110640995.8A Active CN113313699B (en) | 2021-06-09 | 2021-06-09 | X-ray chest disease classification and positioning method based on weak supervision learning and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113313699B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113888466B (en) * | 2021-09-03 | 2024-12-13 | 武汉科技大学 | A pulmonary nodule image detection method and system based on CT images |
CN114170232B (en) * | 2021-12-02 | 2024-01-26 | 匀熵智能科技(无锡)有限公司 | Transformer-based X-ray chest radiography automatic diagnosis and new crown infection area distinguishing method |
CN114445665B (en) * | 2022-01-25 | 2025-04-08 | 中国人民解放军网络空间部队信息工程大学 | Hyperspectral image classification method based on non-local U-shaped network enhanced by Transformer |
CN117522877B (en) * | 2024-01-08 | 2024-04-05 | 吉林大学 | A method for constructing a chest multi-disease diagnosis model based on visual self-attention |
CN118799296A (en) * | 2024-07-26 | 2024-10-18 | 南充市中心医院 | A method and system for intelligent analysis of chest X-rays based on deep self-supervised comparative learning |
CN119249912A (en) * | 2024-12-03 | 2025-01-03 | 华东交通大学 | A method for predicting the sound field distribution of an electromagnetic ultrasonic transducer and an electronic device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111563523A (en) * | 2019-02-14 | 2020-08-21 | 西门子医疗有限公司 | COPD classification using machine trained anomaly detection |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11436725B2 (en) * | 2019-11-15 | 2022-09-06 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems, methods, and apparatuses for implementing a self-supervised chest x-ray image analysis machine-learning model utilizing transferable visual words |
CN111429407B (en) * | 2020-03-09 | 2023-05-09 | 清华大学深圳国际研究生院 | Chest X-ray disease detection device and method based on double-channel separation network |
CN112116571A (en) * | 2020-09-14 | 2020-12-22 | 中国科学院大学宁波华美医院 | An automatic localization method for X-ray lung diseases based on weakly supervised learning |
-
2021
- 2021-06-09 CN CN202110640995.8A patent/CN113313699B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111563523A (en) * | 2019-02-14 | 2020-08-21 | 西门子医疗有限公司 | COPD classification using machine trained anomaly detection |
Non-Patent Citations (1)
Title |
---|
Jieneng Chen et al., "TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation," arXiv:2102.04306v1 [cs.CV], 2021, pp. 1-13. *
Also Published As
Publication number | Publication date |
---|---|
CN113313699A (en) | 2021-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113313699B (en) | X-ray chest disease classification and positioning method based on weak supervision learning and electronic equipment | |
Sakib et al. | DL-CRC: deep learning-based chest radiograph classification for COVID-19 detection: a novel approach | |
Fan et al. | Inf-net: Automatic covid-19 lung infection segmentation from ct images | |
Pustokhin et al. | An effective deep residual network based class attention layer with bidirectional LSTM for diagnosis and classification of COVID-19 | |
CN107644419A (en) | Method and apparatus for analyzing medical image | |
JP7503213B2 (en) | Systems and methods for evaluating pet radiological images | |
Khan et al. | Classification and region analysis of COVID-19 infection using lung CT images and deep convolutional neural networks | |
Wang et al. | Automated chest screening based on a hybrid model of transfer learning and convolutional sparse denoising autoencoder | |
Yerukalareddy et al. | Brain tumor classification based on mr images using GAN as a pre-trained model | |
Tang et al. | Detection of COVID-19 using deep convolutional neural network on chest X-ray (CXR) images | |
Majdi et al. | Deep learning classification of chest X-ray images | |
Souid et al. | Xception-ResNet autoencoder for pneumothorax segmentation | |
Kaliyugarasan et al. | Pulmonary nodule classification in lung cancer from 3D thoracic CT scans using fastai and MONAI | |
Vadisetty et al. | Generative AI: A Pix2pix-GAN-Based Machine Learning Approach for Robust and Efficient Lung Segmentation | |
Pérez-García et al. | RadEdit: stress-testing biomedical vision models via diffusion image editing | |
Chaisangmongkon et al. | External validation of deep learning algorithms for cardiothoracic ratio measurement | |
Waseem Sabir et al. | FibroVit—Vision transformer-based framework for detection and classification of pulmonary fibrosis from chest CT images | |
Badrahadipura et al. | COVID-19 detection in chest X-rays using inception ResNet-v2 | |
Nawaz et al. | Efficient-ECGNet framework for COVID-19 classification and correlation prediction with the cardio disease through electrocardiogram medical imaging | |
CN117393100A (en) | Diagnostic report generation method, model training method, system, equipment and medium | |
CN115564763A (en) | Thyroid ultrasound image processing method, device, medium and electronic equipment | |
Shanthi et al. | A Novel Method for Pneumothorax Diagnosis and Segmentation Using Deep Convolution Neural Network | |
Saleh et al. | How GANs assist in Covid-19 pandemic era: a review | |
Khan | Deep learning based medical X-ray image recognition and classification | |
Dash et al. | CoVaD-GAN: An efficient Data Augmentation technique for COVID CXR Image Classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||