Topic Editors

Dr. Mizuho Nishio
Department of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuo-ku, Kobe 650-0017, Japan
Dr. Koji Fujimoto
Department of Advanced Imaging in Medical Magnetic Resonance, Kyoto University, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan

Deep Learning for Medical Image Analysis and Medical Natural Language Processing

Abstract submission deadline: closed (20 August 2024)
Manuscript submission deadline: 20 November 2024
Viewed by 20886

Topic Information

Dear Colleagues,

This Special Issue focuses on the application of deep learning to medical image analysis and medical natural language processing. We welcome original research and review papers related to the topics listed below. In particular, we welcome papers in which medical image analysis and medical natural language processing are combined in multi-modal deep learning.
Research Topics:

  • Cutting-edge deep learning methodologies and algorithms for medical image analysis and medical natural language processing.
  • Clinical applications of deep learning to medical image analysis and medical natural language processing, mainly focused on cancer diagnosis and treatment.
  • Open-source deep learning software used for cancer diagnosis and treatment.
  • Open data for medical image analysis and medical natural language processing that are useful for the development and validation of deep learning.
  • Reproducibility and validation studies of open-source deep learning software used for cancer diagnosis and treatment.

Dr. Mizuho Nishio
Dr. Koji Fujimoto
Topic Editors

Keywords

  • deep learning
  • medical image analysis
  • natural language processing
  • medical imaging
  • cancer

Participating Journals

Journal Name                | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci)  | 2.5           | 5.3       | 2011          | 17.8 days               | CHF 2400
Cancers (cancers)           | 4.5           | 8.0       | 2009          | 16.3 days               | CHF 2900
Diagnostics (diagnostics)   | 3.0           | 4.7       | 2011          | 20.5 days               | CHF 2600
Tomography (tomography)     | 2.2           | 2.7       | 2015          | 23.9 days               | CHF 2400

Preprints.org is a multidisciplinary preprint platform dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (8 papers)

21 pages, 24110 KiB  
Article
Magnifying Networks for Histopathological Images with Billions of Pixels
by Neofytos Dimitriou, Ognjen Arandjelović and David J. Harrison
Diagnostics 2024, 14(5), 524; https://doi.org/10.3390/diagnostics14050524 - 1 Mar 2024
Viewed by 1054
Abstract
Amongst the other benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in the realization of this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature—which rely on the splitting of the original images into small patches—and introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from an analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely, using only global, slide-level labels. The results from our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets—as well as the proposed optimization framework—in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches.
Figures 1–7 and A1–A3 (captions only): interaction between a MagNet and a WSI across magnification levels; a single magnifying layer outputting two patches; randomly sampled WSIs from each hospital; gradient analysis of bi-linear sampling vs. the linearized multi-sampling approach; a forward pass of a WSI with micro-metastasis through a three-layer MagNet; a WSI in which the four-layer MagNet identified a cancerous region missed by the three-layer MagNet; cancerous regions of macro- and micro-metastasis identified by the three- and four-layer MagNets; and PyTorch implementations of Conv2D, a Branch, and inference of the STN affine parameters.
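The appendix figures refer to PyTorch implementations of the MagNet building blocks, including inference of affine parameters for the spatial transformer (STN). As a rough illustration of that mechanism only, and not the authors' code, the sketch below shows how a predicted 2×3 affine matrix can differentiably crop and zoom a region of interest with torch.nn.functional.affine_grid and grid_sample; the function name, tensor sizes, and the example theta are assumptions.

```python
import torch
import torch.nn.functional as F

def affine_crop(image, theta, out_size=(224, 224)):
    """Differentiably crop/zoom a region of `image` (N, C, H, W) using a
    2x3 affine matrix `theta` (N, 2, 3), as a spatial transformer would."""
    n, c, _, _ = image.shape
    grid = F.affine_grid(theta, size=(n, c, *out_size), align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)

# Example: attend to the central region at half the side length of a slide tile.
tile = torch.rand(1, 3, 896, 896)                      # stand-in for a WSI tile
theta = torch.tensor([[[0.5, 0.0, 0.0],
                       [0.0, 0.5, 0.0]]])              # scale 0.5, no translation
patch = affine_crop(tile, theta)                       # -> (1, 3, 224, 224)
```

Stacking such crops, each driven by learned attention, gives the coarse-to-fine analysis described in the abstract.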
16 pages, 4554 KiB  
Article
Identifying Diabetic Retinopathy in the Human Eye: A Hybrid Approach Based on a Computer-Aided Diagnosis System Combined with Deep Learning
by Şükran Yaman Atcı, Ali Güneş, Metin Zontul and Zafer Arslan
Tomography 2024, 10(2), 215-230; https://doi.org/10.3390/tomography10020017 - 5 Feb 2024
Cited by 5 | Viewed by 1504
Abstract
Diagnosing and screening for diabetic retinopathy is a well-known issue in the biomedical field. A component of computer-aided diagnosis that has advanced significantly over the past few years as a result of the development and effectiveness of deep learning is the use of medical imagery from a patient’s eye to identify the damage caused to blood vessels. Issues with unbalanced datasets, incorrect annotations, a lack of sample images, and improper performance evaluation measures have negatively impacted the performance of deep learning models. Using three benchmark datasets of diabetic retinopathy, we conducted a detailed comparative study of various state-of-the-art approaches to address the effect caused by class imbalance, with precision scores of 93%, 89%, 81%, 76%, and 96%, respectively, for normal, mild, moderate, severe, and DR phases. The analyses of the hybrid modeling, including CNN analysis and SHAP model derivation results, are compared at the end of the paper, and ideal hybrid modeling strategies for deep learning classification models for automated DR detection are identified.
Figures 1–8 (captions only): retinopathy images from the Kaggle DR dataset with varying illumination; grayscale conversion and sigmaX (Gaussian-based) enhancement; the proposed DR diagnosis system using an average-weight ensemble of backbone CNN models; second-order Kappa/LSQ weighting; detection of DR points from CNN segmentation; SHAP outputs for the five DR levels; the model loss curve; and segmentation results of the hybrid CNN–SHAP model on the Kaggle dataset.
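The Figure 2 caption mentions grayscale conversion and a sigmaX (Gaussian-based) enhancement step. The paper's exact preprocessing is not reproduced here; the sketch below is only a plausible reconstruction of that kind of step, based on the Gaussian-blur subtraction commonly used in Kaggle DR pipelines, with assumed parameter values.

```python
import cv2

def enhance_fundus(path, sigma_x=10, out_size=224):
    """Grayscale conversion plus Gaussian-blur subtraction to even out
    illumination differences between retinal images (assumed parameters)."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    img = cv2.resize(img, (out_size, out_size))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (0, 0), sigma_x)
    # addWeighted(src1, alpha, src2, beta, gamma): 4*gray - 4*blur + 128
    return cv2.addWeighted(gray, 4, blur, -4, 128)
```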
13 pages, 2117 KiB  
Article
Real-Time Protozoa Detection from Microscopic Imaging Using YOLOv4 Algorithm
by İdris Kahraman, İsmail Rakıp Karaş and Muhammed Kamil Turan
Appl. Sci. 2024, 14(2), 607; https://doi.org/10.3390/app14020607 - 10 Jan 2024
Viewed by 2399
Abstract
Protozoa detection and classification from freshwaters and microscopic imaging are critical components in environmental monitoring, parasitology, science, biological processes, and scientific research. Bacterial and parasitic contamination of water plays an important role in public health. Conventional methods often rely on manual identification, resulting in time-consuming analyses and limited scalability. In this study, we propose a real-time protozoa detection framework using the YOLOv4 algorithm, a state-of-the-art deep learning model known for its exceptional speed and accuracy. Our dataset consists of objects of the protozoa species, such as Bdelloid Rotifera, Stylonychia Pustulata, Paramecium, Hypotrich Ciliate, Colpoda, Lepocinclis Acus, and Clathrulina Elegans, which are in freshwaters and have different shapes, sizes, and movements. One of the major properties of our work is to create a dataset by forming different cultures from various water sources like rainwater and puddles. Our network architecture is carefully tailored to optimize the detection of protozoa, ensuring precise localization and classification of individual organisms. To validate our approach, extensive experiments are conducted using real-world microscopic image datasets. The results demonstrate that the YOLOv4-based model achieves outstanding detection accuracy and significantly outperforms traditional methods in terms of speed and precision. The real-time capabilities of our framework enable rapid analysis of large-scale datasets, making it highly suitable for dynamic environments and time-sensitive applications. Furthermore, we introduce a user-friendly interface that allows researchers and environmental professionals to effortlessly deploy our YOLOv4-based protozoa detection tool. We obtained an F1-score of 0.95, precision of 0.92, sensitivity of 0.98, and mAP of 0.9752 as evaluation metrics. The proposed model achieved 97% accuracy. After reaching high efficiency, a desktop application was developed to allow testing of the model. The proposed framework’s speed and accuracy have significant implications for various fields, ranging from a support tool for paramesiology/parasitology studies to water quality assessments, offering a powerful tool to enhance our understanding and preservation of ecosystems.
Figures 1–7 (captions only): schematic of the YOLO algorithm; intersection over union (IoU) as the detection performance measure; the camera setup for examining protozoa; protozoan types in the dataset; labeling with MakeSense; a detection test image; and the mAP/loss curves.
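The reported evaluation rests on IoU-based matching (Figure 2) and the usual detection metrics: precision, sensitivity (recall), and F1. As a reminder of how these quantities are computed, here is a minimal generic sketch, not the authors' code; the (x1, y1, x2, y2) box format and the example counts are assumptions chosen only to mirror the reported order of magnitude.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_scores(tp, fp, fn):
    """Precision, sensitivity (recall) and F1 from matched detections."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 = 0.1428...
print(detection_scores(tp=98, fp=8, fn=2))   # roughly (0.92, 0.98, 0.95)
```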
12 pages, 895 KiB  
Article
Artificial Intelligence and Panendoscopy—Automatic Detection of Clinically Relevant Lesions in Multibrand Device-Assisted Enteroscopy
by Francisco Mendes, Miguel Mascarenhas, Tiago Ribeiro, João Afonso, Pedro Cardoso, Miguel Martins, Hélder Cardoso, Patrícia Andrade, João P. S. Ferreira, Miguel Mascarenhas Saraiva and Guilherme Macedo
Cancers 2024, 16(1), 208; https://doi.org/10.3390/cancers16010208 - 1 Jan 2024
Cited by 1 | Viewed by 1570
Abstract
Device-assisted enteroscopy (DAE) is capable of evaluating the entire gastrointestinal tract, identifying multiple lesions. Nevertheless, DAE’s diagnostic yield is suboptimal. Convolutional neural networks (CNN) are multi-layer architecture artificial intelligence models suitable for image analysis, but there is a lack of studies about their application in DAE. Our group aimed to develop a multidevice CNN for panendoscopic detection of clinically relevant lesions during DAE. In total, 338 exams performed in two specialized centers were retrospectively evaluated, with 152 single-balloon enteroscopies (Fujifilm®, Porto, Portugal), 172 double-balloon enteroscopies (Olympus®, Porto, Portugal) and 14 motorized spiral enteroscopies (Olympus®, Porto, Portugal); then, 40,655 images were divided in a training dataset (90% of the images, n = 36,599) and testing dataset (10% of the images, n = 4066) used to evaluate the model. The CNN’s output was compared to an expert consensus classification. The model was evaluated by its sensitivity, specificity, positive (PPV) and negative predictive values (NPV), accuracy and area under the precision recall curve (AUC-PR). The CNN had an 88.9% sensitivity, 98.9% specificity, 95.8% PPV, 97.1% NPV, 96.8% accuracy and an AUC-PR of 0.97. Our group developed the first multidevice CNN for panendoscopic detection of clinically relevant lesions during DAE. The development of accurate deep learning models is of utmost importance for increasing the diagnostic yield of DAE-based panendoscopy.
Figures 1–3 (captions only): the study flow chart for the training and testing phases; the CNN output, with estimated probabilities for correct and incorrect predictions; and heatmaps localizing clinically relevant lesions together with the associated prediction certainty.
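The CNN is evaluated with sensitivity, specificity, PPV, NPV, and accuracy, all of which derive from a 2×2 confusion matrix. The sketch below shows the arithmetic with arbitrary example counts; the paper reports rates, not the raw matrix.

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Arbitrary example counts, not the study's data.
print(binary_metrics(tp=90, fp=4, fn=10, tn=396))
# {'sensitivity': 0.9, 'specificity': 0.99, 'ppv': 0.957..., 'npv': 0.975..., 'accuracy': 0.972}
```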
18 pages, 32324 KiB  
Article
DTR-GAN: An Unsupervised Bidirectional Translation Generative Adversarial Network for MRI-CT Registration
by Aolin Yang, Tiejun Yang, Xiang Zhao, Xin Zhang, Yanghui Yan and Chunxia Jiao
Appl. Sci. 2024, 14(1), 95; https://doi.org/10.3390/app14010095 - 21 Dec 2023
Cited by 2 | Viewed by 1224
Abstract
Medical image registration is a fundamental and indispensable element in medical image analysis, which can establish spatial consistency among corresponding anatomical structures across various medical images. Since images with different modalities exhibit different features, it remains a challenge to find their exact correspondence. Most of the current methods based on image-to-image translation cannot fully leverage the available information, which will affect the subsequent registration performance. To solve the problem, we develop an unsupervised multimodal image registration method named DTR-GAN. Firstly, we design a multimodal registration framework via a bidirectional translation network to transform the multimodal image registration into a unimodal registration, which can effectively use the complementary information of different modalities. Then, to enhance the quality of the transformed images in the translation network, we design a multiscale encoder–decoder network that effectively captures both local and global features in images. Finally, we propose a mixed similarity loss to encourage the warped image to be closer to the target image in deep features. We extensively evaluate our method on abdominal MRI-CT registration tasks against advanced unsupervised multimodal image registration approaches. The results indicate that DTR-GAN obtains a competitive performance compared to other methods in MRI-CT registration. Compared with DFR on the Learn2Reg dataset, DTR-GAN not only improves the dice similarity coefficient (DSC) of MRI-CT and CT-MRI registration by 2.35% and 2.08%, respectively, but also decreases the average symmetric surface distance (ASD) by 0.33 mm and 0.12 mm.
Figures 1–11 (captions only): the DTR-GAN framework with its TP, R, and TR networks; the registration network; the translation network TP; instances from the Learn2Reg and CHAOS datasets; registration results on both datasets (warped images, deformation fields, checkerboard grids, overlapping images); a visual comparison of registration accuracy across methods; boxplots of DSC and ASD for MRI-CT (Learn2Reg) and T1-CT (CHAOS) registration; and comparisons of the variant methods on both datasets.
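Registration quality is summarized with the dice similarity coefficient (DSC) and the average symmetric surface distance (ASD). For reference, a minimal DSC computation on binary segmentation masks looks like the following generic sketch, which is not the authors' evaluation code:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

a = np.zeros((64, 64), dtype=np.uint8); a[16:48, 16:48] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[20:52, 20:52] = 1
print(dice(a, b))  # overlap 28x28 -> 2*784 / (1024 + 1024) = 0.765625
```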
13 pages, 5402 KiB  
Review
Assessment of Computed Tomography Perfusion Research Landscape: A Topic Modeling Study
by Burak B. Ozkara, Mert Karabacak, Konstantinos Margetis, Vivek S. Yedavalli, Max Wintermark and Sotirios Bisdas
Tomography 2023, 9(6), 2016-2028; https://doi.org/10.3390/tomography9060158 - 1 Nov 2023
Viewed by 3431
Abstract
The number of scholarly articles continues to rise. The continuous increase in scientific output poses a challenge for researchers, who must devote considerable time to collecting and analyzing these results. The topic modeling approach emerges as a novel response to this need. Considering the swift advancements in computed tomography perfusion (CTP), we deem it essential to launch an initiative focused on topic modeling. We conducted a comprehensive search of the Scopus database from 1 January 2000 to 16 August 2023, to identify relevant articles about CTP. Using the BERTopic model, we derived a group of topics along with their respective representative articles. For the 2020s, linear regression models were used to identify and interpret trending topics. From the most to the least prevalent, the topics that were identified include “Tumor Vascularity”, “Stroke Assessment”, “Myocardial Perfusion”, “Intracerebral Hemorrhage”, “Imaging Optimization”, “Reperfusion Therapy”, “Postprocessing”, “Carotid Artery Disease”, “Seizures”, “Hemorrhagic Transformation”, “Artificial Intelligence”, and “Moyamoya Disease”. The model provided insights into the trends of the current decade, highlighting “Postprocessing” and “Artificial Intelligence” as the most trending topics.
Figures 1–4 (captions only): word clouds representing each topic, with keyword size indicating frequency; topic prominence by publication year; citation quartiles; and trends in this decade.
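The study derives its topics with the BERTopic model. The exact configuration is not given here, so the sketch below only illustrates typical BERTopic usage on a corpus of abstracts; the parameter values and placeholder documents are assumptions, and in practice the corpus would be the CTP abstracts retrieved from Scopus.

```python
from bertopic import BERTopic

# Placeholder corpus; in practice, use the full set of retrieved CTP
# titles/abstracts (hundreds to thousands of documents are needed).
abstracts = [
    "CT perfusion thresholds for ischemic core estimation in acute stroke ...",
    "Permeability imaging of glioma vascularity with perfusion CT ...",
    # ...
]

topic_model = BERTopic(min_topic_size=10, verbose=True)  # illustrative settings
topics, probs = topic_model.fit_transform(abstracts)

print(topic_model.get_topic_info().head())   # topic sizes and keyword labels
print(topic_model.get_topic(0))              # top keywords of the largest topic
```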
13 pages, 1034 KiB  
Article
Real-World Evidence on the Clinical Characteristics and Management of Patients with Chronic Lymphocytic Leukemia in Spain Using Natural Language Processing: The SRealCLL Study
by Javier Loscertales, Pau Abrisqueta-Costa, Antonio Gutierrez, José Ángel Hernández-Rivas, Rafael Andreu-Lapiedra, Alba Mora, Carolina Leiva-Farré, María Dolores López-Roda, Ángel Callejo-Mellén, Esther Álvarez-García and José Antonio García-Marco
Cancers 2023, 15(16), 4047; https://doi.org/10.3390/cancers15164047 - 10 Aug 2023
Cited by 2 | Viewed by 2993
Abstract
The SRealCLL study aimed to obtain real-world evidence on the clinical characteristics and treatment patterns of patients with chronic lymphocytic leukemia (CLL) using natural language processing (NLP). Electronic health records (EHRs) from seven Spanish hospitals (January 2016–December 2018) were analyzed using EHRead® technology, based on NLP and machine learning. A total of 534 CLL patients were assessed. No treatment was detected in 270 (50.6%) patients (watch-and-wait, W&W). First-line (1L) treatment was identified in 230 (43.1%) patients and relapsed/refractory (2L) treatment was identified in 58 (10.9%). The median age ranged from 71 to 75 years, with a uniform male predominance (54.8–63.8%). The main comorbidities included hypertension (W&W: 35.6%; 1L: 38.3%; 2L: 39.7%), diabetes mellitus (W&W: 24.4%; 1L: 24.3%; 2L: 31%), cardiac arrhythmia (W&W: 16.7%; 1L: 17.8%; 2L: 17.2%), heart failure (W&W 16.3%, 1L 17.4%, 2L 17.2%), and dyslipidemia (W&W: 13.7%; 1L: 18.7%; 2L: 19.0%). The most common antineoplastic treatment was ibrutinib in 1L (64.8%) and 2L (62.1%), followed by bendamustine + rituximab (12.6%), obinutuzumab + chlorambucil (5.2%), rituximab + chlorambucil (4.8%), and idelalisib + rituximab (3.9%) in 1L and venetoclax (15.5%), idelalisib + rituximab (6.9%), bendamustine + rituximab (3.5%), and venetoclax + rituximab (3.5%) in 2L. This study expands the information available on patients with CLL in Spain, describing the diversity in patient characteristics and therapeutic approaches in clinical practice.
Figures 1–2 (captions only): study design and population flow from the seven participating hospitals, analyzed with EHRead® technology; and a chord diagram of treatment switches between first-line and relapsed/refractory second-line regimens.
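Clinical variables were extracted from free-text EHRs with the proprietary EHRead® technology (NLP plus machine learning), which is not reproduced here. The toy sketch below merely illustrates the general idea of lexicon-based detection of treatment mentions in clinical notes; the lexicon and regular expressions are purely illustrative.

```python
import re

# Tiny illustrative lexicon; EHRead(R) itself is a proprietary NLP/ML pipeline
# and is not reproduced here.
TREATMENTS = {
    "ibrutinib": r"\bibrutinib\b",
    "venetoclax": r"\bvenetoclax\b",
    "bendamustine + rituximab": r"\bbendamustine\b.{0,40}\brituximab\b",
    "obinutuzumab + chlorambucil": r"\bobinutuzumab\b.{0,40}\bchlorambucil\b",
}

def find_treatments(ehr_note: str):
    """Return the treatment mentions detected in one free-text EHR note."""
    note = ehr_note.lower()
    return [name for name, pattern in TREATMENTS.items()
            if re.search(pattern, note)]

print(find_treatments("Started ibrutinib 420 mg after progression on "
                      "bendamustine and rituximab."))
# ['ibrutinib', 'bendamustine + rituximab']
```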
24 pages, 11744 KiB  
Review
A Review of Machine Learning Techniques for the Classification and Detection of Breast Cancer from Medical Images
by Reem Jalloul, H. K. Chethan and Ramez Alkhatib
Diagnostics 2023, 13(14), 2460; https://doi.org/10.3390/diagnostics13142460 - 24 Jul 2023
Cited by 12 | Viewed by 5226
Abstract
Cancer is an incurable disease based on unregulated cell division. Breast cancer is the most prevalent cancer in women worldwide, and early detection can lower death rates. Medical images provide important information for locating, identifying, and diagnosing breast cancer. This paper reviews the history of the discipline and examines how deep learning and machine learning are applied to detect breast cancer. The classification of breast cancer using several medical imaging modalities is covered, and classification systems for tumors, non-tumors, and dense masses across these modalities are thoroughly explained. The differences between various medical image types are first examined using a variety of study datasets. Machine learning and deep learning methods for diagnosing and classifying breast cancer are then surveyed. Finally, the review addresses the challenges of classification and detection and the best results of the different approaches.
Figures 1–8 (captions only): the relationship between artificial intelligence, machine learning, and deep learning; the input and output architecture of RNNs; mammography images from the DDSM dataset (Kaggle); ultrasound images (Kaggle); breast histopathology images; thermography images from the Irthermo database; positron emission tomography scan images; and the different medical image types used.
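Many of the reviewed studies classify breast images with transfer learning on a pretrained CNN backbone. The sketch below shows one generic setup of that kind (benign vs. malignant); the ResNet-18 backbone, input size, and training hyperparameters are assumptions, not drawn from the review.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic transfer-learning classifier (assumed backbone and sizes).
model = models.resnet18(weights=None)          # load pretrained weights in practice
model.fc = nn.Linear(model.fc.in_features, 2)  # two-class head: benign vs. malignant

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.rand(8, 3, 224, 224)            # stand-in batch of breast images
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()                               # one illustrative training step
print(float(loss))
```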