
Search Results (479)

Search Parameters:
Journal = Diagnostics
Section = General

10 pages, 374 KiB  
Article
The Systemic Risk Factors for the Development of Infectious Keratitis after Penetrating Keratoplasty: A Population-Based Cohort Study
by Yung-Nan Hsu, Whei-Ling Chiang, Jing-Yang Huang, Chia-Yi Lee, Shih-Chi Su and Shun-Fa Yang
Diagnostics 2024, 14(18), 2013; https://doi.org/10.3390/diagnostics14182013 - 11 Sep 2024
Abstract
Penetrating keratoplasty (PK) is a corneal surgery employed to repair full-thickness corneal lesions. This study aimed to survey possible systemic risk factors for infectious keratitis after PK using the Taiwan National Health Insurance Research Database (NHIRD). A retrospective case–control study was conducted, and 327 patients who received PK were enrolled after exclusion. The main outcome was the development of infectious keratitis, and patients were divided into those who developed infectious keratitis and those who did not. Cox proportional hazard regression was conducted to produce adjusted hazard ratios (aHRs) and 95% confidence intervals (CIs) for the effects of specific demographic indexes and systemic diseases on infectious keratitis. Over the whole follow-up period, 68 patients developed infectious keratitis. The diabetes mellitus (DM) (aHR: 1.440, 95% CI: 1.122–2.874, p = 0.0310) and chronic ischemic heart disease (aHR: 1.534, 95% CI: 1.259–3.464, p = 0.0273) groups demonstrated a significant association with infectious keratitis. The DM group also showed a significant influence on infectious keratitis development in all subgroups (all p < 0.05), whereas the effect of chronic ischemic heart disease on infectious keratitis was significant only in those aged over 60 years (p = 0.0094) and in both sexes (both p < 0.05). In conclusion, DM and chronic ischemic heart disease are associated with infectious keratitis after PK. However, local risk factors for infectious keratitis in those receiving PK were not evaluated.
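For readers who want the flavor of the survival analysis named above, here is a minimal sketch of a Cox proportional-hazards fit using the lifelines Python package. The file and column names (pk_cohort.csv, followup_days, keratitis, diabetes, ischemic_heart_disease) are hypothetical stand-ins for the NHIRD variables; this is not the authors' code.

```python
# Minimal Cox proportional-hazards sketch (lifelines); names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("pk_cohort.csv")  # hypothetical extract of the PK cohort

cph = CoxPHFitter()
cph.fit(
    df[["followup_days", "keratitis", "diabetes", "ischemic_heart_disease"]],
    duration_col="followup_days",  # time to event or censoring
    event_col="keratitis",         # 1 = infectious keratitis developed
)
cph.print_summary()  # adjusted HRs with 95% CIs, as reported in the abstract
```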
Figure 1. The flowchart of patient selection. PK: penetrating keratoplasty; N: number.
14 pages, 10901 KiB  
Article
EpidermaQuant: Unsupervised Detection and Quantification of Epidermal Differentiation Markers on H-DAB-Stained Images of Reconstructed Human Epidermis
by Dawid Zamojski, Agnieszka Gogler, Dorota Scieglinska and Michal Marczyk
Diagnostics 2024, 14(17), 1904; https://doi.org/10.3390/diagnostics14171904 - 29 Aug 2024
Abstract
The integrity of reconstructed human epidermis generated in vitro can be assessed using histological analyses combined with immunohistochemical staining of keratinocyte differentiation markers. Technical differences during the preparation and capture of stained images may influence the outcome of computational methods. Due to the specific nature of the analyzed material, no annotated datasets or dedicated methods are publicly available. Using a dataset of 598 unannotated images showing cross-sections of in vitro reconstructed human epidermis, stained with a DAB-based immunohistochemistry reaction to visualize four different keratinocyte differentiation marker proteins (filaggrin, keratin 10, Ki67, HSPA2) and counterstained with hematoxylin, we developed an unsupervised method for the detection and quantification of immunohistochemical staining. The pipeline consists of the following steps: (i) color normalization; (ii) color deconvolution; (iii) morphological operations; (iv) automatic image rotation; and (v) clustering. The most effective combination of methods comprises (i) Reinhard's normalization; (ii) the Ruifrok and Johnston color-deconvolution method; (iii) the proposed image-rotation method based on the boundary distribution of image intensity; and (iv) k-means clustering. The results should enhance the performance of quantitative analyses of protein markers in reconstructed human epidermis samples and enable the comparison of their spatial distribution between different experimental conditions.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
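Two of the pipeline steps lend themselves to a compact illustration: Ruifrok–Johnston color deconvolution (which scikit-image implements for H-DAB stains as rgb2hed) and k-means segmentation of the DAB channel. The sketch below is a simplified reading of the abstract, with a hypothetical file name; Reinhard normalization and rotation are omitted for brevity.

```python
# Sketch: Ruifrok-Johnston deconvolution + k-means on the DAB channel.
import numpy as np
from skimage import io
from skimage.color import rgb2hed
from sklearn.cluster import KMeans

img = io.imread("rhe_section.png")[..., :3]  # hypothetical H-DAB image
dab = rgb2hed(img)[..., 2]                   # DAB channel after deconvolution

labels = KMeans(n_clusters=3, n_init=10).fit_predict(dab.reshape(-1, 1))
# Treat the cluster with the strongest mean DAB signal as stained tissue.
stained = np.argmax([dab.reshape(-1)[labels == k].mean() for k in range(3)])
pct_dab = 100 * (labels == stained).mean()   # % DAB-stained pixels, cf. Figure 6
print(f"DAB occupancy: {pct_dab:.1f}%")
```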
Figure 1. An example of IHC images of reconstructed human epidermis stained for: (A) FLG marker; (B) K10 marker; (C) Ki67 marker; (D) HSPA2 marker.
Figure 2. EpidermaQuant algorithm flowchart. The main steps of the pipeline include image color normalization, image color deconvolution, background detection, image rotation and crop, an initial DAB-area detection step where the algorithm decides whether the image should be further analyzed, image segmentation, and calculation of the DAB percentage on tissue.
Figure 3. Comparison of normalization (A) and color-deconvolution (B) methods with the original image, depending on the selected marker, using exemplary images.
Figure 4. Subsequent steps of background removal (A) and comparison of two image-rotation methods for an exemplary image of the K10 marker (B).
Figure 5. Example of a DAB image segmentation result. Regions of interest for markers of human epidermal differentiation are segmented by the k-means method. The selected cluster outlines the DAB-stained areas (red line) on the original image with additional hematoxylin staining (blue).
Figure 6. The percentage of DAB-stained tissue. The graph shows the percentage of DAB occupancy on the slide in each sample by study marker group.
Figure 7. Exemplary results of EpidermaQuant on the HPA dataset. For each analyzed marker (rows), images with the lowest and the highest estimated percentage of DAB occupancy were selected. The red line indicates the estimated area of the tissue.
Figure 8. Summary of the ablation study using the K10-marker images. All results are compared to the full pipeline. Panel (A) shows the computational time comparison for selected pipeline configurations, while panel (B) shows the difference in the percentage of DAB occupancy.
14 pages, 1275 KiB  
Article
VerFormer: Vertebrae-Aware Transformer for Automatic Spine Segmentation from CT Images
by Xinchen Li, Yuan Hong, Yang Xu and Mu Hu
Diagnostics 2024, 14(17), 1859; https://doi.org/10.3390/diagnostics14171859 - 25 Aug 2024
Abstract
The accurate and efficient segmentation of the spine is important in the diagnosis and treatment of spine malfunctions and fractures. However, it remains challenging because of large inter-vertebra variations in shape and cross-image localization of the spine. In previous methods, convolutional neural networks (CNNs) have been widely applied as a vision backbone to tackle this task. However, these methods struggle to utilize the global contextual information across the whole image for accurate spine segmentation because of the inherent locality of the convolution operation. Compared with CNNs, the Vision Transformer (ViT) has been proposed as another vision backbone with a high capacity to capture global contextual information. However, when the ViT is employed for spine segmentation, it treats all input tokens equally, including vertebrae-related and non-vertebrae-related tokens, and it lacks the capability to locate regions of interest, thus lowering the accuracy of spine segmentation. To address this limitation, we propose a novel Vertebrae-aware Vision Transformer (VerFormer) for automatic spine segmentation from CT images. Our VerFormer is designed by incorporating a novel Vertebrae-aware Global (VG) block into the ViT backbone. In the VG block, vertebrae-related global contextual information is extracted by a Vertebrae-aware Global Query (VGQ) module. This information is then incorporated into query tokens to highlight vertebrae-related tokens in the multi-head self-attention module. Thus, the VG block can leverage global contextual information to effectively and efficiently locate spines across the whole input, improving the segmentation accuracy of VerFormer. Driven by this design, VerFormer demonstrates a solid capacity to capture more discriminative dependencies and vertebrae-related context in automatic spine segmentation. The experimental results on two spine CT segmentation tasks demonstrate the effectiveness of our VG block and the superiority of our VerFormer in spine segmentation. Compared with other popular CNN- or ViT-based segmentation models, our VerFormer shows superior segmentation accuracy and generalization.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
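The VGQ idea can be sketched loosely in PyTorch: a channel (SE-style) path and a spatial path re-weight the query tokens before multi-head self-attention. All dimensions, reduction ratios, and wiring below are assumptions made for illustration, not the paper's code.

```python
# Loose sketch of a vertebrae-aware query block: channel + spatial attention
# highlight query tokens before standard multi-head self-attention.
import torch
import torch.nn as nn

class VGQSketch(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.se = nn.Sequential(              # channel-attention path (SE layer)
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(dim, dim // 4), nn.ReLU(),
            nn.Linear(dim // 4, dim), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())  # spatial path
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                     # x: (batch, tokens, dim)
        ch = self.se(x.transpose(1, 2)).unsqueeze(1)  # (batch, 1, dim)
        sp = self.spatial(x)                          # (batch, tokens, 1)
        q = x * ch * sp                               # highlighted query tokens
        out, _ = self.attn(q, x, x)
        return out
```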
Figure 1. The overall architecture of the VerFormer. It consists of an encoder, a bottleneck, and a decoder. Both the encoder and the decoder consist of three stages, and in each stage, two successive VG blocks are used for learning feature representations. Two successive VG blocks are used in the bottleneck. A 3×3 convolutional layer with a stride of 2 is used for downsampling in each stage, and a 2×2 transposed convolutional layer with a stride of 2 is used for upsampling in each stage. Two 3×3 convolutional layers with a stride of 2 are used for the patch partition. Two 2×2 transposed convolution layers with a stride of 2 are used in the final projection.
Figure 2. The overall architecture of the Vertebrae-aware Global (VG) block. In the VG block, a VGQ module is utilized to extract vertebrae-aware contextual information to highlight vertebrae-related query tokens. Then, these tokens are encoded in the multi-head self-attention (MHSA) module. In the VGQ module, two parallel paths are employed to extract global vertebrae-aware contextual information: a channel attention path and a spatial attention path. In the channel attention path, an SE layer is used by cascading an average pooling (AvgPool) layer, a fully connected (FC) layer, a ReLU activation function, and a fully connected (FC) layer.
Figure 3. The mechanism of the Vertebrae-aware Global Query (VGQ) module. The input image is partitioned into patches, and these patches are converted into input query tokens. The VGQ module is utilized to extract vertebrae-aware contextual information by channel and spatial attention mechanisms. Then, vertebrae-related patches or tokens are highlighted by this contextual information. Thus, the VGQ module can leverage global information to locate the spine across the whole image, improving the segmentation capabilities of our VerFormer on the spine from CT images.
Figure 4. The visualization of segmentation results from our VerFormer. (A) Results on the VerSe 2019 dataset. (B) Results on the VerSe 2020 dataset.
15 pages, 1070 KiB  
Article
Reproducibility and Repeatability in Focus: Evaluating LVEF Measurements with 3D Echocardiography by Medical Technologists
by Marc Østergaard Nielsen, Arlinda Ljoki, Bo Zerahn, Lars Thorbjørn Jensen and Bent Kristensen
Diagnostics 2024, 14(16), 1729; https://doi.org/10.3390/diagnostics14161729 - 9 Aug 2024
Abstract
Three-dimensional echocardiography (3DE) is currently the preferred method for monitoring left ventricular ejection fraction (LVEF) in cancer patients receiving potentially cardiotoxic anti-neoplastic therapy. In Denmark, however, the traditional standard for LVEF monitoring has been rooted in nuclear medicine departments utilizing equilibrium radionuclide angiography (ERNA). Although ERNA remains a principal modality, there is an emerging trend towards the adoption of echocardiography for this purpose. Given this context, assessing the reproducibility of 3DE among non-specialized medical personnel is crucial for its clinical adoption in such departments. To assess the feasibility of 3DE for LVEF measurements by technologists, we evaluated the repeatability and reproducibility of two moderately experienced technologists, who performed 3DE on 12 volunteers over two sessions, with a collaborative review of the results from the first session before the second. Two-way intraclass correlation values increased from 0.03 to 0.77 across the sessions. This increase in agreement was mainly due to the recognition of falsely low measurements. Our findings underscore the importance of incorporating reproducibility exercises in the context of 3DE, especially when operated by technologists. Additionally, routine control of the acquisitions by physicians is deemed necessary. Provided these hurdles are adequately managed, 3DE for LVEF measurements can be adopted by technologists.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
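The two-way intraclass correlation reported above is straightforward to compute; here is a minimal sketch using the pingouin package on a long-format table. The file and column names (subject, operator, lvef) are assumptions, not the study's data.

```python
# Sketch of a two-way ICC computation for inter-rater agreement (pingouin).
import pandas as pd
import pingouin as pg

df = pd.read_csv("lvef_measurements.csv")  # hypothetical: subject, operator, lvef
icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="operator", ratings="lvef")
print(icc[["Type", "ICC", "CI95%"]])  # e.g., ICC2 rose from 0.03 to 0.77 here
```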
Figure 1. Boxplots of LVEF measurements by OP1 (a) and OP2 (b), further split by sex and measurement session.
Figure 2. Strip-dot plots of LVEF percentages by OP1 (a) and OP2 (b). Triangles represent M1 and circles M2.
Figure 3. Bland–Altman plot with LOA (solid, colored lines) and confidence intervals (shaded grey areas). Triangles represent M1 and circles M2. The x-axis represents the average LVEF of OP1 and OP2, while the y-axis represents the difference.
Figure A1. Variability chart of the total measurement dataset generated using the "VCR" R package. Individual measurements are depicted as points, with the red plus symbol representing the mean of each replicate set. Grey horizontal bars indicate the mean of all measurements for each subject, while light-blue bars represent the means of the entire measurement session.
Figure A2. Paired-line plots or "spaghetti plots", representing the change in mean LVEF between measurement sessions for operator 1 in panel (a) and operator 2 in panel (b).
19 pages, 5027 KiB  
Article
Brain Tumor Detection and Classification Using an Optimized Convolutional Neural Network
by Muhammad Aamir, Abdallah Namoun, Sehrish Munir, Nasser Aljohani, Meshari Huwaytim Alanazi, Yaser Alsahafi and Faris Alotibi
Diagnostics 2024, 14(16), 1714; https://doi.org/10.3390/diagnostics14161714 - 7 Aug 2024
Abstract
Brain tumors are a leading cause of death globally, with numerous types varying in malignancy; only 12% of adults diagnosed with brain cancer survive beyond five years. This research introduces a hyperparametric convolutional neural network (CNN) model to identify brain tumors, with significant practical implications. By fine-tuning the hyperparameters of the CNN model, we optimize feature extraction and systematically reduce model complexity, thereby enhancing the accuracy of brain tumor diagnosis. The critical hyperparameters include batch size, layer counts, learning rate, activation functions, pooling strategies, padding, and filter size. The hyperparameter-tuned CNN model was trained on three different brain MRI datasets available on Kaggle, producing outstanding performance scores, with an average value of 97% for accuracy, precision, recall, and F1-score. Our optimized model is effective, as demonstrated by methodical comparisons with state-of-the-art approaches. Our hyperparameter modifications enhanced the model's performance and strengthened its capacity for generalization, giving medical practitioners a more accurate and effective tool for making crucial judgments regarding brain tumor diagnosis. Our model is a significant step toward trustworthy and accurate medical diagnosis, with practical implications for improving patient outcomes.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
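To make the listed hyperparameters concrete, here is a sketch of a small Keras CNN with those knobs (filters, kernel size, layer count, pooling strategy, activation, learning rate) exposed as arguments. The default values and input shape are illustrative assumptions, not the paper's tuned configuration.

```python
# Sketch: a CNN builder exposing the hyperparameters named in the abstract.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(filters=32, kernel_size=3, n_conv=3, pool="max",
              activation="relu", learning_rate=1e-3, n_classes=4):
    model = keras.Sequential([keras.Input(shape=(224, 224, 3))])
    Pool = layers.MaxPooling2D if pool == "max" else layers.AveragePooling2D
    for i in range(n_conv):  # widen the filter bank at each depth level
        model.add(layers.Conv2D(filters * 2**i, kernel_size,
                                padding="same", activation=activation))
        model.add(Pool())
    model.add(layers.Flatten())
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer=keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```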
Figure 1. Sample images from Dataset 2.
Figure 2. Sample images from Dataset 3.
Figure 3. Samples from Dataset 1.
Figure 4. The workflow of the proposed model.
Figure 5. The confusion matrix for Dataset 1.
Figure 6. The confusion matrix for Dataset 2.
Figure 7. The confusion matrix for Dataset 3.
Figure 8. Accuracy of training and validation for Dataset 1.
Figure 9. Training and validation loss for Dataset 1.
Figure 10. Accuracy of training and validation for Dataset 2.
Figure 11. Training and validation loss for Dataset 2.
Figure 12. Accuracy of training and validation for Dataset 3.
Figure 13. Training and validation loss for Dataset 3.
14 pages, 2341 KiB  
Article
Evaluation of Machine Learning Classification Models for False-Positive Reduction in Prostate Cancer Detection Using MRI Data
by Malte Rippa, Ruben Schulze, Georgia Kenyon, Marian Himstedt, Maciej Kwiatkowski, Rainer Grobholz, Stephen Wyler, Alexander Cornelius, Sebastian Schindera and Felice Burn
Diagnostics 2024, 14(15), 1677; https://doi.org/10.3390/diagnostics14151677 - 2 Aug 2024
Abstract
In this work, several machine learning (ML) algorithms, both classical ML and modern deep learning, were investigated for their ability to improve the performance of a pipeline for the segmentation and classification of prostate lesions using MRI data. The algorithms were used to perform a binary classification of benign and malignant tissue visible in MRI sequences. The model choices include support vector machines (SVMs), random decision forests (RDFs), and multi-layer perceptrons (MLPs), along with radiomic features reduced by applying PCA or mRMR feature selection. Modern CNN-based architectures, such as ConvNeXt, ConvNet, and ResNet, were also evaluated in various setups, including transfer learning. To optimize performance, different approaches were compared and applied to whole images as well as to gland, peripheral zone (PZ), and lesion segmentations. The contribution of this study is an investigation of several ML approaches regarding their performance in prostate cancer (PCa) diagnosis algorithms, delivering insights into the applicability of the different approaches based on an exhaustive examination. The outcome is a recommendation for which machine learning model or family of models is best suited to optimize an existing pipeline when applied as an upstream filter.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
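One classical-ML arm described above (radiomic features reduced with PCA, then an SVM classifier) can be sketched as a scikit-learn pipeline. The synthetic feature matrix stands in for real radiomic features, and the component count is an arbitrary assumption.

```python
# Sketch: standardized radiomic features -> PCA -> SVM, with 5-fold CV AUC.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a (lesions x radiomic features) matrix.
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=15, random_state=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC())
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"mean CV AUC: {auc:.2f}")
```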
Figure 1. Simplified setup of a sequential lesion segmentation/classification pipeline.
Figure 2. Pipeline evaluation mode 1: all classifiers that do not require lesion segmentations are placed at the beginning of the pipeline or after the model that predicts the required anatomy.
Figure 3. Pipeline evaluation mode 2: the lesion classifiers are placed at the end of the regular pipeline to produce a classification of the lesion candidates selected by the existing pipeline segmentation models.
Figure 4. MSE loss distribution with respect to the two classes.
Figure 5. Average 5-fold cross-validation loss (binary cross-entropy) for the training, validation, and test partitions on the whole image with the Genesis model over 100 epochs.
Figure 6. Average 5-fold cross-validation AUC-ROC for the training, validation, and test partitions on the whole image with the Genesis model over 100 epochs.
14 pages, 2354 KiB  
Article
Identifying Acute Aortic Syndrome and Thoracic Aortic Aneurysm from Chest Radiography in the Emergency Department Using Convolutional Neural Network Models
by Yang-Tse Lin, Bing-Cheng Wang and Jui-Yuan Chung
Diagnostics 2024, 14(15), 1646; https://doi.org/10.3390/diagnostics14151646 - 30 Jul 2024
Abstract
(1) Background: Identifying acute aortic syndrome (AAS) and thoracic aortic aneurysm (TAA) in busy emergency departments (EDs) is crucial due to their life-threatening nature, necessitating timely and accurate diagnosis. (2) Methods: This retrospective case-control study was conducted in the EDs of three hospitals. Adult patients visiting the ED between 1 January 2010 and 1 January 2020 with a chief complaint of chest or back pain were enrolled. The collected chest radiographs (CXRs) were divided into training (80%) and testing (20%) datasets, and four different convolutional neural network (CNN) models were trained on the training dataset. (3) Results: A total of 1625 patients were enrolled in this study. The InceptionV3 model achieved the highest F1 score of 0.76. (4) Conclusions: Analysis of CXRs using a CNN-based model provides a novel tool for clinicians to interpret ED patients with chest pain and suspected AAS or TAA. The integration of such imaging tools into the ED could be considered in the future to enhance the diagnostic workflow for clinically fatal diseases.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
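Since InceptionV3 performed best here, a transfer-learning sketch in Keras may help: freeze the ImageNet backbone and train a small binary head for CXR triage. The input size, dropout rate, and two-phase training hint are assumptions, not the authors' exact setup.

```python
# Sketch: InceptionV3 transfer learning for binary CXR triage.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                      input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # freeze ImageNet features for an initial training phase

model = keras.Sequential([base,
                          layers.Dropout(0.3),
                          layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.Precision(), keras.metrics.Recall()])
```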
Figure 1. The flow chart of the study.
Figure 2. The images on the left (A,B) show the location of a possible aortic lesion on the CXR marked by the CNN model (VGG19); these areas are colored by CAM to make them easy for users to interpret. The pictures on the right (C,D) are CTA images of the same patient showing the true location of the aortic lesion. Comparison with the CAM image shows that the CNN model correctly interpreted the location of the lesion on the CXRs. Arrow: false lumen of aortic dissection.
Figure 3. Image enhancement and ROI selection of original images for better image quality and normalization before model training.
Figure 4. Image augmentation: original images were rotated by 3 degrees clockwise and counterclockwise.
Figure 5. (A1–A3) The CAM images of the original CXRs interpreted by the CNN model show that some feature-extraction areas are not logically related to the aortic structure. In contrast, (B1–B3) displays the results after the ACF-generated feature-extraction model processed the images. Training the CNN model with ROI-focused images shows that the model focuses more on the anatomical structures of the mediastinum, without being distracted by surrounding organs.
1 pages, 120 KiB  
Editorial
Diagnostics Increases Visibility
by Andreas Kjaer
Diagnostics 2024, 14(14), 1554; https://doi.org/10.3390/diagnostics14141554 - 18 Jul 2024
Abstract
It is with great pleasure that we announce that our journal has recently received the 2023 CiteScore of 4 [...]
11 pages, 1014 KiB  
Article
Optimizing GPT-4 Turbo Diagnostic Accuracy in Neuroradiology through Prompt Engineering and Confidence Thresholds
by Akihiko Wada, Toshiaki Akashi, George Shih, Akifumi Hagiwara, Mitsuo Nishizawa, Yayoi Hayakawa, Junko Kikuta, Keigo Shimoji, Katsuhiro Sano, Koji Kamagata, Atsushi Nakanishi and Shigeki Aoki
Diagnostics 2024, 14(14), 1541; https://doi.org/10.3390/diagnostics14141541 - 17 Jul 2024
Cited by 1
Abstract
Background and Objectives: Integrating large language models (LLMs) such as GPT-4 Turbo into diagnostic imaging faces a significant challenge, with current misdiagnosis rates ranging from 30% to 50%. This study evaluates how prompt engineering and confidence thresholds can improve diagnostic accuracy in neuroradiology. Methods: We analyzed 751 neuroradiology cases from the American Journal of Neuroradiology using GPT-4 Turbo with customized prompts to improve diagnostic precision. Results: Initially, GPT-4 Turbo achieved a baseline diagnostic accuracy of 55.1%. By reformatting responses to list five diagnostic candidates and applying a 90% confidence threshold, the precision of the top diagnosis increased to 72.9%, with the candidate list containing the correct diagnosis in 85.9% of cases, reducing the misdiagnosis rate to 14.1%. However, this threshold reduced the number of cases for which the model responded. Conclusions: Strategic prompt engineering and high confidence thresholds significantly reduce misdiagnoses and improve the precision of LLM diagnostics in neuroradiology. More research is needed to optimize these approaches for broader clinical implementation, balancing accuracy and utility.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
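The prompt-and-threshold strategy can be sketched with the OpenAI Python client: ask for five ranked candidates with self-reported confidences, then abstain below the threshold. The prompt wording, JSON schema, and the assumption that the model returns parseable JSON are all illustrative, not the study's protocol.

```python
# Sketch: five ranked candidates + a 90% confidence threshold for abstention.
import json
from openai import OpenAI

client = OpenAI()
PROMPT = ("List the five most likely diagnoses for the following case as a JSON "
          'array [{"diagnosis": str, "confidence": float}], confidence in percent, '
          "most likely first. Return only JSON.")

def diagnose(case_text, threshold=90.0):
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "system", "content": PROMPT},
                  {"role": "user", "content": case_text}],
    )
    candidates = json.loads(resp.choices[0].message.content)  # assumes valid JSON
    top = candidates[0]
    # Answer only when the top candidate clears the threshold; otherwise abstain,
    # trading coverage for precision as described in the abstract.
    return top if top["confidence"] >= threshold else None
```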
Figure 1. Proportion of total cases by disease category. This pie chart shows the distribution of the 751 clinical cases across disease categories.
Figure 2. Evaluation of GPT-4 Turbo responses in 751 neuroradiology cases.
Figure 3. Impact of confidence thresholds on diagnostic performance.
12 pages, 2020 KiB  
Article
Differentiation of Acute Internal Carotid Artery Occlusion Etiology on Computed Tomography Angiography: Diagnostic Tree for Preparing Endovascular Treatment
by Bo Kyu Kim, Byungjun Kim and Sung-Hye You
Diagnostics 2024, 14(14), 1524; https://doi.org/10.3390/diagnostics14141524 - 15 Jul 2024
Abstract
Background and Purpose: This study aimed to identify the imaging characteristics and discriminate the etiology of acute internal carotid artery occlusion (ICAO) on computed tomography angiography (CTA) in patients with acute ischemic stroke. Materials and Methods: We retrospectively evaluated consecutive patients who underwent endovascular thrombectomy for acute ICAO. Contrast filling of the extracranial ICA on preprocedural CTA was considered apparent ICAO. Non-contrast filling of the extracranial ICA was evaluated according to the contrast-filled lumen configuration, lumen margin and location, Hounsfield units of the non-attenuating segment, and the presence of calcification or an intimal flap. Digital subtraction angiography findings were the reference standard for ICAO etiology and the occlusion site. A diagnostic tree was derived using significant variables to distinguish pseudo-occlusion, atherosclerotic vascular disease (ASVD), thrombotic occlusion, and dissection. Results: A total of 114 patients showed apparent ICAO (n = 21), pseudo-occlusion (n = 51), ASVD (n = 27), thrombotic occlusion (n = 9), or dissection (n = 6). Most pseudo-occlusions (50/51, 98.0%) showed dependent locations with ill-defined contrast column margins and classic flame or beak shapes. The most common occlusion site in pseudo-occlusion was the petro-cavernous ICA (n = 32, 62.7%). Apparent ICAO mainly appeared in cases with occlusion distal to the posterior communicating artery orifice. ASVD showed beak or blunt shapes in the presence of low-density plaques or dense calcifications. Dissection revealed flame- or beak-shaped appearances with circumscribed margins. Thrombotic occlusions tended to appear blunt-shaped. The decision-tree model showed a 92.5% overall accuracy. Conclusions: CTA characteristics may help diagnose ICAO etiology. We provide a simple and easy decision-making model to inform endovascular thrombectomy.
(This article belongs to the Topic Diagnosis and Management of Acute Ischemic Stroke)
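Deriving a diagnostic tree from coded CTA findings is a standard scikit-learn exercise; the sketch below shows the mechanics. The feature names mirror the abstract, but the binary encodings and the random placeholder data are assumptions — on random data the cross-validated accuracy will be at chance level, versus the 92.5% reported on the real cohort.

```python
# Sketch: fitting and printing a diagnostic tree from coded CTA findings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["flame_shape", "beak_shape", "blunt_shape", "ill_defined_margin",
                 "dependent_location", "hu_value", "calcification", "intimal_flap"]
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(114, len(feature_names))).astype(float)  # placeholders
y = rng.integers(0, 4, size=114)  # 0-3: pseudo-occlusion/ASVD/thrombotic/dissection

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))  # human-readable rules
print(cross_val_score(tree, X, y, cv=5).mean())
```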
Figure 1. Representative computed tomography angiography (CTA) images and digital subtraction angiography (DSA) findings of internal carotid artery occlusion (ICAO) etiologies. First row: a patient with ICA terminus occlusion shows non-contrast filling of the left cervical ICA with a flame shape. In the axial image, the contrast is in the dependent portion, with an ill-defined margin. On DSA, a delayed angiogram confirms the occlusion site at the ophthalmic ICA (arrow). After recanalization, the ICA shows no underlying stenosis. Second row: CTA shows ICAO due to a low-density plaque with dense calcification. The DSA image shows a blunt-shaped ICA occlusion. The wire had difficulty passing through the occluded segment. After carotid stenting, the aspiration catheter could pass through the stent and navigate to the intracranial tandem occlusion in the middle cerebral artery (arrow). Third row: images of a patient with ICA terminus occlusion with a blunt-shaped proximal ICA. On DSA, the occlusion started from the mid-cervical ICA. Upon contact aspiration using a large-bore aspiration catheter, a thrombus was retrieved attached to the catheter tip (arrows). Multiple passes of contact aspiration resulted in the successful removal of a large amount of thrombus filling the ICA from the mid-cervical segment to the terminus. Fourth row: images of a patient with ICA terminus occlusion. The CTA image shows a beak shape and circumscribed margin of the cervical ICA. The common carotid angiogram shows a flame-shaped appearance with an intimal flap. The false lumen with stagnated contrast media is located posteriorly (arrow). After placing the guiding catheter in the true lumen, the roadmap image shows a dissecting flap and an intracranial tandem occlusion. Bottom row: images of a patient with apparent ICAO. CTA and DSA show ICA occlusion distal to the posterior communicating artery orifice. The contrast reaches the intracranial ICA and fills the ophthalmic and posterior communicating arteries.
Figure 2. Flowchart of the patients included in this study. ACA, anterior cerebral artery; CAS, carotid artery stenting; CCA, common carotid artery; CTA, computed tomography angiography; ICA, internal carotid artery; LVO, large vessel occlusion; MCA, middle cerebral artery; MRA, magnetic resonance angiography.
Figure 3. Diagnostic tree for the differential diagnosis of cervical internal carotid artery non-attenuation. ASVD, atherosclerotic vascular disease; ICA, internal carotid artery; HU, Hounsfield unit.
Figure 4. Pathologies of ICA occlusion according to the rudimentary column shape. ASVD, atherosclerotic vascular disease.
27 pages, 12614 KiB  
Article
Attention-Based Deep Learning Approach for Breast Cancer Histopathological Image Multi-Classification
by Lama A. Aldakhil, Haifa F. Alhasson and Shuaa S. Alharbi
Diagnostics 2024, 14(13), 1402; https://doi.org/10.3390/diagnostics14131402 - 1 Jul 2024
Abstract
Breast cancer diagnosis from histopathology images is often time-consuming and prone to human error, impacting treatment and prognosis. Deep learning diagnostic methods offer the potential for improved accuracy and efficiency in breast cancer detection and classification. However, they struggle with limited data and subtle variations within and between cancer types. Attention mechanisms provide feature refinement capabilities that have shown promise in overcoming such challenges. To this end, this paper proposes the Efficient Channel Spatial Attention Network (ECSAnet), an architecture built on EfficientNetV2 and augmented with a convolutional block attention module (CBAM) and additional fully connected layers. ECSAnet was fine-tuned using the BreakHis dataset, employing Reinhard stain normalization and image augmentation techniques to minimize overfitting and enhance generalizability. In testing, ECSAnet outperformed AlexNet, DenseNet121, EfficientNetV2-S, InceptionNetV3, ResNet50, and VGG16 in most settings, achieving accuracies of 94.2% at 40×, 92.96% at 100×, 88.41% at 200×, and 89.42% at 400× magnification. The results highlight the effectiveness of CBAM in improving classification accuracy and the importance of stain normalization for generalizability.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
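Since Reinhard stain normalization is emphasized here, a short sketch of the method may help: match the channel-wise mean and standard deviation of a source image to a reference target in Lab color space. The function and file names are hypothetical; this is the textbook Reinhard transfer, not the authors' implementation.

```python
# Sketch of Reinhard stain normalization: Lab-space mean/std transfer.
import numpy as np
from skimage import io
from skimage.color import rgb2lab, lab2rgb

def reinhard(source, target):
    src, tgt = rgb2lab(source), rgb2lab(target)
    out = np.empty_like(src)
    for c in range(3):  # transfer per-channel Lab statistics
        out[..., c] = (src[..., c] - src[..., c].mean()) / (src[..., c].std() + 1e-8)
        out[..., c] = out[..., c] * tgt[..., c].std() + tgt[..., c].mean()
    return np.clip(lab2rgb(out), 0, 1)

# usage (hypothetical file names):
# normalized = reinhard(io.imread("breakhis_tile.png")[..., :3],
#                       io.imread("reference_tile.png")[..., :3])
```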
Figure 1. Workflow of the approach employed for breast histology image classification. Our approach begins with dataset processing steps, followed by training the ECSAnet model to extract features and obtain classification predictions, after which we move on to model evaluation and analysis.
Figure 2. Basic architecture of a CNN [28].
Figure 3. Architecture of EfficientNetV2 [17].
Figure 4. Computation processes of the attention modules: (a) CBAM, (b) CAM, and (c) SAM [18].
Figure 5. Architecture of ECSAnet.
Figure 6. Structures of MBConv and Fused-MBConv [17,30,31].
Figure 7. (a) Samples of benign tumor tissues; (b) samples of malignant tumor tissues [32].
Figure 8. Demonstration of Reinhard stain normalization on breast histopathology images. (a) Source images before normalization. (b) The target image providing the reference color distribution. (c) Images after applying the Reinhard method for color normalization.
Figure 9. Demonstration of the implemented data augmentations. Column (a) shows the images in their original state. Column (b) shows the images after stain normalization. Columns (c,d) show the images after horizontal and vertical flips, respectively. Column (e) shows the images after random affine transformations, which include scaling, rotation, and translation. Lastly, column (f) shows the images after AugMix augmentations.
Figure 10. ECSAnet performance on the training set across magnification factors.
Figure 11. ECSAnet performance on the validation set across magnification factors.
Figure 12. Test set confusion matrices for ECSAnet across magnification factors.
Figure 13. Regions of interest identified by the model using Grad-CAM. (a) shows example images of the benign classes, while (b) shows examples of the malignant classes.
Figure 14. Comparison of ECSAnet's performance curves against variations with removed elements.
Figure 15. ECSAnet validation performance compared to other state-of-the-art models at 40× magnification.
Figure 16. ECSAnet validation performance compared to other state-of-the-art models at 100× magnification.
Figure 17. ECSAnet validation performance compared to other state-of-the-art models at 200× magnification.
Figure 18. ECSAnet validation performance compared to other state-of-the-art models at 400× magnification.
Figure 19. Grad-CAM visual explanations, illustrating the predictive focus areas for each class across models, descending vertically from the least accurate (AlexNet) to the most accurate (ECSAnet).
16 pages, 1755 KiB  
Article
Cross-Sectional Area and Echogenicity Reference Values for Sonography of Peripheral Nerves in the Lithuanian Population
by Evelina Grusauskiene, Agne Smigelskyte, Erisela Qerama and Daiva Rastenyte
Diagnostics 2024, 14(13), 1373; https://doi.org/10.3390/diagnostics14131373 - 28 Jun 2024
Abstract
Objectives: We aimed to provide reference values for nerve size and echogenicity in the Lithuanian population. Methods: High-resolution ultrasound was performed bilaterally, according to the Ultrasound Pattern Sum Score and Neuropathy ultrasound protocols, in healthy Lithuanian adults. Cross-sectional area (CSA) and echogenicity were the main parameters investigated. Echogenicity was evaluated using ImageJ, and nerves were categorized into classes according to echogenicity. Results: Of 125 subjects enrolled, 63 were males (mean age 47.57 years, range 25–78 years) and 62 were females (mean age 50.50 years, range 25–80 years). Reference values for nerve size, together with echogenicity values expressed as the fraction of black in percent, were established at standard sites for the cervical roots, the upper and middle trunks of the brachial plexus, and the following nerves: vagal, median, ulnar, radial, superficial radial, tibial, fibular, and sural. Mild to moderate correlations were found between nerve CSAs, echogenicity values, and anthropometric measurements, with differences according to sex. Inter-rater (ICC 0.93; 95% CI 0.92–0.94) and intra-rater (ICC 0.94; 95% CI 0.93–0.95) reliability was excellent. Conclusions: Reference values for nerve size and echogenicity in Lithuanians are presented, the first publication of its kind from the Baltic countries.
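The "fraction of black" echogenicity measure has a simple pixel-level reading, sketched below with scikit-image: convert the nerve ROI to an 8-bit grayscale image (as the ImageJ workflow in Figure 2 does) and report the percentage of pixels below a threshold. The file name and the threshold value are assumptions; the study's exact binarization cut-off is not stated in the abstract.

```python
# Sketch: echogenicity as the percentage of "black" pixels in an 8-bit ROI.
import numpy as np
from skimage import io
from skimage.color import rgb2gray

roi = io.imread("median_nerve_roi.png")  # hypothetical RGB export of a nerve ROI
gray8 = (rgb2gray(roi[..., :3]) * 255).astype(np.uint8)  # 0 = black, 255 = white

threshold = 128                                    # assumed binarization cut-off
fraction_black = 100 * (gray8 < threshold).mean()  # fraction of black, in percent
print(f"fraction of black: {fraction_black:.1f}%")
```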
Figure 1. Cross-sectional area measurement methodology. The arrow points to the median nerve at the forearm. The CSA of the nerve is circled along the hyperechoic epineurium.
Figure 2. The arrow points to the median nerve at the forearm. The image is converted into an 8-bit image, with each pixel calculated in a range between 0 (black) and 255 (white).
Figure 3. Overview of the significant findings among males and females. The figure shows the distribution of CSAs of different nerves between males and females. The bars denote the mean and SD for different measurement sites. Black bars denote females, and blue bars denote males. * p value < 0.05, ** p value < 0.001.
Figure 4. Distribution of nerve echogenicity classes according to gender ((A) upper limb; (B) lower limb, vagal nerve, and brachial plexus). Means and p-values are given for the differences in echogenicity classes between males and females. Black bars denote echogenicity class 1, grey bars class 2, and blue bars class 3; N, subject number.
19 pages, 7513 KiB  
Article
Augmenting Radiological Diagnostics with AI for Tuberculosis and COVID-19 Disease Detection: Deep Learning Detection of Chest Radiographs
by Manjur Kolhar, Ahmed M. Al Rajeh and Raisa Nazir Ahmed Kazi
Diagnostics 2024, 14(13), 1334; https://doi.org/10.3390/diagnostics14131334 - 24 Jun 2024
Abstract
In this research, we introduce a network that can identify pneumonia, COVID-19, and tuberculosis using X-ray images of patients' chests. The study emphasizes tuberculosis, COVID-19, and healthy lung conditions, discussing how advanced neural networks, like VGG16 and ResNet50, can improve the detection of lung issues from images. To prepare the images for the model's input requirements, we enhanced them through data augmentation techniques for training purposes. We evaluated the model's performance by analyzing the precision, recall, and F1 scores across the training, validation, and testing datasets. The results show that the ResNet50 model outperformed VGG16 in both accuracy and resilience, displaying superior ROC AUC values in both validation and test scenarios. Particularly impressive were ResNet50's precision and recall rates, nearing 0.99 for all conditions in the test set. On the other hand, VGG16 also performed well during testing, detecting tuberculosis with a precision of 0.99 and a recall of 0.93. Our study highlights the effectiveness of our deep learning method, showcasing the advantage of ResNet50 over traditional approaches like VGG16. This progress enhances classification accuracy by augmenting and balancing the data, positioning our approach as an advancement in state-of-the-art deep learning applications for imaging. By enhancing the accuracy and reliability of diagnosing ailments such as COVID-19 and tuberculosis, our models have the potential to transform care and treatment strategies, highlighting their role in clinical diagnostics.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
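The on-the-fly augmentation idea emphasized above can be sketched with Keras preprocessing layers. The specific transforms and their ranges are assumptions chosen to be plausible for chest radiographs, not the authors' configuration.

```python
# Sketch: an augmentation stack applied during training (Keras layers).
from tensorflow import keras
from tensorflow.keras import layers

augment = keras.Sequential([
    layers.RandomRotation(0.02),          # small rotations (~±7 degrees)
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.05, 0.05),
    layers.RandomContrast(0.1),
])

# Applied on the fly to a tf.data pipeline, e.g.:
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
# model.fit(train_ds, ...)
```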
Figure 1. Working model under training and validation, along with data preparation and augmentation.
Figure 2. Training and validation results in terms of loss and accuracy for the ResNet50 model.
Figure 3. Training and validation confusion matrices for the ResNet50 model at the last epoch.
Figure 4. Receiver operating characteristic (ROC) curves for the ResNet50 model in diagnosing lung conditions on the validation dataset.
Figure 5. Receiver operating characteristic (ROC) curves for the ResNet50 model in diagnosing lung conditions on the test dataset.
Figure 6. Confusion matrix of testing optimized using augmentation techniques.
Figure 7. Confusion matrix of validation optimized using augmentation techniques.
Figure 8. Confusion matrix of the test dataset optimized using augmentation techniques.
Figure 9. Receiver operating characteristic (ROC) curves for the VGG16 model in diagnosing lung conditions on the test dataset.
Figure 10. Comparison of AUC results for state-of-the-art lung disease detection models [9,10,11,12,13,14,15,16,17,18,19].
14 pages, 3006 KiB  
Article
Lung Disease Detection Using U-Net Feature Extractor Cascaded by Graph Convolutional Network
by Pshtiwan Qader Rashid and İlker Türker
Diagnostics 2024, 14(12), 1313; https://doi.org/10.3390/diagnostics14121313 - 20 Jun 2024
Abstract
Computed tomography (CT) scans have recently emerged as a major technique for the fast diagnosis of lung diseases via image classification techniques. In this study, we propose a method for the diagnosis of COVID-19 with improved accuracy by utilizing graph convolutional networks (GCNs) at various layer formations and kernel sizes to extract features from CT scan images. We apply a U-Net model to aid in segmentation and feature extraction. In contrast with previous research retrieving deep features from convolutional filters and pooling layers, which fail to fully consider the spatial connectivity of the nodes, we employ GCNs for classification and prediction to capture spatial connectivity patterns, which provides a significant association benefit. We use the extracted deep features to form an adjacency matrix that contains a graph structure and pass it to a GCN along with the original image graph and the largest-kernel graph. We combine these graphs to form one block of graph input and then pass it through a GCN with an additional dropout layer to avoid overfitting. Our findings show that the suggested framework, called the feature-extracted graph convolutional network (FGCN), performs better in identifying lung diseases than recently proposed deep learning architectures not based on graph representations. The proposed model also outperforms a variety of transfer learning models commonly used for medical diagnosis tasks, highlighting the abstraction potential of the graph representation over traditional methods.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
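A loose PyTorch Geometric sketch of the core idea follows: U-Net deep features become node features, a k-NN graph supplies the adjacency, and a small GCN with an extra dropout layer classifies the graph. The class name, hidden width, k, and the k-NN adjacency construction are all assumptions (the paper builds its adjacency from the extracted features and combines several graphs); this is not the authors' code, and knn_graph additionally requires the torch-cluster package.

```python
# Loose sketch of the FGCN idea: feature-derived adjacency -> GCN classifier.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool, knn_graph

class FGCNSketch(torch.nn.Module):
    def __init__(self, in_dim, hidden=64, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)
        self.drop = torch.nn.Dropout(0.5)  # extra dropout against overfitting

    def forward(self, x, batch):
        # x: (num_nodes, in_dim) U-Net features; batch maps nodes to samples.
        edge_index = knn_graph(x, k=8, batch=batch)  # adjacency from features
        h = F.relu(self.conv1(x, edge_index))
        h = self.drop(h)
        return global_mean_pool(self.conv2(h, edge_index), batch)
```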
Figure 1. (a) COVID-19 and (b) non-COVID-19 samples [32].
Figure 2. Block diagram of the proposed method.
Figure 3. Feature extraction in the U-Net architecture.
Figure 4. Example of a directed graph derived from an image, with its accompanying adjacency matrix.
Figure 5. Evaluation of graph input data with a graph convolutional network.
Figure 6. Confusion matrices for the models tested.
14 pages, 2022 KiB  
Article
Comparative Analysis of Lymphocyte Populations in Post-COVID-19 Condition and COVID-19 Convalescent Individuals
by Luisa Berger, Johannes Wolf, Sven Kalbitz, Nils Kellner, Christoph Lübbert and Stephan Borte
Diagnostics 2024, 14(12), 1286; https://doi.org/10.3390/diagnostics14121286 - 18 Jun 2024
Abstract
Reduced lymphocyte counts in peripheral blood are one of the most common observations in the acute phases of viral infections. Although many studies have already examined the impact of immune (dys)regulation during SARS-CoV-2 infection, there are still uncertainties about the long-term consequences for lymphocyte homeostasis. Furthermore, as persistent cellular aberrations have been described following other viral infections, patients with "Post-COVID-19 Condition" (PCC) may present similarly. In order to investigate cellular changes in the adaptive immune system, we performed a retrospective analysis of flow cytometric data on lymphocyte subpopulations in 106 patients with confirmed SARS-CoV-2 infection who received medical care at our institution. The patients were divided into three groups according to the follow-up date, and the laboratory analyses of COVID-19 patients were compared with those of 28 unexposed healthy controls (UHC). Regarding B lymphocyte subsets, levels of IgA+CD27+, IgG+CD27+, IgM+CD27− and switched B cells were significantly reduced at the last follow-up compared to UHC. Of the 106 COVID-19 patients, 56 were clinically classified as featuring PCC. Significant differences between PCC patients and COVID-19 convalescents compared to UHC were observed in T helper cells and class-switched B cells. However, we did not detect specific or long-lasting immune cellular changes in PCC compared to the non-post-COVID-19 condition.
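The statistical comparison used here (Shapiro–Wilk normality check, then Mann–Whitney U between groups, per the figure captions below) is easy to sketch with SciPy. The file, column, and group names are assumptions, not the study's data export.

```python
# Sketch: Shapiro-Wilk normality check + Mann-Whitney U group comparison.
import pandas as pd
from scipy.stats import mannwhitneyu, shapiro

df = pd.read_csv("lymphocyte_subsets.csv")  # hypothetical: group, switched_B_cells
pcc = df.loc[df.group == "PCC", "switched_B_cells"]
uhc = df.loc[df.group == "UHC", "switched_B_cells"]

print(shapiro(pcc).pvalue, shapiro(uhc).pvalue)  # normality rarely holds for counts
stat, p = mannwhitneyu(pcc, uhc)                 # hence the non-parametric test
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```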
Figure 1. Representative gating for detection of T, B, and natural killer (NK) cells (A), as well as B cell subpopulations (B). For quantification of lymphocyte subsets, fluorochrome-conjugated monoclonal antibodies against CD45, CD3, CD4, CD8, CD16/CD56, T cell receptor (TCR) γδ, CD19, and CD38 were used. For gating to determine proportions of B cell subpopulations, monoclonal CD19, CD20, CD38, CD138, CD21, CD27, IgG, IgA, and IgM antibodies were applied.
Figure 2. Box plots of the 25th to 75th percentile of B cell subpopulation counts (CD21-low B, IgG+CD27+ B, IgA+CD27+ B, and IgM+CD27+ B cells), as well as natural killer (NK) cells and natural killer T (NKT) cells. The middle line represents the median, and the upper/lower whiskers represent the max/min value within 1.5× the 75th/25th interquartile range, respectively. N = 28 healthy individuals; group 1 (G1) = 85–150 days after symptom onset (n = 21); group 2 (G2) = 151–210 days after symptom onset (n = 46); group 3 (G3) = 211–320 days after onset (n = 39). Statistical testing was performed using the Shapiro–Wilk normality test and Mann–Whitney U test. * p ≤ 0.05, ** p ≤ 0.01.
Figure 3. Box plots of the 25th to 75th percentile of T and B cell counts, as well as T cell subpopulations (T helper, cytotoxic T, activated T helper, activated cytotoxic T, and T cell receptor [TCR] γδ T cells), switched B cells, and transitional B cells. The middle line represents the median, and the upper/lower whiskers represent the max/min value within 1.5× the 75th/25th interquartile range, respectively. N = 28 healthy individuals; group 1 (G1) = 85–150 days after symptom onset (n = 21); group 2 (G2) = 151–210 days after symptom onset (n = 46); group 3 (G3) = 211–320 days after onset (n = 39). Groups 1–3 are colored orange; the control group is depicted in green. Statistical testing was performed using the Shapiro–Wilk normality test and Mann–Whitney U test. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001.