Search Results (18,446)

Search Parameters:
Keywords = classification accuracy

11 pages, 749 KiB  
Article
Caries Detection and Classification in Photographs Using an Artificial Intelligence-Based Model—An External Validation Study
by Elisabeth Frenkel, Julia Neumayr, Julia Schwarzmaier, Andreas Kessler, Nour Ammar, Falk Schwendicke, Jan Kühnisch and Helena Dujic
Diagnostics 2024, 14(20), 2281; https://doi.org/10.3390/diagnostics14202281 (registering DOI) - 14 Oct 2024
Abstract
Objective: This ex vivo diagnostic study aimed to externally validate a freely accessible AI-based model for caries detection, classification, localisation and segmentation using an independent image dataset. It was hypothesised that there would be no difference in diagnostic performance compared to previously published internal validation data. Methods: For the independent dataset, 718 dental images representing different stages of carious (n = 535) and noncarious teeth (n = 183) were retrieved from the internet. All photographs were evaluated by the dental team (reference standard) and the AI-based model (test method). Diagnostic performance was statistically determined using cross-tabulations to calculate accuracy (ACC), sensitivity (SE), specificity (SP) and area under the curve (AUC). Results: An overall ACC of 92.0% was achieved for caries detection, with an ACC of 85.5–95.6%, SE of 42.9–93.3%, SP of 82.1–99.4% and AUC of 0.702–0.909 for the classification of caries. Furthermore, 97.0% of the cases were accurately localised. Fully and partially correct segmentation was achieved in 52.9% and 44.1% of the cases, respectively. Conclusions: The validated AI-based model showed promising diagnostic performance in detecting and classifying caries using an independent image dataset. Future studies are needed to investigate the validity, reliability and practicability of AI-based models using dental photographs from different image sources and/or patient groups. Full article
(This article belongs to the Special Issue Deep Learning in Medical and Biomedical Image Processing)
Figures:
Figure 1: Workflow diagram illustrating the methodological steps.
Figure 2: ROC curves and the corresponding AUCs for all caries classes.
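">
A minimal sketch of how the reported metrics are derived, computing accuracy, sensitivity, specificity, and AUC from a 2 × 2 cross-tabulation and per-image scores; the counts and scores below are hypothetical, not the study's data.

```python
# Illustrative only: accuracy, sensitivity, and specificity from a 2x2
# cross-tabulation, plus AUC from per-image scores (all values hypothetical).
import numpy as np
from sklearn.metrics import roc_auc_score

tp, fn, fp, tn = 500, 35, 22, 161            # hypothetical carious / noncarious counts
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

y_true  = np.array([1, 1, 0, 0, 1, 0])                 # reference standard (1 = carious)
y_score = np.array([0.9, 0.7, 0.4, 0.1, 0.35, 0.2])    # model confidence scores
auc = roc_auc_score(y_true, y_score)

print(f"ACC={accuracy:.3f}  SE={sensitivity:.3f}  SP={specificity:.3f}  AUC={auc:.3f}")
```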
21 pages, 2680 KiB  
Article
Multi-View Soft Attention-Based Model for the Classification of Lung Cancer-Associated Disabilities
by Jannatul Ferdous Esha, Tahmidul Islam, Md. Appel Mahmud Pranto, Abrar Siam Borno, Nuruzzaman Faruqui, Mohammad Abu Yousuf, AKM Azad, Asmaa Soliman Al-Moisheer, Naif Alotaibi, Salem A. Alyami and Mohammad Ali Moni
Diagnostics 2024, 14(20), 2282; https://doi.org/10.3390/diagnostics14202282 (registering DOI) - 14 Oct 2024
Abstract
Background: The detection of lung nodules at their early stages may significantly enhance the survival rate and prevent progression to severe disability caused by advanced lung cancer, but it often requires manual and laborious efforts for radiologists, with limited success. To alleviate it, we propose a Multi-View Soft Attention-Based Convolutional Neural Network (MVSA-CNN) model for multi-class lung nodular classifications in three stages (benign, primary, and metastatic). Methods: Initially, patches from each nodule are extracted into three different views, each fed to our model to classify the malignancy. A dataset, namely the Lung Image Database Consortium Image Database Resource Initiative (LIDC-IDRI), is used for training and testing. The 10-fold cross-validation approach was used on the database to assess the model’s performance. Results: The experimental results suggest that MVSA-CNN outperforms other competing methods with 97.10% accuracy, 96.31% sensitivity, and 97.45% specificity. Conclusions: We hope the highly predictive performance of MVSA-CNN in lung nodule classification from lung Computed Tomography (CT) scans may facilitate more reliable diagnosis, thereby improving outcomes for individuals with disabilities who may experience disparities in healthcare access and quality. Full article
(This article belongs to the Special Issue Artificial Intelligence in Cancers—2nd Edition)
Figures:
Figure 1: Workflow of our proposed work. (1) Data acquisition, (2) data preprocessing, (3) feature extraction and classifier, (4) train model, (5) model evolution, and (6) model comparison.
Figure 2: Histogram of radiodensity of LIDC-IDRI-1011.
Figure 3: Augmented images using different techniques: (a) original image, (b) random rotation, (c) horizontal flip, (d) vertical flip, (e) translation, and (f) random zoom.
Figure 4: Architecture of the proposed model.
Figure 5: Schematic of the soft-attention block, featuring 3D convolution, softmax, learnable scaler, and concatenation operations.
Figure 6: Visual representation of model classification with soft attention heatmaps for different types of lung nodules. (a) Original CT scans of lung nodules: benign, primary malignant, and metastatic. (b) SA heatmaps showing model focus areas for classification. (c) Final model predictions, confirming accurate identification of each nodule type.
Figure 7: Accuracy vs. epoch graph of the proposed model for 10-fold cross-validation.
Figure 8: Loss vs. epoch graph of the proposed model for 10-fold cross-validation.
Figure 9: Confusion matrix of the proposed model.
Figure 10: ROC curve of the proposed model.
Figure 11: Comparison of GradCAM and soft attention heatmap.
Figure 12: Confusion matrix of the model without soft attention.
Figure 13: Performance of the model without using custom weights.
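">
A compact sketch of the 10-fold cross-validation protocol described above; the random features and logistic-regression classifier are stand-ins for the paper's multi-view CNN and LIDC-IDRI patches.

```python
# Sketch of a 10-fold cross-validation evaluation; the simple classifier and
# random features are placeholders for the MVSA-CNN and nodule patch views.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))          # placeholder per-nodule features
y = rng.integers(0, 3, size=300)        # benign / primary / metastatic labels

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean 10-fold accuracy: {np.mean(scores):.3f}")
```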
10 pages, 236 KiB  
Review
Artificial Intelligence in Uropathology
by Katia Ramos Moreira Leite and Petronio Augusto de Souza Melo
Diagnostics 2024, 14(20), 2279; https://doi.org/10.3390/diagnostics14202279 (registering DOI) - 14 Oct 2024
Abstract
The global population is currently at unprecedented levels, with an estimated 7.8 billion people inhabiting the planet. We are witnessing a rise in cancer cases, attributed to improved control of cardiovascular diseases and a growing elderly population. While this has resulted in an increased workload for pathologists, it also presents an opportunity for advancement. The accurate classification of tumors and identification of prognostic and predictive factors demand specialized expertise and attention. Fortunately, the rapid progression of artificial intelligence (AI) offers new prospects in medicine, particularly in diagnostics such as image and surgical pathology. This article explores the transformative impact of AI in the field of uropathology, with a particular focus on its application in diagnosing, grading, and prognosticating various urological cancers. AI, especially deep learning algorithms, has shown significant potential in improving the accuracy and efficiency of pathology workflows. This comprehensive review is dedicated to providing an insightful overview of the primary data concerning the utilization of AI in diagnosing, predicting prognosis, and determining drug responses for tumors of the urinary tract. By embracing these advancements, we can look forward to improved outcomes and better patient care. Full article
(This article belongs to the Special Issue Urologic Oncology: Biomarkers, Diagnosis, and Management)
20 pages, 16803 KiB  
Article
Construction Jobsite Image Classification Using an Edge Computing Framework
by Gongfan Chen, Abdullah Alsharef and Edward Jaselskis
Sensors 2024, 24(20), 6603; https://doi.org/10.3390/s24206603 (registering DOI) - 13 Oct 2024
Abstract
Image classification is increasingly being utilized on construction sites to automate project monitoring, driven by advancements in reality-capture technologies and artificial intelligence (AI). Deploying real-time applications remains a challenge due to the limited computing resources available on-site, particularly on remote construction sites that have limited telecommunication support or access due to high signal attenuation within a structure. To address this issue, this research proposes an efficient edge-computing-enabled image classification framework for support of real-time construction AI applications. A lightweight binary image classifier was developed using MobileNet transfer learning, followed by a quantization process to reduce model size while maintaining accuracy. A complete edge computing hardware module, including components like Raspberry Pi, Edge TPU, and battery, was assembled, and a multimodal software module (incorporating visual, textual, and audio data) was integrated into the edge computing environment to enable an intelligent image classification system. Two practical case studies involving material classification and safety detection were deployed to demonstrate the effectiveness of the proposed framework. The results demonstrated the developed prototype successfully synchronized multimodal mechanisms and achieved zero latency in differentiating materials and identifying hazardous nails without any internet connectivity. Construction managers can leverage the developed prototype to facilitate centralized management efforts without compromising accuracy or extra investment in computing resources. This research paves the way for edge “intelligence” to be enabled for future construction job sites and promote real-time human-technology interactions without the need for high-speed internet. Full article
(This article belongs to the Special Issue Sensing and Mobile Edge Computing)
Figures:
Figure 1: Co-occurrence of trending research topics.
Figure 2: Co-occurrence map of the implemented device/hardware.
Figure 3: Edge computing implementation framework.
Figure 4: MobileNet architecture and model development process.
Figure 5: Confusion matrix of trained MobileNetV2 on the material classification task.
Figure 6: Material classification prototype implementation in the lab.
Figure 7: Confusion matrix of trained MobileNetV1 on the nail detection task.
Figure 8: Real-time edge computing prototype implementation in the lab environment: Scenario 1 is the detection of a “board with nails”; Scenario 2 is the detection of a “board”.
Figure 9: Experimental setup at a real construction site: Location 1 is an image taken from inside the building under construction showing downtown Raleigh, NC; Location 2 shows an interior room without scattered materials; Location 3 shows scattered building materials; Location 4 shows cluttered construction materials and debris; and Location 5 shows grid and buffer materials on the grounds of the building site.
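">
A minimal sketch of the two steps named in the abstract, MobileNet transfer learning followed by post-training quantization, assuming TensorFlow/Keras; the dataset, classification head, and file name are placeholders rather than the authors' pipeline.

```python
# Minimal sketch: frozen MobileNetV2 backbone + binary head, then post-training
# (dynamic-range) quantization to shrink the model for on-device inference.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                          include_top=False, weights="imagenet")
base.trainable = False                        # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary material classifier
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # hypothetical datasets

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]       # dynamic-range quantization
open("material_classifier.tflite", "wb").write(converter.convert())  # hypothetical file name
```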
19 pages, 714 KiB  
Article
Enhanced COVID-19 Detection from X-ray Images with Convolutional Neural Network and Transfer Learning
by Qanita Bani Baker, Mahmoud Hammad, Mohammed Al-Smadi, Heba Al-Jarrah, Rahaf Al-Hamouri and Sa’ad A. Al-Zboon
J. Imaging 2024, 10(10), 250; https://doi.org/10.3390/jimaging10100250 (registering DOI) - 13 Oct 2024
Abstract
The global spread of Coronavirus (COVID-19) has prompted imperative research into scalable and effective detection methods to curb its outbreak. The early diagnosis of COVID-19 patients has emerged as a pivotal strategy in mitigating the spread of the disease. Automated COVID-19 detection using Chest X-ray (CXR) imaging has significant potential for facilitating large-scale screening and epidemic control efforts. This paper introduces a novel approach that employs state-of-the-art Convolutional Neural Network models (CNNs) for accurate COVID-19 detection. The employed datasets each comprised 15,000 X-ray images. We addressed both binary (Normal vs. Abnormal) and multi-class (Normal, COVID-19, Pneumonia) classification tasks. Comprehensive evaluations were performed by utilizing six distinct CNN-based models (Xception, Inception-V3, ResNet50, VGG19, DenseNet201, and InceptionResNet-V2) for both tasks. As a result, the Xception model demonstrated exceptional performance, achieving 98.13% accuracy, 98.14% precision, 97.65% recall, and a 97.89% F1-score in binary classification, while in multi-classification it yielded 87.73% accuracy, 90.20% precision, 87.73% recall, and an 87.49% F1-score. Moreover, the other utilized models, such as ResNet50, demonstrated competitive performance compared with many recent works. Full article
(This article belongs to the Section Medical Imaging)
Figures:
Figure 1: General framework of the proposed approach.
Figure 2: X-ray images of Normal, COVID-19, and Pneumonia used in binary classification task.
Figure 3: X-ray images of Normal and Abnormal.
Figure 4: Image-augmentation techniques.
Figure 5: Proposed transfer learning-based technique.
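">
A hedged sketch of a comparison of this kind, wrapping each backbone named above with the same frozen-feature classification head in Keras; the input size, head, and class count are assumptions, and data loading and fine-tuning are omitted.

```python
# Illustrative comparison loop: each pretrained backbone is given an identical
# small classification head; fitting and evaluation on the CXR data are omitted.
import tensorflow as tf

BACKBONES = {
    "Xception": tf.keras.applications.Xception,
    "InceptionV3": tf.keras.applications.InceptionV3,
    "ResNet50": tf.keras.applications.ResNet50,
    "VGG19": tf.keras.applications.VGG19,
    "DenseNet201": tf.keras.applications.DenseNet201,
    "InceptionResNetV2": tf.keras.applications.InceptionResNetV2,
}

def build(name, n_classes=3, input_shape=(224, 224, 3)):
    base = BACKBONES[name](include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False
    return tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

models = {name: build(name) for name in BACKBONES}   # then fit/evaluate each model
```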
17 pages, 5155 KiB  
Article
Developing a New Method to Rapidly Map Eucalyptus Distribution in Subtropical Regions Using Sentinel-2 Imagery
by Chunxian Tang, Xiandie Jiang, Guiying Li and Dengsheng Lu
Forests 2024, 15(10), 1799; https://doi.org/10.3390/f15101799 (registering DOI) - 13 Oct 2024
Abstract
Eucalyptus plantations with fast growth and short rotation play an important role in improving economic conditions for local farmers and governments. It is necessary to map and update eucalyptus distribution in a timely manner, but to date, there is a lack of suitable approaches for quickly mapping its spatial distribution in a large area. This research aims to develop a uniform procedure to map eucalyptus distribution at a regional scale using the Sentinel-2 imagery on the Google Earth Engine (GEE) platform. Different seasonal Sentinel-2 images were first examined, and key vegetation indices from the selected seasonal images were identified using random forest and Pearson correlation analysis. The selected key vegetation indices were then normalized and summed to produce new indices for mapping eucalyptus distribution based on the calculated best cutoff values using the ROC (Receiver Operating Characteristic) curve. The uniform procedure was tested in both experimental and test sites and then applied to the entire Fujian Province. The results indicated that the best season to distinguish eucalyptus forests from other forest types was winter. The composite indices for eucalyptus–coniferous forest separation (CIEC) and for eucalyptus–broadleaf forest separation (CIEB), which were synthesized from the enhanced vegetation index (EVI), plant senescing reflectance index (PSRI), shortwave infrared water stress index (SIWSI), and MERIS terrestrial chlorophyll index (MTCI), can effectively differentiate eucalyptus from other forest types. The proposed procedure with the best cutoff values (0.58 for CIEC and 1.29 for CIEB) achieved accuracies of above 90% in all study sites. The eucalyptus classification accuracies in Fujian Province, with a producer’s accuracy of 91%, user’s accuracy of 97%, and overall accuracy of 94%, demonstrate the strong robustness and transferability of this proposed procedure. This research provided a new insight into quickly mapping eucalyptus distribution in subtropical regions. However, more research is still needed to explore the robustness and transferability of this proposed method in tropical regions or in other subtropical regions with different environmental conditions. Full article
Figures:
Figure 1: The locations of two typical sites in Fujian Province (a): the experimental site was located in Minhou and Minqing Counties with sparse eucalyptus distribution (b), and the test site was located in Yunxiao County with extensive eucalyptus distribution (c). Both (b,c) were false color composites based on Sentinel-2A imagery.
Figure 2: Framework of designing a uniform procedure to map eucalyptus distribution at a regional scale.
Figure 3: The strategy of extracting eucalyptus from other tree species ((a)—masking out non-eucalyptus using the selected vegetation indices; (b)—development of new indices; (c)—determination of thresholds for separating eucalyptus from other forests).
Figure 4: Spectral curves of eucalyptus and other forest types in different seasons. (a–d) Spectral curves of eucalyptus and other forest types in spring, summer, autumn, and winter, respectively. The spring image was acquired on 8 April 2022; the summer one on 22 July 2022; the autumn one on 25 September 2022; and the winter one on 22 December 2022. The grey semitransparent boxes represent bands with significant differences in reflectance values.
Figure 5: The importance of potential indices in the experimental site.
Figure 6: The correlation coefficients between vegetation indices: (a) eucalyptus and coniferous forests; (b) eucalyptus and broadleaf forests. Note: the p-level for the coefficients between each index was less than 0.01.
Figure 7: ROC curves and AUC values under the combinations with different numbers of vegetation indices for differentiating eucalyptus from coniferous forests (a) and from broadleaf forests (b).
Figure 8: Spatial distribution of eucalyptus in the experimental site (a) and test site (b), (c,d) represent the local distribution of eucalyptus in experimental site. The green color represents eucalyptus plantations.
Figure 9: Spatial distribution of eucalyptus coverage (percent) in Fujian Province (a), the percent values in the legend represent the proportion of eucalyptus within a 1 km × 1 km grid; (b–d) represent different proportions of eucalyptus plantations.
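">
A small sketch of how a best cutoff for a composite index such as CIEC can be read off an ROC curve using Youden's J statistic; the labels and index values below are invented for illustration.

```python
# Sketch: pick the best cutoff for a composite index from the ROC curve by
# maximizing Youden's J (sensitivity + specificity - 1); values are made up.
import numpy as np
from sklearn.metrics import roc_curve, auc

labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])    # 1 = eucalyptus, 0 = coniferous forest
index  = np.array([0.81, 0.66, 0.59, 0.40, 0.52, 0.31, 0.74, 0.45])  # normalized + summed VIs

fpr, tpr, thresholds = roc_curve(labels, index)
best_cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC={auc(fpr, tpr):.3f}, best cutoff={best_cutoff:.2f}")
```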
25 pages, 4229 KiB  
Article
Convolutional Neural Network Incorporating Multiple Attention Mechanisms for MRI Classification of Lumbar Spinal Stenosis
by Juncai Lin, Honglai Zhang and Hongcai Shang
Bioengineering 2024, 11(10), 1021; https://doi.org/10.3390/bioengineering11101021 (registering DOI) - 13 Oct 2024
Abstract
Background: Lumbar spinal stenosis (LSS) is a common cause of low back pain, especially in the elderly, and accurate diagnosis is critical for effective treatment. However, manual diagnosis using MRI images is time consuming and subjective, leading to a need for automated methods. Objective: This study aims to develop a convolutional neural network (CNN)-based deep learning model integrated with multiple attention mechanisms to improve the accuracy and robustness of LSS classification via MRI images. Methods: The proposed model is trained on a standardized MRI dataset sourced from multiple institutions, encompassing various lumbar degenerative conditions. During preprocessing, techniques such as image normalization and data augmentation are employed to enhance the model’s performance. The network incorporates a Multi-Headed Self-Attention Module, a Slot Attention Module, and a Channel and Spatial Attention Module, each contributing to better feature extraction and classification. Results: The model achieved 95.2% classification accuracy, 94.7% precision, 94.3% recall, and 94.5% F1 score on the validation set. Ablation experiments confirmed the significant impact of the attention mechanisms in improving the model’s classification capabilities. Conclusion: The integration of multiple attention mechanisms enhances the model’s ability to accurately classify LSS in MRI images, demonstrating its potential as a tool for automated diagnosis. This study paves the way for future research in applying attention mechanisms to the automated diagnosis of lumbar spinal stenosis and other complex spinal conditions. Full article
(This article belongs to the Special Issue Artificial Intelligence in Healthcare)
Figures:
Figure 1: Workflow of the MRI image classification system for lumbar spinal stenosis.
Figure 2: Workflow of dataset preprocessing.
Figure 3: Overall architecture of the proposed model. The model consists of three major parts: the head, body, and tail modules. The head module includes convolutional layers (Conv), Batch Normalization (BN), and Enhanced Inception Modules (EIM) for feature extraction, followed by Max-Pooling (Max-Pool) layers to downsample the feature maps. The body module incorporates four attention mechanisms: the Channel Attention Module (CAM), Spatial Attention Module (SPAM), Multi-Head Self-Attention Module (MHSAM), and Slot Attention Module (SAM), which collectively enhance feature selection and improve classification performance. Additionally, the Convolutional Block Attention Module (CBAM) combines the Channel Attention Module and the Spatial Attention Module to refine features in both channel and spatial dimensions. The tail module applies Global Average Pooling (GAP), fully connected (FC) layers, Layer Normalization (LN), and Dropout to refine the final classification output.
Figure 4: Structure of the Enhanced Inception Module. This module consists of multiple parallel paths for extracting features at various scales. Depth Separable Convolutions (DSC) of varying kernel sizes (1 × 1, 3 × 3, 5 × 5) are applied in parallel, along with a Max-Pooling (3 × 3) operation. The results from all branches are concatenated (Filter concatenation) before being passed through a 1 × 1 convolution, followed by Batch Normalization (BN) and a ReLU activation function. This structure allows for efficient multi-scale feature extraction while reducing computational complexity through depth-wise separable convolutions.
Figure 5: Structure of the Channel Attention Module (CAM). The CAM begins by applying Global Average Pooling (GAP) and Global Max Pooling (GMP) operations to the input feature map to capture channel-wise statistics. These pooled feature maps are then processed independently through convolutional layers followed by a ReLU activation function. The outputs from both pathways are summed and passed through another convolutional layer to generate the channel attention weights, which are multiplied with the original input feature map to refine it along the channel dimension, highlighting the most informative channels.
Figure 6: Structure of the Spatial Attention Module (SPAM). The SPAM module first computes the average and max-pooling across the channel dimension of the input feature map. The resulting two spatial feature maps are concatenated along the channel axis, forming a combined representation of spatial information. This concatenated feature map is then passed through a convolutional layer followed by a sigmoid activation to generate spatial attention weights. These weights are multiplied with the input feature map, focusing the model’s attention on the most relevant spatial regions, thus improving feature localization for subsequent layers.
Figure 7: Structure of the MHSAM. The MHSAM employs multi-head self-attention to enhance the model’s ability to focus on different aspects of the input feature representation. The input feature map is first linearly projected into query (Q), key (K), and value (V) matrices. Each of these matrices is split into multiple heads, which allows the model to attend to information at different positions simultaneously. The scaled dot-product attention (SDPA) is computed for each head, capturing the relationships between different spatial locations in the feature map. Finally, the outputs from all heads are concatenated and transformed through a fully connected (FC) layer to generate the refined feature representation, which is passed on to subsequent layers for further processing.
Figure 8: Comparison of the proposed model with other models.
Figure 9: Confusion matrix for the DenseNet201 model. The matrix illustrates the classification performance of the DenseNet201 model, with 0 denoting normal or mild cases and 1 indicating severe cases. Furthermore, darker colors represent higher accuracy for the corresponding class.
Figure 10: Confusion matrix for the proposed model. This matrix presents the classification outcomes for the proposed model, with 0 representing normal or mild cases and 1 denoting severe cases. Compared to the DenseNet201 model (Figure 9), the proposed model demonstrates improved accuracy, particularly in reducing false positives for severe cases, suggesting its potential for more reliable clinical application. Additionally, darker colors represent a higher level of accuracy in classification.
Figure 11: ROC curves for DenseNet201 and proposed model across conditions.
Figure 12: Misclassified MRI images in lumbar spinal stenosis diagnosis: severe cases incorrectly labeled as normal/mild.
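">
A minimal PyTorch sketch of the channel-plus-spatial attention idea behind the CAM/SPAM (CBAM-style) blocks described in the figure captions; layer sizes and the reduction ratio are illustrative choices, not the paper's configuration.

```python
# Toy CBAM-style block: channel attention from pooled statistics, then spatial
# attention from channel-pooled maps. Dimensions are illustrative only.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(                       # shared MLP for both pooled statistics
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention: weight informative channels.
        ca = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        x = x * ca
        # Spatial attention: weight informative locations.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))

out = ChannelSpatialAttention(64)(torch.randn(2, 64, 32, 32))   # same shape as the input
```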
9 pages, 1460 KiB  
Article
Atmospheric Gravity Wave Detection in Low-Light Images: A Transfer Learning Approach
by Beimin Xiao, Shensen Hu, Weihua Ai and Yi Li
Electronics 2024, 13(20), 4030; https://doi.org/10.3390/electronics13204030 (registering DOI) - 13 Oct 2024
Abstract
Atmospheric gravity waves, as a key fluctuation in the atmosphere, have a significant impact on climate change and weather processes. Traditional observation methods rely on manually identifying and analyzing gravity wave stripe features from satellite images, resulting in a limited number of gravity wave events for parameter analysis and excitation mechanism studies, which restricts further related research. In this study, we focus on the gravity wave events in the South China Sea region and utilize a one-year low-light satellite dataset processed with wavelet transform noise reduction and light pixel replacement. Furthermore, transfer learning is employed to adapt the Inception V3 model to the classification task of a small-sample dataset, performing the automatic identification of gravity waves in low-light images. By employing sliding window cutting and data enhancement techniques, we further expand the dataset and enhance the generalization ability of the model. We compare the results of transfer learning detection based on the Inception V3 model with the YOLO v10 model, showing that the results of the Inception V3 model are greatly superior to those of the YOLO v10 model. The accuracy on the test dataset is 88.2%. Full article
(This article belongs to the Special Issue Artificial Intelligence in Image and Video Processing)
Figures:
Figure 1: Schematic diagram of transfer learning structure.
Figure 2: Flowchart of the gravity wave detection algorithm for low-light images based on deep learning.
Figure 3: Training process of the Inception V3 model.
Figure 4: Performance of YOLO v10 model.
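">
A short sketch of the sliding-window cutting step used to expand a small image dataset, as described in the abstract; the window size and stride are assumptions, and the Inception V3 classifier itself is omitted.

```python
# Sketch of sliding-window patch extraction for dataset expansion; the 256-pixel
# window and 128-pixel stride are assumptions, not the paper's values.
import numpy as np

def sliding_windows(image, window=256, stride=128):
    """Yield square patches cut from a 2-D low-light image array."""
    h, w = image.shape[:2]
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            yield image[top:top + window, left:left + window]

scene = np.random.rand(768, 1024)          # placeholder low-light satellite frame
patches = list(sliding_windows(scene))
print(len(patches), patches[0].shape)      # each patch would be classified for gravity-wave stripes
```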
22 pages, 3301 KiB  
Article
Task-Level Customized Pruning for Image Classification on Edge Devices
by Yanting Wang, Feng Li, Han Zhang and Bojie Shi
Electronics 2024, 13(20), 4029; https://doi.org/10.3390/electronics13204029 (registering DOI) - 13 Oct 2024
Abstract
Convolutional neural networks (CNNs) are widely utilized in image classification. Nevertheless, CNNs typically require substantial computational resources, posing challenges for deployment on resource-constrained edge devices and limiting the spread of AI-driven applications. While various pruning approaches have been proposed to mitigate this issue, they often overlook a critical fact that edge devices are typically tasked with handling only a subset of classes rather than the entire set. Moreover, the specific combinations of subcategories that each device must discern vary, highlighting the need for fine-grained task-specific adjustments. Unfortunately, these oversights result in pruned models that still contain unnecessary category redundancies, thereby impeding the potential for further model optimization and lightweight design. To bridge this gap, we propose a task-level customized pruning (TLCP) method via utilizing task-level information, i.e., class combination information relevant to edge devices. Specifically, TLCP first introduces channel control gates to assess the importance of each convolutional channel for individual classes. These class-level control gates are then aggregated through linear combinations, resulting in a pruned model customized to the specific tasks of edge devices. Experiments on various customized tasks demonstrate that TLCP can significantly reduce the number of parameters, by up to 33.9% on CIFAR-10 and 14.0% on CIFAR-100, compared to other baseline methods, while maintaining almost the same inference accuracy. Full article
(This article belongs to the Section Artificial Intelligence)
Figures:
Figure 1: An example of the importance of neurons for different classes.
Figure 2: The framework contains two phases, i.e., mapping of image information to channel control gates (phase one) and class-level control gate fusions (phase two). For each input image, TLCP introduces a control gate associated with each layer’s output channel to quantify contributions of different channels. The mapping process from image information to control gate is completed in phase one. In phase two, we combine control gates corresponding to the targeted classes using a linear fusion model. For task-aware customized control gates, we perform pruning based on the gate value.
Figure 3: Control gates are multiplied by the layer’s output, a smaller gate value means the associated channel contributes less to the final model prediction; removing such channels has little effect on the model’s inference performance.
Figure 4: Comparison of control gate values across all convolutional filters in the 3rd (a) and 7th (b) convolutional layers of VGG-16 on CIFAR-10 for all-class input. Comparison of control gate values across all convolutional filters in the 3rd (c) and 7th (d) convolutional layers of VGG-16 on CIFAR-10 for class 3 input. Bright and dark colors indicate high and low gate values, respectively.
Figure 5: An example of three-class fusion. Given three targeted classes, we introduce a coefficient c_i for each Y_v and adopt a linear fusion model to merge different class-level control gates.
Figure 6: Pruning ratio comparison under different numbers of targeted classes on (a) CIFAR-10 and (b) CIFAR-100.
Figure 7: Comparison of TLCP based on PSO (Algorithm 1) and GA (Algorithm 2) under different class combinations.
Figure 8: Effect of fine-tuning under different class combinations when pruning ratio is 0.82.
Figure 9: Pruning ratios versus the number of targeted classes under accuracy drop within 1% (blue) and 5% (green) for (a) VGG-16 on CIFAR-10, (b) VGG-16 on CIFAR-100, (c) ResNet-50 on ImageNet, and (d) ResNet-18 on ImageNet.
Figure 10: Accuracy loss versus pruning ratio when the numbers of targeted classes are 3 (a), 5 (b), and 8 (c).
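">
A toy PyTorch sketch of the channel-control-gate idea: a learnable gate scales each output channel, class-level gates are merged with a linear fusion, and low-gate channels are pruned. The fusion coefficients, gate values, and threshold are invented for illustration, not the TLCP settings.

```python
# Toy sketch: per-channel control gates, linear fusion of class-level gates,
# and pruning of channels whose fused gate falls below a threshold.
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.gate = nn.Parameter(torch.ones(out_ch))     # one control gate per output channel

    def forward(self, x):
        return self.conv(x) * self.gate.view(1, -1, 1, 1)

layer = GatedConv(3, 16)
# Suppose class-level gates were learned for classes 2, 5, and 7; fuse them linearly.
class_gates = {2: torch.rand(16), 5: torch.rand(16), 7: torch.rand(16)}   # placeholder values
coeffs = {2: 0.4, 5: 0.3, 7: 0.3}                                         # illustrative weights
fused = sum(coeffs[c] * g for c, g in class_gates.items())

keep = fused > 0.3                                       # prune channels with small fused gates
print(f"keeping {int(keep.sum())} of 16 channels for this customized task")
```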
11 pages, 1513 KiB  
Article
Identification of Phospholipids Relevant to Cancer Tissue Using Differential Ion Mobility Spectrometry
by Patrik Sioris, Meri Mäkelä, Anton Kontunen, Markus Karjalainen, Antti Vehkaoja, Niku Oksala and Antti Roine
Int. J. Mol. Sci. 2024, 25(20), 11002; https://doi.org/10.3390/ijms252011002 (registering DOI) - 13 Oct 2024
Abstract
Phospholipids are the main building components of cell membranes and are also used for cell signaling and as energy storages. Cancer cells alter their lipid metabolism, which ultimately leads to an increase in phospholipids in cancer tissue. Surgical energy instruments use electrical or vibrational energy to heat tissues, which causes intra- and extracellular water to expand rapidly and degrade cell structures, bursting the cells, which causes the formation of a tissue aerosol or smoke depending on the amount of energy used. This gas phase analyte can then be analyzed via gas analysis methods. Differential mobility spectrometry (DMS) is a method that can be used to differentiate malignant tissue from benign tissues in real time via the analysis of surgical smoke produced by energy instruments. Previously, the DMS identification of cancer tissue was based on a ‘black box method’ by differentiating the 2D dispersion plots of samples. This study sets out to find datapoints from the DMS dispersion plots that represent relevant target molecules. We studied the ability of DMS to differentiate three subclasses of phospholipids (phosphatidylcholine, phosphatidylinositol, and phosphatidylethanolamine) from a control sample using a bovine skeletal muscle matrix with a 5 mg addition of each phospholipid subclass to the sample matrix. We trained binary classifiers using linear discriminant analysis (LDA) and support vector machines (SVM) for sample classification. We were able to identify phosphatidylcholine, -inositol, and -ethanolamine with SVM binary classification accuracies of 91%, 73%, and 66% and with LDA binary classification accuracies of 82%, 74%, and 72%, respectively. Phosphatidylcholine was detected with a reliable classification accuracy, but ion separation setups should be adjusted in future studies to reliably detect other relevant phospholipids such as phosphatidylinositol and phosphatidylethanolamine and improve DMS as a microanalysis method and identify other phospholipids relevant to cancer tissue. Full article
Figures:
Figure 1: Four-class classification of all of the phospholipid classes, including the control class, using SVM. PC = phosphatidylcholine, PI = phosphatidylinositol, PE = phosphatidylethanolamine.
Figure 2: KS test and statistically significant regions of phosphatidylcholine samples.
Figure 3: Measurement amounts (n) and excluded measurements.
Figure 4: Measurements of PL samples with a diathermy blade. PC = phosphatidylcholine; PI = phosphatidylinositol; PE = phosphatidylethanolamine.
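">
A brief sketch of the LDA-versus-SVM binary classification comparison using scikit-learn; the random feature matrix stands in for the datapoints extracted from DMS dispersion plots.

```python
# Sketch: compare LDA and SVM binary classifiers with cross-validated accuracy;
# features and labels are simulated stand-ins for the DMS measurements.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 40))          # placeholder dispersion-plot datapoints
y = rng.integers(0, 2, size=120)        # 1 = phospholipid added, 0 = control matrix

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy = {acc:.2f}")
```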
28 pages, 4011 KiB  
Article
Advanced Deep Learning Fusion Model for Early Multi-Classification of Lung and Colon Cancer Using Histopathological Images
by A. A. Abd El-Aziz, Mahmood A. Mahmood and Sameh Abd El-Ghany
Diagnostics 2024, 14(20), 2274; https://doi.org/10.3390/diagnostics14202274 (registering DOI) - 12 Oct 2024
Abstract
Background: In recent years, the healthcare field has experienced significant advancements. New diagnostic techniques, treatments, and insights into the causes of various diseases have emerged. Despite these progressions, cancer remains a major concern. It is a widespread illness affecting individuals of all ages and leads to one out of every six deaths. Lung and colon cancer alone account for nearly two million fatalities. Though it is rare for lung and colon cancers to co-occur, the spread of cancer cells between these two areas—known as metastasis—is notably high. Early detection of cancer greatly increases survival rates. Currently, histopathological image (HI) diagnosis and appropriate treatment are key methods for reducing cancer mortality and enhancing survival rates. Digital image processing (DIP) and deep learning (DL) algorithms can be employed to analyze the HIs of five different types of lung and colon tissues. Methods: Therefore, this paper proposes a refined DL model that integrates feature fusion for the multi-classification of lung and colon cancers. The proposed model incorporates three DL architectures: ResNet-101V2, NASNetMobile, and EfficientNet-B0. Each model has limitations concerning variations in the shape and texture of input images. To address this, the proposed model utilizes a concatenate layer to merge the pre-trained individual feature vectors from ResNet-101V2, NASNetMobile, and EfficientNet-B0 into a single feature vector, which is then fine-tuned. As a result, the proposed DL model achieves high success in multi-classification by leveraging the strengths of all three models to enhance overall accuracy. This model aims to assist pathologists in the early detection of lung and colon cancer with reduced effort, time, and cost. The proposed DL model was evaluated using the LC25000 dataset, which contains colon and lung HIs. The dataset was pre-processed using resizing and normalization techniques. Results: The model was tested and compared with recent DL models, achieving impressive results: 99.8% for precision, 99.8% for recall, 99.8% for F1-score, 99.96% for specificity, and 99.94% for accuracy. Conclusions: Thus, the proposed DL model demonstrates exceptional performance across all classification categories. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
Figures:
Figure 1: 40× Tissue samples of the LC25000 dataset: (a) NSCLC, (b) SCLC, (c) benign lung tissue, (d) colon cancer tissue, and (e) benign colon tissue.
Figure 2: The steps of the proposed DL model.
Figure 3: The overall model architecture.
Figure 4: The ResNet-101V2’s architecture.
Figure 5: The architecture of the reduction cell and NASNet normal.
Figure 6: The architecture of EfficientNet-B0.
Figure 7: Training and validation loss of the three CNN models and the proposed fusion model.
Figure 8: Training and validation accuracy of the three CNN models and the proposed fusion model.
Figure 9: The confusion matrix for the three CNN models and the proposed fusion model on the test set.
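">
A minimal Keras sketch of the feature-fusion idea, concatenating pooled features from the three named backbones into a single vector ahead of the classifier; the input size and dense head are assumptions rather than the paper's exact architecture.

```python
# Sketch: fuse features from three frozen pretrained backbones via a Concatenate
# layer before a five-class head (lung/colon tissue classes). Sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(224, 224, 3))

def branch(app):
    base = app(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base.trainable = False
    return layers.GlobalAveragePooling2D()(base(inp))

features = [branch(tf.keras.applications.ResNet101V2),
            branch(tf.keras.applications.NASNetMobile),
            branch(tf.keras.applications.EfficientNetB0)]

x = layers.Concatenate()(features)                  # single fused feature vector
out = layers.Dense(5, activation="softmax")(x)
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```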
20 pages, 4520 KiB  
Article
Employing Different Algorithms of Lightweight Convolutional Neural Network Models in Image Distortion Classification
by Ismail Taha Ahmed, Falah Amer Abdulazeez and Baraa Tareq Hammad
Computers 2024, 13(10), 268; https://doi.org/10.3390/computers13100268 (registering DOI) - 12 Oct 2024
Abstract
The majority of applications use automatic image recognition technologies to carry out a range of tasks. Therefore, it is crucial to identify and classify image distortions to improve image quality. Despite efforts in this area, there are still many challenges in accurately and reliably classifying distorted images. In this paper, we offer a comprehensive analysis of models of both non-lightweight and lightweight deep convolutional neural networks (CNNs) for the classification of distorted images. Subsequently, an effective method is proposed to enhance the overall performance of distortion image classification. This method involves selecting features from the pretrained models’ capabilities and using a strong classifier. The experiments utilized the kadid10k dataset to assess the effectiveness of the results. The K-nearest neighbor (KNN) classifier showed better performance than the naïve classifier in terms of accuracy, precision, error rate, recall and F1 score. Additionally, SqueezeNet outperformed other deep CNN models, both lightweight and non-lightweight, across every evaluation metric. The experimental results demonstrate that combining SqueezeNet with KNN can effectively and accurately classify distorted images into the correct categories. The proposed SqueezeNet-KNN method achieved an accuracy rate of 89%. As detailed in the results section, the proposed method outperforms state-of-the-art methods in accuracy, precision, error, recall, and F1 score measures. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Figures:
Figure 1: Common Distortion Types. (a) Original Image; (b) Blur; (c) Noise; (d) Sharpness; (e) Contrast Change; (f) Compression.
Figure 2: The Taxonomy of Distortion Image Classification Techniques.
Figure 3: Comparative Study and the Proposed Methodology.
Figure 4: Mechanism of Naïve Bayes Classifier.
Figure 5: Experimentation Properties Description.
Figure 6: Various Pristine Images. Samples Collected from the KADID-10k Dataset.
Figure 7: The Dispersion of Distortion across Classes inside the KADID-10k Dataset.
Figure 8: A Detailed Description of Every Term [28].
Figure 9: Comparison of Performance of Current State-of-the-Art Methods [7,8,9,10,11,13,16].
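">
A hedged sketch of a SqueezeNet-plus-KNN pipeline, pairing pretrained SqueezeNet features with a scikit-learn K-nearest-neighbor classifier; the tensors, labels, and k value are placeholders rather than the KADID-10k setup.

```python
# Sketch: pretrained SqueezeNet as a deep-feature extractor, KNN as the classifier.
import torch
import torch.nn.functional as F
import torchvision.models as models
from sklearn.neighbors import KNeighborsClassifier

squeezenet = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
squeezenet.eval()

def features(batch):                                    # batch: (N, 3, 224, 224) tensor
    with torch.no_grad():
        f = squeezenet.features(batch)                  # convolutional feature maps
        return torch.flatten(F.adaptive_avg_pool2d(f, 1), 1).numpy()   # (N, 512) vectors

X_train = features(torch.randn(40, 3, 224, 224))        # placeholder distorted images
y_train = [i % 5 for i in range(40)]                    # placeholder distortion classes
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(knn.predict(features(torch.randn(2, 3, 224, 224))))
```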
25 pages, 16714 KiB  
Article
An Innovative Tool for Monitoring Mangrove Forest Dynamics in Cuba Using Remote Sensing and WebGIS Technologies: SIGMEM
by Alexey Valero-Jorge, Raúl González-Lozano, Roberto González-De Zayas, Felipe Matos-Pupo, Rogert Sorí and Milica Stojanovic
Remote Sens. 2024, 16(20), 3802; https://doi.org/10.3390/rs16203802 (registering DOI) - 12 Oct 2024
Abstract
The main objective of this work was to develop a viewer with web output, through which the changes experienced by the mangroves of the Gran Humedal del Norte de Ciego de Avila (GHNCA) can be evaluated from remote sensors, contributing to the understanding of the spatiotemporal variability of their vegetative dynamics. The achievement of this objective is supported by the use of open-source technologies such as MapStore, GeoServer and Django, as well as Google Earth Engine, which combine to offer a robust and technologically independent solution to the problem. In this context, it was decided to adopt an action model aimed at automating the workflow steps related to data preprocessing, downloading, and publishing. A visualizer with web output (Geospatial System for Monitoring Mangrove Ecosystems or SIGMEM) is developed for the first time, evaluating changes in an area of central Cuba from different vegetation indices. The evaluation of the machine learning classifiers Random Forest and Naive Bayes for the automated mapping of mangroves highlighted the ability of Random Forest to discriminate between areas occupied by mangroves and other coverages with an Overall Accuracy (OA) of 94.11%, surpassing the 89.85% of Naive Bayes. The estimated net change based on the year 2020 of the areas determined during the classification process showed a decrease of 5138.17 ha in the year 2023 and 2831.76 ha in the year 2022. This tool will be fundamental for researchers, decision makers, and students, contributing to new research proposals and sustainable management of mangroves in Cuba and the Caribbean. Full article
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)
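A rough illustration of the Random Forest versus Naive Bayes comparison using scikit-learn (the paper itself relies on Google Earth Engine classifiers); the per-pixel features and labels here are simulated.

```python
# Sketch: Random Forest vs. Naive Bayes overall accuracy on simulated pixel features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))            # e.g., six spectral bands / indices per pixel
y = rng.integers(0, 2, size=1000)         # 1 = mangrove, 0 = other land cover

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("Naive Bayes", GaussianNB())]:
    oa = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: overall accuracy = {oa:.3f}")
```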
21 pages, 6286 KiB  
Article
Classification of Infant Crying Sounds Using SE-ResNet-Transformer
by Feng Li, Chenxi Cui and Yashi Hu
Sensors 2024, 24(20), 6575; https://doi.org/10.3390/s24206575 (registering DOI) - 12 Oct 2024
Abstract
Recently, emotion analysis has played an important role in the field of artificial intelligence, particularly in the study of speech emotion analysis, which can help understand one of the most direct ways of human emotional communication—speech. This study focuses on the emotion analysis of infant crying. Within cries lies a variety of information, including hunger, pain, and discomfort. This paper proposes an improved classification model using ResNet and transformer. It utilizes modified Mel-frequency cepstral coefficient (MFCC) features obtained through feature engineering from infant cries and integrates SE attention mechanism modules into residual blocks to enhance the model’s ability to adjust channel weights. The proposed method achieved a 93% accuracy rate in experiments, offering advantages of shorter training time and higher accuracy compared to other traditional models. It provides an efficient and stable solution for infant cry classification. Full article
(This article belongs to the Section Intelligent Sensors)
Figures:
Figure 1: (a) Feature map of pain. (b) Feature map of hunger. (c) Feature map of uncomfortable.
Figure 2: Structure of residual block.
Figure 3: Structure of SENet.
Figure 4: Structure of the encoder part of the transformer.
Figure 5: Structure of original residual module and SE-ResNet module.
Figure 6: The workflow of the proposed method in this study.
Figure 7: Performance of different features of hungry cries.
Figure 8: Performance of different features of pain cries.
Figure 9: Performance of different features of uncomfortable cries.
Figure 10: (a) Proposed method; (b) CNN-Transformer; (c) CNN; (d) GRU; (e) LSTM.
Figure 11: (a) The reduction is 16; (b) the reduction is 32; (c) the reduction is 64.
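">
A short sketch of MFCC feature extraction with librosa; the file path, sampling rate, coefficient count, and the delta stacking shown here are assumptions, since the paper's exact "modified MFCC" engineering is not spelled out in the abstract.

```python
# Sketch: MFCC extraction from a cry recording, plus delta features as one common
# way to enrich MFCCs before a CNN/attention classifier. Path and settings assumed.
import librosa
import numpy as np

y, sr = librosa.load("cry_sample.wav", sr=16000)          # hypothetical infant-cry clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)         # (40, frames) feature matrix

features = np.concatenate([mfcc,
                           librosa.feature.delta(mfcc),
                           librosa.feature.delta(mfcc, order=2)], axis=0)
print(features.shape)                                      # stacked MFCC + delta + delta-delta
```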
20 pages, 610 KiB  
Article
Comparative Study of Computational Methods for Classifying Red Blood Cell Elasticity
by Hynek Bachratý, Peter Novotný, Monika Smiešková, Katarína Bachratá and Samuel Molčan
Appl. Sci. 2024, 14(20), 9315; https://doi.org/10.3390/app14209315 (registering DOI) - 12 Oct 2024
Abstract
The elasticity of red blood cells (RBCs) is crucial for their ability to fulfill their role in the blood. Decreased RBC deformability is associated with various pathological conditions. This study explores the application of machine learning to predict the elasticity of RBCs using both image data and detailed physical measurements derived from simulations. We simulated RBC behavior in a microfluidic channel. The simulation results provided the basis for generating data on which we applied machine learning techniques. We analyzed the surface-area-to-volume ratio of RBCs as an indicator of elasticity, employing statistical methods to differentiate between healthy and diseased RBCs. The Kolmogorov–Smirnov test confirmed significant differences between healthy and diseased RBCs, though distinctions among different types of diseased RBCs were less clear. We used decision tree models, including random forests and gradient boosting, to classify RBC elasticity based on predictors derived from simulation data. The comparison of the results with our previous work on deep neural networks shows improved classification accuracy in some scenarios. The study highlights the potential of machine learning to automate and enhance the analysis of RBC elasticity, with implications for clinical diagnostics. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) Technologies in Biomedicine)
Figures:
Figure 1: Summary of the framework. Blue and magenta arrows represent actions performed with training and evaluation data, respectively. Blue boxes represent input/output; orange boxes represent actions performed with the data.
Figure 2: On the left, microfluidic channel topology is shown. Only the basic part with five obstacles (depicted with blue colour) was simulated. The figure on the right shows the scheme of the simulation box with the dimensions of the individual parts.
Figure 3: Time series plot of surface-area-to-volume (SA:V) ratio for a single healthy RBC.
Figure 4: Minimum, maximum, and average SA:V ratio for nine healthy RBCs. Cells are sorted by average.
Figure 5: Average, minimum, maximum, and variance of SA:V ratio for all 4 cell types. Cells of each type are sorted by the observed characteristic for each plot.
Figure 6: Variance of surface area and volume for cell types 0 and 3.
Figure 7: Dependence of classification results on S when predicting 4 classes.
Figure 8: Dependence of classification results on S when predicting 2 classes.
Figure 9: Dependence of classification results on predictor set when predicting 4 classes.
Figure 10: Dependence of classification results on predictor set when predicting 2 classes.
Figure 11: Importance of predictors from the 6th set.
Figure 12: Importance of predictors from the 4th set.
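">
A compact sketch of the two analysis steps described above: a Kolmogorov-Smirnov test comparing the SA:V ratio of healthy and diseased cells, and a gradient-boosting classifier on simulation-derived predictors; all numbers are synthetic placeholders.

```python
# Sketch: KS test on SA:V distributions, then a tree-ensemble classifier on
# per-cell predictors. All data here are synthetic placeholders.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
healthy_sav  = rng.normal(1.60, 0.05, 200)      # placeholder SA:V samples, healthy RBCs
diseased_sav = rng.normal(1.45, 0.07, 200)      # placeholder SA:V samples, diseased RBCs
stat, p = ks_2samp(healthy_sav, diseased_sav)
print(f"KS statistic={stat:.3f}, p-value={p:.2e}")

X = rng.normal(size=(400, 12))                  # per-cell predictors (min/max/mean/variance, ...)
y = rng.integers(0, 2, size=400)                # 0 = healthy, 1 = reduced elasticity
acc = cross_val_score(GradientBoostingClassifier(), X, y, cv=5).mean()
print(f"mean cross-validated accuracy: {acc:.3f}")
```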