Search Results (22,052)

Search Parameters:
Keywords = classification methods

17 pages, 1332 KiB  
Article
Development of a Predictive Model for Carbon Dioxide Corrosion Rate and Severity Based on Machine Learning Algorithms
by Zhenzhen Dong, Min Zhang, Weirong Li, Fenggang Wen, Guoqing Dong, Lu Zou and Yongqiang Zhang
Materials 2024, 17(16), 4046; https://doi.org/10.3390/ma17164046 (registering DOI) - 14 Aug 2024
Abstract
Carbon dioxide corrosion is a pervasive issue in pipelines and the petroleum industry, posing substantial risks to equipment safety and longevity. Accurate prediction of corrosion rates and severity is essential for effective material selection and equipment maintenance. This paper begins by addressing the limitations of traditional corrosion prediction methods and explores the application of machine learning algorithms in CO2 corrosion prediction. Conventional models often fail to capture the complex interactions among multiple factors, resulting in suboptimal prediction accuracy, limited adaptability, and poor generalization. To overcome these limitations, this study systematically organized and analyzed the data, performed a correlation analysis of the data features, and examined the factors influencing corrosion. Subsequently, prediction models were developed using six algorithms: Random Forest (RF), K-Nearest Neighbors (KNN), Gradient Boosting Decision Tree (GBDT), Support Vector Machine (SVM), XGBoost, and LightGBM. The results revealed that SVM exhibited the lowest performance on both training and test sets, while RF achieved the best results with R2 values of 0.92 for the training set and 0.88 for the test set. In the classification of corrosion severity, RF, LightGBM, SVM, and KNN were utilized, with RF demonstrating superior performance, achieving an accuracy of 99% and an F1-score of 0.99. This study highlights that machine learning algorithms, particularly Random Forest, offer substantial potential for predicting and classifying CO2 corrosion. These algorithms provide innovative approaches and valuable insights for practical applications, enhancing predictive accuracy and operational efficiency in corrosion management. Full article
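The abstract reports R² values of 0.92 (training) and 0.88 (test) for the Random Forest model. As a reminder of what that metric measures, here is a minimal pure-Python computation of R² on invented corrosion-rate data (not the authors' code or data):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical corrosion rates (mm/year) vs. model predictions:
measured = [0.10, 0.25, 0.40, 0.55, 0.80]
predicted = [0.12, 0.22, 0.43, 0.50, 0.78]
print(round(r_squared(measured, predicted), 3))
```

An R² of 1.0 means the predictions explain all of the variance in the measurements; the 0.92/0.88 gap between training and test quoted above is the usual sign of mild (acceptable) overfitting.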
19 pages, 383 KiB  
Article
Corun: Concurrent Inference and Continuous Training at the Edge for Cost-Efficient AI-Based Mobile Image Sensing
by Yu Liu, Anurag Andhare and Kyoung-Don Kang
Sensors 2024, 24(16), 5262; https://doi.org/10.3390/s24165262 (registering DOI) - 14 Aug 2024
Abstract
Intelligent mobile image sensing powered by deep learning analyzes images captured by cameras on mobile devices, such as smartphones or smartwatches. It supports numerous mobile applications, such as image classification, face recognition, and camera scene detection. Unfortunately, mobile devices often lack the resources necessary for deep learning, leading to increased inference latency and rapid battery consumption. Moreover, the inference accuracy may decline over time due to potential data drift. To address these issues, we introduce a new cost-efficient framework, called Corun, designed to simultaneously handle multiple inference queries and continual model retraining/fine-tuning of a pre-trained model on a single commodity GPU in an edge server, significantly improving inference throughput while upholding inference accuracy. The scheduling method of Corun undertakes offline profiling to find the maximum number of concurrent inferences that can be executed along with a retraining job on a single GPU without incurring an out-of-memory error or significantly increasing the latency. Our evaluation verifies the cost-effectiveness of Corun. The inference throughput provided by Corun scales with the number of concurrent inference queries. However, the latency of inference queries and the length of a retraining epoch increase at substantially lower rates. By concurrently processing multiple inference and retraining tasks on one GPU instead of using a separate GPU for each task, Corun could reduce the number of GPUs and the cost required to deploy deep learning-based mobile image sensing applications at the edge. Full article
(This article belongs to the Section Sensing and Imaging)
Figure 1: FCFS Scheduling and multiplexed dispatch for concurrent CNN training and inferences.
Figure 2: Original and scaled-down request patterns of Microsoft Azure FaaS function invocation traces.
Figure 3: Normalized epoch times of training in the presence of concurrent CNN inferences (# of infer: number of concurrent inferences).
Figure 4: Normalized epoch times of training in the presence of concurrent Transformer inferences (# of infer: number of concurrent inferences).
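Corun's offline profiling step, as the abstract describes it, searches for the largest number of concurrent inference jobs that fits GPU memory without excessive latency inflation. The sketch below simulates that search over an invented profiling table (the memory and latency figures, the limits, and the function names are assumptions for illustration, not taken from the paper):

```python
def max_concurrent_inferences(profile, mem_limit_gb, max_latency_ratio):
    """Scan profiled (memory, latency) measurements in increasing order of
    concurrency and return the largest count that fits the GPU memory budget
    without inflating latency beyond the allowed ratio over running alone."""
    base_latency = profile[1][1]          # latency with a single inference job
    best = 0
    for n, (mem_gb, latency_ms) in sorted(profile.items()):
        if mem_gb <= mem_limit_gb and latency_ms / base_latency <= max_latency_ratio:
            best = n
    return best

# Invented profiling table: concurrency -> (GPU memory in GB, p95 latency in ms),
# measured while one retraining job shares the same GPU.
profile = {1: (6.0, 20.0), 2: (8.5, 22.0), 3: (11.0, 26.0), 4: (13.8, 40.0)}
print(max_concurrent_inferences(profile, mem_limit_gb=12.0, max_latency_ratio=1.5))
```

With these invented numbers, four concurrent inferences exceed the 12 GB budget, so the profiler settles on three, mirroring the trade-off the abstract describes between throughput and out-of-memory errors.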
27 pages, 626 KiB  
Review
Review of Phonocardiogram Signal Analysis: Insights from the PhysioNet/CinC Challenge 2016 Database
by Bing Zhu, Zihong Zhou, Shaode Yu, Xiaokun Liang, Yaoqin Xie and Qiuirui Sun
Electronics 2024, 13(16), 3222; https://doi.org/10.3390/electronics13163222 (registering DOI) - 14 Aug 2024
Abstract
The phonocardiogram (PCG) is a crucial tool for the early detection, continuous monitoring, accurate diagnosis, and efficient management of cardiovascular diseases. It has the potential to revolutionize cardiovascular care and improve patient outcomes. The PhysioNet/CinC Challenge 2016 database, a large and influential resource, encourages contributions to accurate heart sound state classification (normal versus abnormal), achieving promising benchmark performance (accuracy: 99.80%; sensitivity: 99.70%; specificity: 99.10%; and score: 99.40%). This study reviews recent advances in analytical techniques applied to this database, drawing on 104 retrieved publications on PCG signal analysis. These techniques encompass heart sound preprocessing, signal segmentation, feature extraction, and heart sound state classification. Specifically, this study summarizes methods such as signal filtering and denoising; heart sound segmentation using hidden Markov models and machine learning; feature extraction in the time, frequency, and time-frequency domains; and state-of-the-art heart sound state recognition techniques. Additionally, it discusses electrocardiogram (ECG) feature extraction and joint PCG and ECG heart sound state recognition. Despite significant technical progress, challenges remain in large-scale high-quality data collection, model interpretability, and generalizability. Future directions include multi-modal signal fusion, standardization and validation, automated interpretation for decision support, real-time monitoring, and longitudinal data analysis. Continued exploration and innovation in heart sound signal analysis are essential for advancing cardiac care, improving patient outcomes, and enhancing user trust and acceptance. Full article
(This article belongs to the Special Issue Signal, Image and Video Processing: Development and Applications)
Figure 1: The number of technical publications per year since the database was released.
Figure 2: LR-HSMM, a recommended heart sound segmentation algorithm.
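Among the preprocessing methods summarized above, signal denoising is the simplest to illustrate. The sketch below applies a centered moving-average smoother to a toy signal (a generic denoising baseline, not a technique attributed to any specific reviewed paper):

```python
def moving_average(signal, window):
    """Crude denoising: replace each sample with the mean of a centered window
    (edges use whatever part of the window is available)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Toy alternating "noise" around a flat heart sound baseline:
noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(moving_average(noisy, 3))
```

Real PCG pipelines typically use band-pass filtering tuned to heart sound frequencies rather than a flat moving average, but the principle, suppressing high-frequency fluctuations before segmentation and feature extraction, is the same.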
19 pages, 1846 KiB  
Article
Protein Language Models and Machine Learning Facilitate the Identification of Antimicrobial Peptides
by David Medina-Ortiz, Seba Contreras, Diego Fernández, Nicole Soto-García, Iván Moya, Gabriel Cabas-Mora and Álvaro Olivera-Nappa
Int. J. Mol. Sci. 2024, 25(16), 8851; https://doi.org/10.3390/ijms25168851 (registering DOI) - 14 Aug 2024
Abstract
Peptides are bioactive molecules whose functional versatility in living organisms has led to successful applications in diverse fields. In recent years, the amount of data describing peptide sequences and function collected in open repositories has substantially increased, allowing the application of more complex computational models to study the relations between peptide composition and function. This work introduces AMP-Detector, a sequence-based classification model for the detection of peptides’ functional biological activity, focusing on accelerating the discovery and de novo design of potential antimicrobial peptides (AMPs). AMP-Detector introduces a novel sequence-based pipeline to train binary classification models, integrating protein language models and machine learning algorithms. This pipeline produced 21 models targeting antimicrobial, antiviral, and antibacterial activity, achieving average precision exceeding 83%. Benchmark analyses revealed that our models outperformed existing methods for AMPs and delivered comparable results for other biological activity types. Utilizing the Peptide Atlas, we applied AMP-Detector to discover over 190,000 potential AMPs and demonstrated an integrative approach with generative learning to aid de novo design, resulting in over 500 novel AMPs. The combination of our methodology, robust models, and a generative design strategy offers a significant advancement in peptide-based drug discovery and represents a pivotal tool for therapeutic applications. Full article
Figure 1: Proposed methodology to generate and evaluate predictive models. (A) Numerical representation of sequence datasets. Here, we explore different encoding strategies, including classic methods such as one-hot encoders, physicochemical property-based encoders, and embedding based on pre-trained models. All different methods are applied individually. Once the input dataset is encoded, it is randomly split in a 90:10 ratio, using the first part to develop models and the second as a benchmark dataset. (B) Using the model development dataset and all of its possible numerical representations, we explore different 80:20 partitions to use for model training and validation. We explore and evaluate different models and hyperparameters using classic performance metrics. As this stage is repeated an arbitrary number of times, we obtain distributions of performance for each model. (C) Based on the distribution of performance, the best-performing combinations of algorithms and numerical representations are selected based on statistical criteria. These models undergo a hyperparameter optimization procedure based on Bayesian criteria. (D) Finally, we evaluate the performance of the models generated (and other tools/methods used to compare them) using the benchmark dataset and export the best strategy for future use.
Figure 2: Performance distributions in the model exploration stage. Our model training and validation pipeline outputs several performance metrics, which are used for model selection. The training performance reports the mean over several k-cross-validations in 80% of the dataset, while the validation performance is a single value obtained when applying the generated model on the 20% remaining. As the 80:20 partition is repeated several times and the results are aggregated over different categories, we obtain distributions instead of single performance values. In blue, the distribution of the mean performance in training is narrower than the distribution in validation (orange) as a consequence of the central limit theorem. We present these performance measures for different numerical representation strategies (A), supervised learning algorithms on the whole dataset (B) and supervised learning algorithms filtering for only embedding-based encoders (C), for embedding representation through pre-trained models (D), and for the different classification tasks (E).
Figure 3: Physicochemical property distribution analysis reveals concordance between existing and generated peptide sequences. Using modlAMP, we explored the similarities between sequences of different sources for nine physicochemical properties. The de novo generated sequences showed differing distributions for the molecular weight, charge, and aromaticity; however, no significant differences were observed in other properties. This suggests that the models generated are reliable and produce sequences consistent with those previously reported. The Y-axes have been removed from the subplots for clarity; the values do not play any role when comparing the shapes of the probability distributions.
Figure 4: Visualization of generated antimicrobial peptides by applying VAE approaches. (A) Average frequency of amino acids in the studied sequences depends on their source of origin. Sequences created using pre-trained VAEs tend to have slightly more cysteine and glycine instances, regardless of whether the original input was an AMP or not. On the other hand, raw AMPs, potential AMPs identified in the Peptide Atlas, and AMPs generated using VAE trained with AMPs all show similar patterns, except for isoleucine and leucine. In these cases, the peptides generated using VAEs have a lower or higher frequency, respectively (see Table S4 in the Supplementary Materials for more details). (B) Embedding visualization via t-SNE for the numerical representations generated by the ProTrans t5 Uniref pre-trained model for the different sources analyzed. The sequences generated by the VAE trained with AMP sequences show greater dispersion and visual separation compared to other sources, indicating possible new behaviors. This is reflected in the variations in the amino acid properties and frequency. The representations for the potential AMPs generated via the pre-trained VAE exhibit similar behavior. The same is true for the raw AMP sequences and the potential AMPs identified in the Peptide Atlas, consistent with the analysis of the properties and amino acid frequencies.
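Figure 1 mentions classic encoders (one-hot, physicochemical) alongside protein language model embeddings. As a minimal example of such a classic fixed-length encoding, the sketch below computes an amino-acid composition vector (illustrative only; the paper's main pipeline relies on pre-trained embeddings, and the example peptide is invented):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_vector(sequence):
    """Encode a peptide as its amino-acid composition: one fraction per
    residue type, giving a fixed-length vector usable by any ML classifier."""
    sequence = sequence.upper()
    counts = {aa: 0 for aa in AMINO_ACIDS}
    for residue in sequence:
        if residue in counts:
            counts[residue] += 1
    total = max(len(sequence), 1)
    return [counts[aa] / total for aa in AMINO_ACIDS]

vec = composition_vector("GIGKFLKK")  # hypothetical AMP-like peptide
print(len(vec), round(sum(vec), 6))
```

Every peptide, whatever its length, maps to the same 20-dimensional vector, which is what lets heterogeneous sequence datasets feed a single binary classifier.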
48 pages, 894 KiB  
Review
Earlier Decision on Detection of Ransomware Identification: A Comprehensive Systematic Literature Review
by Latifa Albshaier, Seetah Almarri and M. M. Hafizur Rahman
Information 2024, 15(8), 484; https://doi.org/10.3390/info15080484 (registering DOI) - 14 Aug 2024
Abstract
Cybersecurity is normally defined as protecting systems against all kinds of cyberattacks; however, due to the rapid and continual expansion of technology and digital transformation, the threats are also increasing. One of those newer threats is ransomware, a form of malware that encrypts a victim’s files and withholds the decryption key until the victim pays a ransom. This systematic literature review (SLR) highlights recent papers published between 2020 and 2024. This paper examines existing research on early ransomware detection methods, focusing on the signs, frameworks, and techniques used to identify and detect ransomware before it causes harm. By analyzing a wide range of academic papers, industry reports, and case studies, this review categorizes and assesses the effectiveness of different detection methods, including those based on signatures, behavior patterns, and machine learning (ML). It also looks at new trends and innovative strategies in ransomware detection, offering a classification of detection techniques and pointing out the gaps in current research. The findings provide useful insights for cybersecurity professionals and researchers, helping guide future efforts to develop strong and proactive ransomware detection systems. This review emphasizes the need for ongoing improvements in detection technologies to keep up with the constantly changing ransomware threat landscape. Full article
(This article belongs to the Special Issue Cybersecurity, Cybercrimes, and Smart Emerging Technologies)
Figure 1: Paper selection for literature review using PRISMA [7,8].
Figure 2: Total values received by ransomware attackers in the last 5 years [10].
Figure 3: How the ransomware attacks work [11].
Figure 4: Types of ransomware attacks.
Figure 5: Artificial intelligence techniques [35].
Figure 6: Machine learning: detection algorithm.
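Behavior-based detectors of the kind this review surveys often monitor the byte entropy of files being written, since mass encryption drives entropy toward the maximum of 8 bits per byte. A minimal sketch of that signal (an illustrative, well-known heuristic, not a method attributed to any specific paper in the review):

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0..8); encrypted or compressed
    data approaches 8, ordinary text stays well below."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

plaintext = b"AAAA" * 256           # highly repetitive -> low entropy
randomish = bytes(range(256)) * 4   # uniform byte spread -> maximal entropy
print(shannon_entropy(plaintext), shannon_entropy(randomish))
```

A monitor that sees many files jump from text-like entropy to near 8 bits per byte in a short window has a strong early indicator of an encryption burst, which is exactly the "before it causes harm" window the review emphasizes.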
14 pages, 2697 KiB  
Article
An Improved Medical Image Classification Algorithm Based on Adam Optimizer
by Haijing Sun, Wen Zhou, Jiapeng Yang, Yichuan Shao, Lei Xing, Qian Zhao and Le Zhang
Mathematics 2024, 12(16), 2509; https://doi.org/10.3390/math12162509 (registering DOI) - 14 Aug 2024
Abstract
The complexity and illegibility of medical images create inconvenience and difficulty in diagnosis for medical personnel. To address these issues, this paper proposes an optimization algorithm called GSL (Gradient Sine Linear), an improvement on the Adam algorithm that introduces a gradient-clipping strategy, periodic adjustment of the learning rate, and a linear interpolation strategy. The gradient-clipping technique scales the gradient to prevent gradient explosion, while the periodic learning-rate adjustment and linear interpolation strategy adjust the learning rate according to the characteristics of the sinusoidal function, accelerating convergence while reducing drastic parameter fluctuations and improving the efficiency and stability of training. The experimental results show that, compared to the classic Adam algorithm, this algorithm demonstrates better classification accuracy: GSL achieves accuracies of 78% and 75.2% on the MobileNetV2 and ShuffleNetV2 networks for the Gastroenterology dataset, and accuracies of 84.72% and 83.12% on the same networks for the Glaucoma dataset. The GSL optimizer achieved significant performance improvements across various neural network architectures and datasets, proving its effectiveness and practicality in deep learning and providing new ideas and methods for addressing the difficulties of medical image recognition. Full article
Figure 1: Experimental training plot for the GSL algorithm Glaucoma dataset.
Figure 2: Experimental procedure flowchart.
Figure 3: Gastroenterology dataset image preprocessing process.
Figure 4: Performance comparison of different algorithms on MobileNetV2 for the Gastroenterology dataset on the validation set: (a) accuracy comparison; (b) loss value comparison.
Figure 5: Performance comparison of different algorithms on ShuffleNetV2 for the Gastroenterology dataset on the validation set: (a) accuracy comparison; (b) loss value comparison.
Figure 6: Performance comparison of different algorithms on MobileNetV2 for the Glaucoma dataset on the validation set: (a) accuracy comparison; (b) loss value comparison.
Figure 7: Performance comparison of different algorithms on ShuffleNetV2 for the Glaucoma dataset on the validation set: (a) accuracy comparison; (b) loss value comparison.
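The GSL optimizer, as the abstract describes it, combines gradient clipping with a sine-modulated, periodically adjusted learning rate. The sketch below shows those two ingredients in isolation (a schematic, not the authors' exact GSL formulation; the base rate, period, and amplitude are invented, and the linear interpolation step is omitted):

```python
import math

def clip_gradient(grad, max_norm):
    """Scale the gradient vector down if its L2 norm exceeds max_norm,
    preventing gradient explosion without changing the update direction."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        return [g * max_norm / norm for g in grad]
    return grad

def sinusoidal_lr(step, base_lr=1e-3, period=100, amplitude=0.5):
    """Periodic learning rate: oscillates around base_lr following a sine
    wave, speeding up and slowing down updates over each period."""
    return base_lr * (1.0 + amplitude * math.sin(2 * math.pi * step / period))

grad = clip_gradient([3.0, 4.0], max_norm=1.0)   # norm 5 -> rescaled to norm 1
print(grad, sinusoidal_lr(25))
```

In a training loop, each Adam-style parameter update would use `clip_gradient` on the raw gradients and `sinusoidal_lr(step)` in place of a fixed learning rate.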
8 pages, 558 KiB  
Article
Risk of Subsequent Hip Fractures across Varying Treatment Patterns for Index Vertebral Compression Fractures
by Andy Ton, Jennifer A. Bell, William J. Karakash, Thomas D. Alter, Mary Kate Erdman, Hyunwoo Paco Kang, Emily S. Mills, Jonathan Mina Ragheb, Mirbahador Athari, Jeffrey C. Wang, Ram K. Alluri and Raymond J. Hah
J. Clin. Med. 2024, 13(16), 4781; https://doi.org/10.3390/jcm13164781 - 14 Aug 2024
Abstract
Introduction: Vertebral compression fractures (VCFs) pose a considerable healthcare burden and are linked to elevated morbidity and mortality. Despite available anti-osteoporotic treatments (AOTs), guideline adherence is lacking. This study aims to evaluate subsequent hip fracture incidence after index VCF and to elucidate AOT prescribing patterns in VCF patients, further assessing the impact of surgical interventions on these patterns. Materials and Methods: Patients with index VCFs between 2010 and 2021 were identified using the PearlDiver database. Diagnostic and procedural data were recorded using International Classification of Diseases (ICD-9, ICD-10) and Current Procedural Terminology (CPT) codes. Patients under age 50 or with less than one year of follow-up after the index VCF were excluded. Patients were categorized based on whether they received AOT within one year before or after the index VCF, and were subsequently propensity-matched 1:3 based on age, sex, and Elixhauser Comorbidity Index (ECI) score to compare hip fracture incidence following index VCF. Sub-analysis was performed for operatively managed VCFs (kyphoplasty/vertebroplasty). Statistical tests included Chi-squared for categorical outcomes and Kruskal–Wallis for continuous measures. Results: Of 637,701 patients, 72.6% were female. The overall subsequent hip fracture incidence was 2.6% at one year and 12.9% for all-time follow-up. Propensity-matched analysis indicated higher subsequent hip fracture rates in patients initiated on AOT post-index VCF (one year: 3.8% vs. 3.5%, p = 0.0013; all-time: 14.3% vs. 13.0%, p < 0.0001). Conclusions: The study reveals an unexpected increase in subsequent hip fractures among patients initiated on AOT post-index VCF, likely due to selection bias. These findings highlight the need for refined osteoporosis-management strategies to improve guideline adherence, thereby mitigating patient morbidity and mortality. Full article
(This article belongs to the Section Orthopedics)
Figure 1: Patient-selection flowchart. VCF = vertebral compression fracture, AOT = anti-osteoporotic treatment, ECI = Elixhauser Comorbidity Index.
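The 1:3 propensity matching described in the abstract can be illustrated schematically: each treated patient is paired with up to three untreated controls whose propensity scores are similar. The sketch below uses greedy nearest-available matching with a caliper (the scores, caliper value, and greedy strategy are invented for illustration; the study matched on age, sex, and ECI score):

```python
def greedy_match(treated, controls, ratio=3, caliper=0.05):
    """For each treated unit's propensity score, greedily take up to `ratio`
    still-unmatched controls whose scores fall within `caliper` of it."""
    available = sorted(controls)
    matches = {}
    for score in treated:
        picked = []
        for c in list(available):          # iterate over a copy while removing
            if abs(c - score) <= caliper and len(picked) < ratio:
                picked.append(c)
                available.remove(c)        # each control is used at most once
        matches[score] = picked
    return matches

treated = [0.30, 0.70]
controls = [0.28, 0.29, 0.31, 0.33, 0.69, 0.71, 0.72, 0.74, 0.90]
print(greedy_match(treated, controls))
```

After matching, outcome rates (here, subsequent hip fracture) are compared between the treated group and their matched controls, which is what the one-year and all-time comparisons in the Results section report.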
14 pages, 9929 KiB  
Article
Diagnosis of Pressure Ulcer Stage Using On-Device AI
by Yujee Chang, Jun Hyung Kim, Hyun Woo Shin, Changjin Ha, Seung Yeob Lee and Taesik Go
Appl. Sci. 2024, 14(16), 7124; https://doi.org/10.3390/app14167124 - 14 Aug 2024
Abstract
Pressure ulcers are serious healthcare concerns, especially for the elderly with reduced mobility. Severe pressure ulcers are accompanied by pain, degrading patients’ quality of life. Thus, speedy and accurate detection and classification of pressure ulcers are vital for timely treatment. The conventional visual examination method requires professional expertise for diagnosing pressure ulcer severity, which is difficult for lay carers in domiciliary settings. In this study, we present a mobile healthcare platform incorporating a lightweight deep learning model to accurately detect pressure ulcer regions and classify pressure ulcers into six severities: stages 1–4, deep tissue pressure injury, and unstageable. YOLOv8 models were trained and tested using 2800 annotated pressure ulcer images. Among the five tested YOLOv8 models, the YOLOv8m model exhibited promising detection performance with an overall classification accuracy of 84.6% and a mAP@50 value of 90.8%. A mobile application (app) was also developed using the trained YOLOv8m model. The mobile app returned the diagnostic result within a short time (≈3 s). Accordingly, the proposed on-device AI app can contribute to early diagnosis and systematic management of pressure ulcers. Full article
Figure 1: YOLOv8 model architecture. d, w, and r indicate the depth multiple, width multiple, and ratio of each module, respectively. k, s, and p denote kernel size, stride, and padding number, respectively.
Figure 2: YOLOv8m training results.
Figure 3: Confusion matrix of pressure ulcer classification by the trained YOLOv8m model.
Figure 4: Representative pressure ulcer detection results by the trained YOLOv8m model. (a–c) Single stage detection in each image. (d–f) Multiple stage detections in each image.
Figure 5: Development of a mobile app for detecting pressure ulcers. (a) Log-in page. (b) App home page. (c) Detection results after inspection. (d) Provision of instructions and information according to the results.
Figure 6: Failure cases of detecting stage 2. (a,d) Original images. (b,e) Ground-truth labels. (c,f) Prediction results.
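The mAP@50 figure reported above counts a detection as correct when its predicted box overlaps the ground-truth box with an intersection-over-union (IoU) of at least 0.5. A minimal sketch of that overlap measure (a standard definition, not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2):
    overlap area divided by the area of the union of the two boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two hypothetical ulcer-region boxes sharing half their width:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

mAP@50 then averages, over all six severity classes, the precision of detections that clear this IoU ≥ 0.5 threshold.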
16 pages, 4414 KiB  
Article
A Hybrid Forecasting Structure Based on Arima and Artificial Neural Network Models
by Adil Atesongun and Mehmet Gulsen
Appl. Sci. 2024, 14(16), 7122; https://doi.org/10.3390/app14167122 - 14 Aug 2024
Viewed by 122
Abstract
This study develops a hybrid forecasting framework that integrates two different models to improve prediction capability. Although hybridization is not a new idea in forecasting, our approach presents a new structure that uniquely combines two standard simple forecasting models for superior performance. Hybridization is significant for complex data sets with multiple patterns. Such data sets do not respond well to simple models, and hybrid models based on the integration of various forecasting tools often lead to better forecasting performance. The proposed architecture includes serially connected ARIMA and ANN models. The original data set is first processed by ARIMA. The error (i.e., the residuals) of the ARIMA model is sent to the ANN for secondary processing. Between these two models there is a classification mechanism in which the raw output of the ARIMA is categorized into three groups before it is sent to the secondary model. The algorithm is tested on well-known forecasting cases from the literature. The proposed model performs better than existing methods in most cases. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
Figure 1: A serially connected hybrid model.
Figure 2: Proposed neural network structure (N^(p−q−1)) [6].
Figure 3: Proposed hybrid structure.
Figure 4: Hybrid forecast vs. actual data over the test data (sunspot data, 67 observations).
Figure 5: Hybrid forecast vs. actual data over the test data (Canadian lynx, 14 observations).
Figure 6: Hybrid forecast vs. actual data over the test data (hourly electricity rates, 24 observations).
Figure 7: Monthly airline passenger data.
Figure 8: Hybrid forecast vs. actual data over the test data (airline passenger data).
Figure 9: Historical wheat yield in Turkey.
Figure 10: Hybrid forecast vs. actual data over the test data (wheat yield).
24 pages, 3195 KiB  
Review
Historic Built Environment Assessment and Management by Deep Learning Techniques: A Scoping Review
by Valeria Giannuzzi and Fabio Fatiguso
Appl. Sci. 2024, 14(16), 7116; https://doi.org/10.3390/app14167116 - 13 Aug 2024
Abstract
Recent advancements in digital technologies and automated analysis techniques applied to Historic Built Environment (HBE) demonstrate significant advantages in efficiently collecting and interpreting data for building conservation activities. Integrating digital image processing through Artificial Intelligence approaches further streamlines data analysis for diagnostic assessments. In this context, this paper presents a scoping review based on Scopus and Web of Science databases, following the PRISMA protocol, focusing on applying Deep Learning (DL) architectures for image-based classification of decay phenomena in the HBE, aiming to explore potential implementations in decision support system. From the literature screening process, 29 selected articles were analyzed according to methods for identifying buildings’ surface deterioration, cracks, and post-disaster damage at a district scale, with a particular focus on the innovative DL architectures developed, the accuracy of results obtained, and the classification methods adopted to understand limitations and strengths. The results highlight current research trends and the potential of DL approaches for diagnostic purposes in the built heritage conservation field, evaluating methods and tools for data acquisition and real-time monitoring, and emphasizing the advantages of implementing the adopted techniques in interoperable environments for information sharing among stakeholders. Future challenges involve implementing DL models in mobile apps, using sensors and IoT systems for on-site defect detection and long-term monitoring, integrating multimodal data from non-destructive inspection techniques, and establishing direct connections between data, intervention strategies, timing, and costs, thereby improving heritage diagnosis and management practices. Full article
(This article belongs to the Special Issue Advanced Technologies in Cultural Heritage)
Figure 1: Methodological workflow of the research process.
Figure 2: Number of publications per year during 2019–2024.
Figure 3: Distribution of publications by country.
Figure 4: Percentage of publications by subject area.
Figure 5: Visualization of co-occurrence of author keywords and indexed keywords, with 32 items (4 clusters). VOSviewer.
40 pages, 4080 KiB  
Article
Investigating Contrastive Pair Learning’s Frontiers in Supervised, Semisupervised, and Self-Supervised Learning
by Bihi Sabiri, Amal Khtira, Bouchra El Asri and Maryem Rhanoui
J. Imaging 2024, 10(8), 196; https://doi.org/10.3390/jimaging10080196 - 13 Aug 2024
Abstract
In recent years, contrastive learning has been a highly favored method for self-supervised representation learning, which significantly improves the unsupervised training of deep image models. Self-supervised learning is a subset of unsupervised learning in which the learning process is supervised by creating pseudolabels from the data themselves. Using supervised final adjustments after unsupervised pretraining is one way to take the most valuable information from a vast collection of unlabeled data and teach from a small number of labeled instances. This study aims firstly to compare contrastive learning with other traditional learning models; secondly to demonstrate by experimental studies the superiority of contrastive learning during classification; thirdly to fine-tune performance using pretrained models and appropriate hyperparameter selection; and finally to address the challenge of using contrastive learning techniques to produce data representations with semantic meaning that are independent of irrelevant factors like position, lighting, and background. Relying on contrastive techniques, the model efficiently captures meaningful representations by discerning similarities and differences between modified copies of the same image. The proposed strategy, involving unsupervised pretraining followed by supervised fine-tuning, improves the robustness, accuracy, and knowledge extraction of deep image models. The results show that even with a modest 5% of data labeled, the semisupervised model achieves an accuracy of 57.72%. However, the use of supervised learning with a contrastive approach and careful hyperparameter tuning increases accuracy to 85.43%. Further adjustment of the hyperparameters resulted in an excellent accuracy of 88.70%. Full article
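The core contrastive objective described here — pulling augmented copies of the same image together while pushing other images apart — can be illustrated with a minimal NT-Xent (normalized temperature-scaled cross-entropy) loss over paired embeddings. This is a plain-Python sketch, not the paper's code; it assumes embeddings arrive as a list in which indices (2k, 2k+1) form positive pairs.

```python
import math

def nt_xent_loss(z, temperature=0.5):
    # z: list of 2N non-zero embedding vectors; (2k, 2k+1) are positive pairs.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    n = len(z)
    loss = 0.0
    for i in range(n):
        j = i + 1 if i % 2 == 0 else i - 1  # index of the positive partner
        # Softmax over similarities to every other sample in the batch.
        denom = sum(math.exp(cos(z[i], z[k]) / temperature)
                    for k in range(n) if k != i)
        pos = math.exp(cos(z[i], z[j]) / temperature)
        loss += -math.log(pos / denom)
    return loss / n
```

Well-separated representations (positives aligned, negatives orthogonal) yield a lower loss than representations where negatives collide, which is exactly the gradient signal that drives the pretraining stage discussed above.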
Figure 1: Classification of contrastive self-supervised images.
Figure 2: The contrastive learning model’s purpose is to increase the distance D− while minimizing the distance D+.
Figure 3: The learning of self-supervised representations from rotating input images.
Figure 4: Discrimination method.
Figure 5: Losses comparing supervised and unsupervised techniques. Supervised contrastive learning considers many samples from the same class as positive examples in addition to augmented versions. The colors of the arrows distinguish between the two types of images: positive and negative.
Figure 6: Adjusting image embeddings proximity. The colored circles define a type of categorization in the embedding space, projecting the objects into distinct zones based on their classification.
Figure 7: Pairwise ranking: the arrows on the left indicate the transformation from image to vector representation; the arrows on the right represent the convergence of similar images and the divergence of dissimilar images in the vector space.
Figure 8: Traditional self-supervised model: the horizontal line for each curve reflects its average value.
Figure 9: Self-supervised contrastive model: each curve’s horizontal line reflects its average value.
Figure 10: Classical semisupervised model: each curve’s horizontal line denotes its average value.
Figure 11: Semisupervised contrastive model: the horizontal line on each curve reflects the mean value.
Figure 12: Loss comparison of contrastive and conventional self-supervised models.
Figure 13: Contrastive augmentation reveals changes in the transformed images.
Figure 14: Supervised baseline model accuracy/loss: (a) convergence of model accuracy for the supervised baseline model; (b) model loss for the supervised baseline model.
Figure 15: Baseline, self-supervised, and fine-tuning model accuracy/loss: (a) higher validation accuracy for baseline, self-supervised, and fine-tuning models; (b) lower validation loss for baseline, self-supervised, and fine-tuning models.
Figure 16: Supervised baseline model accuracy/loss: (a) model accuracy convergence over training and validation data; (b) model loss for the baseline classification model.
Figure 17: Frozen encoder model accuracy/loss: (a) model accuracy convergence with a frozen encoder; (b) model loss convergence with a frozen encoder.
Figure 18: Fine-tuned model accuracy/loss: (a) model accuracy convergence; (b) model loss convergence.
Figure 19: Decreasing the learning rate helps the model converge to an optimal solution.
Figure 20: Convergence of linear model accuracy/loss: (a) convergence of the linear model to an optimal solution for accuracy; (b) convergence of the linear model to an optimal solution for loss.
12 pages, 529 KiB  
Article
National 30-Day Readmission Trends in IBD 2014–2020—Are We Aiming for Improvement?
by Irēna Teterina, Veronika Mirzajanova, Viktorija Mokricka, Maksims Zolovs, Dins Šmits and Juris Pokrotnieks
Medicina 2024, 60(8), 1310; https://doi.org/10.3390/medicina60081310 - 13 Aug 2024
Abstract
Background: Inflammatory bowel disease (IBD) prevalence in Eastern Europe is increasing. The 30-day readmission rate is a crucial quality metric in healthcare, reflecting the effectiveness of initial treatment and the continuity of care post-discharge; however, such parameters are rarely analyzed. The aim of this study was to explore the trends in 30-day readmissions among patients with inflammatory bowel disease in Latvia between 2014 and 2020. Methods: This is a retrospective trends study in IBD—ulcerative colitis and Crohn’s disease (UC and CD)—patients in Latvia between 2014 and 2020, involving all IBD patients identified in the National Health service database in the International Classification of Diseases-10 (ICD) classification (K50.X and K51.X) and having at least one prescription for IBD diagnoses. We assessed all IBD-related hospitalizations (discharge ICD codes K50X and K51X), as well as hospitalizations potentially related to IBD comorbidities. We analyzed hospitalization trends and obtained the 30 day all-cause readmission rate, disease specific readmission rate and readmission proportion for specific calendar years. Trends in readmissions and the mean length of stay (LOS) for CD and UC were calculated. Results: Despite a decrease in admission rates observed in 2020, the total number of readmissions for CD and UC has increased. Female patients prevailed through the study period and were significantly older than male patients in both the CD and UC groups, p < 0.05. We noted that there was no trend for 30 day all-cause readmission rate for CD (p > 0.05); however, there was a statistically significant trend for 30 day all-cause readmission for UC patients (p-trend = 0.018) in the period from 2014 to 2019. There was a statistically significant trend for CD-specific readmission rate (p < 0.05); however, no statistically significant trend was observed for UC-specific readmission (p > 0.05). 
An exploratory analysis did not reveal any statistically significant differences between treated and untreated IBD patients (p > 0.05). The increasing trend is statistically significant over the period 2014–2018 (p < 0.05); however, the trend is interrupted in 2020, which can be associated with the COVID-19 global pandemic and the related changes in admission flows, where gastroenterology capacity was reallocated to accommodate increasing numbers of COVID-19 patients. More studies are needed to evaluate the long-term impact of the COVID-19 pandemic on 30-day readmissions. No significant dynamics were observed in the mean total hospitalization costs over the 2014–2020 period. Full article
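The year-over-year trend tests reported above can be approximated, for illustration, by an ordinary least-squares slope fitted to annual readmission rates. The study's actual trend testing may use a different statistic, and the rates below are invented for the example.

```python
def trend_slope(years, rates):
    # Ordinary least-squares slope of readmission rate on calendar year:
    # a positive slope indicates an increasing trend over the period.
    n = len(years)
    my, mr = sum(years) / n, sum(rates) / n
    num = sum((y - my) * (r - mr) for y, r in zip(years, rates))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Hypothetical UC 30-day readmission rates per 100 admissions, 2014-2019.
years = [2014, 2015, 2016, 2017, 2018, 2019]
rates = [6.1, 6.4, 6.9, 7.2, 7.8, 8.3]
```

For the hypothetical series above, the slope is positive, matching the kind of significant upward trend the study reports for UC over 2014–2019.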
(This article belongs to the Section Gastroenterology & Hepatology)
Figure 1: Trends of 30 day readmission following Crohn’s disease and ulcerative colitis hospitalizations. Abbreviations: CD—Crohn’s disease, UC—ulcerative colitis.
21 pages, 9368 KiB  
Article
Advanced Neural Classifier-Based Effective Human Assistance Robots Using Comparable Interactive Input Assessment Technique
by Mohammed Albekairi, Khaled Kaaniche, Ghulam Abbas, Paolo Mercorelli, Meshari D. Alanazi and Ahmad Almadhor
Mathematics 2024, 12(16), 2500; https://doi.org/10.3390/math12162500 - 13 Aug 2024
Abstract
The role of robotic systems in human assistance is inevitable with the bots that assist with interactive and voice commands. For cooperative and precise assistance, the understandability of these bots needs better input analysis. This article introduces a Comparable Input Assessment Technique (CIAT) to improve the bot system’s understandability. This research introduces a novel approach for HRI that uses optimized algorithms for input detection, analysis, and response generation in conjunction with advanced neural classifiers. This approach employs deep learning models to enhance the accuracy of input identification and processing efficiency, in contrast to previous approaches that often depended on conventional detection techniques and basic analytical methods. Regardless of the input type, this technique defines cooperative control for assistance from previous histories. The inputs are cooperatively validated for the instruction responses for human assistance through defined classifications. For this purpose, a neural classifier is used; the maximum possibilities for assistance using self-detected instructions are recommended for the user. The neural classifier is divided into two categories according to its maximum comparable limits: precise instruction and least assessment inputs. For this purpose, the robot system is trained using previous histories and new assistance activities. The learning process performs comparable validations between detected and unrecognizable inputs with a classification that reduces understandability errors. Therefore, the proposed technique was found to reduce response time by 6.81%, improve input detection by 8.73%, and provide assistance by 12.23% under varying inputs. Full article
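The comparable validation step — matching a detected input against previous instruction histories and routing it to the "precise instruction" or "least assessment" category — can be caricatured with a token-overlap score. The threshold, labels, and Jaccard measure here are illustrative assumptions, not the paper's neural classifier.

```python
def route_input(tokens, history, threshold=0.5):
    # Best Jaccard similarity between the detected input and any previously
    # seen instruction: high overlap -> precise instruction; low overlap ->
    # least-assessment input needing further cooperative validation.
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    best = max((jaccard(tokens, known) for known in history), default=0.0)
    return "precise" if best >= threshold else "least-assessment"
```

A trained classifier would learn this boundary from assistance histories rather than fix it at 0.5, but the routing structure is the same.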
Figure 1: Proposed CIAT Technique.
Figure 2: Bot Control Decision Based on Different Inputs.
Figure 3: Mapping of A.
Figure 4: Deep Neural Network Representation for Classification.
Figure 5: Input Detection.
Figure 6: Input Analyses.
Figure 7: Assistance Response.
Figure 8: Error Detection.
Figure 9: Response Time.
10 pages, 482 KiB  
Article
Evaluation of Systemic Risk Factors in Patients with Diabetes Mellitus for Detecting Diabetic Retinopathy with Random Forest Classification Model
by Ramesh Venkatesh, Priyanka Gandhi, Ayushi Choudhary, Rupal Kathare, Jay Chhablani, Vishma Prabhu, Snehal Bavaskar, Prathiba Hande, Rohit Shetty, Nikitha Gurram Reddy, Padmaja Kumari Rani and Naresh Kumar Yadav
Diagnostics 2024, 14(16), 1765; https://doi.org/10.3390/diagnostics14161765 - 13 Aug 2024
Abstract
Background: This study aims to assess systemic risk factors in diabetes mellitus (DM) patients and predict diabetic retinopathy (DR) using a Random Forest (RF) classification model. Methods: We included DM patients presenting to the retina clinic for first-time DR screening. Data on age, gender, diabetes type, treatment history, DM control status, family history, pregnancy history, and systemic comorbidities were collected. DR and sight-threatening DR (STDR) were diagnosed via a dilated fundus examination. The dataset was split 80:20 into training and testing sets. The RF model was trained to detect DR and STDR separately, and its performance was evaluated using misclassification rates, sensitivity, and specificity. Results: Data from 1416 DM patients were analyzed. The RF model was trained on 1132 (80%) patients. The misclassification rates were 0% for DR and ~20% for STDR in the training set. External testing on 284 (20%) patients showed 100% accuracy, sensitivity, and specificity for DR detection. For STDR, the model achieved 76% (95% CI 70.7–80.7%) accuracy, 53% (95% CI 39.2–66.6%) sensitivity, and 80% (95% CI 74.6–84.7%) specificity. Conclusions: The RF model effectively predicts DR in DM patients using systemic risk factors, potentially reducing unnecessary referrals for DR screening. However, further validation with diverse datasets is necessary to establish its reliability for clinical use. Full article
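The headline numbers (accuracy, sensitivity, specificity) follow directly from a confusion matrix. A small helper makes the relationship explicit; the counts in the test are illustrative, not the study's actual data.

```python
def screening_metrics(tp, fp, tn, fn):
    # Sensitivity: share of true disease cases the model flags.
    # Specificity: share of disease-free patients the model correctly clears.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

For a referral-screening model like this one, the asymmetry matters: the reported 53% STDR sensitivity means roughly half of sight-threatening cases would be missed if the model alone triggered referral, which is why the abstract calls for further validation.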
(This article belongs to the Special Issue Diagnostics for Ocular Diseases: Its Importance in Patient Care)
Figure 1: Flow chart depicting the process of model training and testing in the study.
22 pages, 2171 KiB  
Article
Intelligent Checking Method for Construction Schemes via Fusion of Knowledge Graph and Large Language Models
by Hao Li, Rongzheng Yang, Shuangshuang Xu, Yao Xiao and Hongyu Zhao
Buildings 2024, 14(8), 2502; https://doi.org/10.3390/buildings14082502 - 13 Aug 2024
Abstract
In the construction industry, the professional evaluation of construction schemes represents a crucial link in ensuring the safety, quality and economic efficiency of the construction process. However, due to the large number and diversity of construction schemes, traditional expert review methods are limited in terms of timeliness and comprehensiveness. This leads to an increasingly urgent requirement for intelligent checking of construction schemes. This paper proposes an intelligent compliance checking method for construction schemes that integrates a knowledge graph and a large language model (LLM). Firstly, a method for constructing a multi-dimensional, multi-granular knowledge graph for construction standards is introduced, which serves as the foundation for domain-specific knowledge support to the LLM. Subsequently, a parsing module based on text classification and entity extraction models is proposed to automatically parse construction schemes and construct pathways for querying the knowledge graph of construction standards. Finally, an LLM is leveraged to achieve an intelligent compliance check. The experimental results demonstrate that the proposed method can effectively integrate domain knowledge to guide the LLM in checking construction schemes, with an accuracy rate of up to 72%. Concurrently, the well-designed prompt template and the comprehensiveness of the knowledge graph facilitate the stimulation of the LLM’s reasoning ability. This work contributes to exploring the application of LLMs and knowledge graphs in the vertical domain of text compliance checking. Future work will focus on optimizing the integration of LLMs and domain knowledge to further improve the accuracy and practicality of the intelligent checking system. Full article
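The query pathway from a parsed scheme entity into the standards knowledge graph can be sketched as a breadth-first walk over typed edges. The toy adjacency-list graph, node names, and relation label below are invented for illustration; the paper's graph is stored in Neo4j and queried via Cypher.

```python
from collections import deque

def checking_points(graph, start, relation="requires"):
    # graph: {node: [(relation, target), ...]} adjacency list.
    # Collect every node reachable from `start` through a `relation` edge,
    # in breadth-first order, walking all edge types to reach nested clauses.
    seen, found, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        for rel, target in graph.get(node, []):
            if rel == relation and target not in found:
                found.append(target)
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return found
```

The collected checking points would then be rendered into the prompt template alongside the scheme text, so the LLM judges compliance against explicit clauses rather than its parametric memory.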
(This article belongs to the Special Issue Smart and Digital Construction in AEC Industry)
Figure 1: Technology route of intelligent checking method.
Figure 2: Example of document structure layer (part).
Figure 3: All possible semantic paths contained in the subgraph of the clause.
Figure 4: Attributes of Event Nodes.
Figure 5: Cypher statement.
Figure 6: Standard checking points.
Figure 7: Prompt template.
Figure 8: Prompt template B.
Figure 9: An excerpt of checking results under different templates.
Figure 10: An excerpt of the check result of the LLM with mismatch between the statement and checking rules.