Search Results (7,860)

Search Parameters: Keywords = recognition system

16 pages, 558 KiB  
Article
The Usefulness of Carotid Artery Doppler Measurement as a Predictor of Early Death in Sepsis Patients Admitted to the Emergency Department
by Su-Il Kim, Yun-Deok Jang, Jae-Gu Ji, Yong-Seok Kim, In-Hye Kang, Seong-Ju Kim, Seong-Min Han and Min-Seok Choi
J. Clin. Med. 2024, 13(22), 6912; https://doi.org/10.3390/jcm13226912 (registering DOI) - 16 Nov 2024
Abstract
Background: This study aims to verify whether the blood flow velocity and the diameter size, measured through intra-carotid artery Doppler measurements performed on sepsis patients visiting the emergency department, are useful as tools for predicting the risk of early death. Methods: As a prospective study, this research was performed on sepsis patients who visited a local emergency medical center from August 2021 to February 2023. The sepsis patients’ carotid artery was measured using Doppler imaging, and they were divided into patients measured for the size of systolic and diastolic mean blood flow velocity and diameter size: those measured for their qSOFA (quick sequential organ failure assessment) score and those measured using the SIRS (systemic inflammatory response syndrome) criteria. By measuring and comparing their mortality prediction accuracies, this study sought to verify the usefulness of blood flow velocity and the diameter size of the intra-carotid artery as tools to predict early death. Results: This study was conducted on 1026 patients, excluding 45 patients out of the total of 1071 patients. All sepsis patients were measured using systolic and diastolic blood flow velocity and diameter by Doppler imaging of the intra-carotid artery, assessed using qSOFA and evaluated using SIRS criteria. The results of the analysis performed to compare the mortality prediction accuracy were as follows. First, the hazard ratio (95% CI) of the intra-carotid artery was significant (p < 0.05), at 1.020 (1.004–1.036); the hazard ratio (95% CI) of qSOFA was significant (p < 0.05), at 3.871 (2.526–5.931); and the hazard ratio (95% CI) of SIRS showed no significant difference, at 1.002 (0.995–1.009). After 2 h of infusion treatment, the diameter size was 4.72 ± 1.23, showing a significant difference (p < 0.05). After 2 h of fluid treatment, the blood flow velocity was 101 m/s ± 21.12, which showed a significant difference (p < 0.05). Conclusions: Measuring the mean blood flow velocity in the intra-carotid arteries of sepsis patients who visit the emergency department is useful for predicting the risk of death at an early stage. And this study showed that Doppler measurement of the diameter size of the carotid artery significantly increased after performing fluid treatment after early recognition. Full article
(This article belongs to the Special Issue Emergency Ultrasound: State of the Art and Perspectives)
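The hazard ratios reported in the abstract above come from a survival analysis of early mortality. As a minimal, illustrative sketch only (the column names, the CSV file, and the use of the lifelines library are assumptions, not the authors' actual analysis code), such hazard ratios could be estimated with a Cox proportional-hazards model:

# Sketch: estimating early-mortality hazard ratios from Doppler and qSOFA covariates.
# Column names (velocity_cm_s, diameter_mm, qsofa, time_to_event_h, died) are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("sepsis_doppler.csv")  # hypothetical file with one row per patient

cph = CoxPHFitter()
cph.fit(
    df[["velocity_cm_s", "diameter_mm", "qsofa", "time_to_event_h", "died"]],
    duration_col="time_to_event_h",  # follow-up time until death or censoring
    event_col="died",                # 1 = early death, 0 = censored
)
# The exp(coef) column of the summary is the hazard ratio with its 95% CI.
cph.print_summary()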
18 pages, 3514 KiB  
Article
Influence of the Nucleo-Shuttling of the ATM Protein on the Response of Skin Fibroblasts from Marfan Syndrome to Ionizing Radiation
by Dagmara Jakubowska, Joëlle Al-Choboq, Laurène Sonzogni, Michel Bourguignon, Dorota Slonina and Nicolas Foray
Int. J. Mol. Sci. 2024, 25(22), 12313; https://doi.org/10.3390/ijms252212313 (registering DOI) - 16 Nov 2024
Abstract
Marfan syndrome (MFS) is an autosomal dominant connective-tissue disorder affecting multiple systems, such as skeletal, cardiovascular, and ocular systems. MFS is predominantly caused by mutations in the FBN1 gene, which encodes the fibrillin-1 protein, crucial for connective-tissue integrity. FBN1 mutations lead to defective fibrillin, resulting in structurally compromised connective tissues. Additionally, these mutations cause aberrant TGF-β expression, contributing to vascular issues and increased susceptibility to radiation-induced fibrosis. Studies about the potential radiosensitivity of MFS are rare and generally limited to case reports. Here, we aimed to investigate the radiation-induced ATM nucleo-shuttling (RIANS) model to explore the molecular and cellular radiation response in fibroblasts from MFS patients. The results showed that the MFS fibroblast cell lines tested are associated with moderate but significant radiosensitivity, high yield of micronuclei, and impaired recognition of DNA double-strand breaks (DSBs) caused by a diminished RIANS. The diminished RIANS is supported by the sequestration of ATM protein in the cytoplasm not only by mutated FBN1 protein but also by overexpressed TGF-β. This report is the first molecular and cellular characterization of the radiation response of MFS fibroblasts and highlights the importance of the FBN1-TGF-β complex after irradiation. Full article
(This article belongs to the Section Biochemistry)
17 pages, 2888 KiB  
Article
Research on Fault Diagnosis of Agricultural IoT Sensors Based on Improved Dung Beetle Optimization–Support Vector Machine
by Sicheng Liang, Pingzeng Liu, Ziwen Zhang and Yong Wu
Sustainability 2024, 16(22), 10001; https://doi.org/10.3390/su162210001 (registering DOI) - 16 Nov 2024
Viewed by 82
Abstract
The accuracy of data perception in Internet of Things (IoT) systems is fundamental to achieving scientific decision-making and intelligent control. Given the frequent occurrence of sensor failures in complex environments, a rapid and accurate fault diagnosis and handling mechanism is crucial for ensuring the stable operation of the system. Addressing the challenges of insufficient feature extraction and sparse sample data that lead to low fault diagnosis accuracy, this study explores the construction of a fault diagnosis model tailored for agricultural sensors, with the aim of accurately identifying and analyzing various sensor fault modes, including but not limited to bias, drift, accuracy degradation, and complete failure. This study proposes an improved dung beetle optimization–support vector machine (IDBO-SVM) diagnostic model, leveraging the optimization capabilities of the former to finely tune the parameters of the Support Vector Machine (SVM) to enhance fault recognition under conditions of limited sample data. Case analyses were conducted using temperature and humidity sensors in air and soil, with comprehensive performance comparisons made against mainstream algorithms such as the Backpropagation (BP) neural network, Sparrow Search Algorithm–Support Vector Machine (SSA-SVM), and Elman neural network. The results demonstrate that the proposed model achieved an average diagnostic accuracy of 94.91%, significantly outperforming other comparative models. This finding fully validates the model’s potential in enhancing the stability and reliability of control systems. The research results not only provide new ideas and methods for fault diagnosis in IoT systems but also lay a foundation for achieving more precise, efficient intelligent control and scientific decision-making. Full article
Figures: 1. IoT sensing device; 2. Sensor fault waveform characteristics; 3. Performance comparison of optimization algorithms; 4. IDBO-SVM troubleshooting flow; 5. Confusion matrices for temperature, humidity, soil temperature, and soil humidity sensor fault prediction; 6. Fault diagnosis model accuracy comparison.
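To illustrate the core idea of IDBO-SVM (a metaheuristic tuning the SVM penalty and kernel parameters), the sketch below uses SciPy's differential evolution as a stand-in optimizer, since the improved dung beetle optimizer itself is not shown in the listing; the synthetic data and parameter bounds are assumptions.

# Sketch: tuning SVM (C, gamma) with a global optimizer, standing in for IDBO.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# X: fault features extracted from sensor signals, y: fault class labels (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 4, size=200)  # e.g. bias, drift, accuracy degradation, complete failure

def objective(params):
    C, gamma = 10.0 ** params          # search in log space
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return -cross_val_score(clf, X, y, cv=5).mean()  # maximize CV accuracy

result = differential_evolution(objective, bounds=[(-2, 3), (-4, 1)], seed=0, maxiter=30)
best_C, best_gamma = 10.0 ** result.x
print(f"best C={best_C:.3f}, gamma={best_gamma:.4f}, CV accuracy={-result.fun:.3f}")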
16 pages, 6692 KiB  
Article
Behavior Tracking and Analyses of Group-Housed Pigs Based on Improved ByteTrack
by Shuqin Tu, Haoxuan Ou, Liang Mao, Jiaying Du, Yuefei Cao and Weidian Chen
Animals 2024, 14(22), 3299; https://doi.org/10.3390/ani14223299 (registering DOI) - 16 Nov 2024
Viewed by 134
Abstract
Daily behavioral analysis of group-housed pigs provides critical insights into early warning systems for pig health issues and animal welfare in smart pig farming. In this study, our main objective was to develop an automated method for monitoring and analyzing the behavior of group-reared pigs to detect health problems and improve animal welfare promptly. We have developed the method named Pig-ByteTrack. Our approach addresses target detection, Multi-Object Tracking (MOT), and behavioral time computation for each pig. The YOLOX-X detection model is employed for pig detection and behavior recognition, followed by Pig-ByteTrack for tracking behavioral information. In 1 min videos, the Pig-ByteTrack algorithm achieved Higher Order Tracking Accuracy (HOTA) of 72.9%, Multi-Object Tracking Accuracy (MOTA) of 91.7%, identification F1 Score (IDF1) of 89.0%, and ID switches (IDs) of 41. Compared with ByteTrack and TransTrack, the Pig-ByteTrack achieved significant improvements in HOTA, IDF1, MOTA, and IDs. In 10 min videos, the Pig-ByteTrack achieved the results with 59.3% of HOTA, 89.6% of MOTA, 53.0% of IDF1, and 198 of IDs, respectively. Experiments on video datasets demonstrate the method’s efficacy in behavior recognition and tracking, offering technical support for health and welfare monitoring of pig herds. Full article
Figures: 1. Process diagram of tracking and behavioral time statistics for group-housed pigs; 2. Flow chart of the Pig-ByteTrack algorithm; 3. Flow chart of the Byte data association algorithm; 4. Comparison of tracking boxes between Pig-ByteTrack and ByteTrack; 5. Comparison of Pig-ByteTrack, ByteTrack, and TransTrack results on private datasets; 6. Visualized tracking results of Pig-ByteTrack, ByteTrack, and TransTrack; 7. Visualized tracking results of Pig-ByteTrack in the 10 min videos (red arrows indicate pigs with ID switches); 8. Pig behavior statistics for videos 14–17.
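The behavioral-time computation step described above can be sketched by accumulating per-pig time budgets from tracker output; the (frame, track_id, behavior) record format and the 25 fps frame rate are illustrative assumptions, not the paper's actual data structures.

# Sketch: per-pig behavior time statistics from multi-object tracking output.
from collections import defaultdict

FPS = 25  # assumed video frame rate

# Each record is (frame_index, track_id, behavior_label) as produced by a detector + tracker;
# the values below are made-up examples.
tracks = [
    (0, 1, "lying"), (1, 1, "lying"), (2, 1, "standing"),
    (0, 2, "eating"), (1, 2, "eating"), (2, 2, "eating"),
]

time_budget = defaultdict(lambda: defaultdict(int))  # track_id -> behavior -> frame count
for _, track_id, behavior in tracks:
    time_budget[track_id][behavior] += 1

for pig, behaviors in sorted(time_budget.items()):
    summary = ", ".join(f"{b}: {n / FPS:.2f}s" for b, n in behaviors.items())
    print(f"pig {pig}: {summary}")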
18 pages, 4049 KiB  
Article
Comparative Analysis of PGRP Family in Polymorphic Worker Castes of Solenopsis invicta
by Zhanpeng Zhu, Hongxin Wu, Liangjie Lin, Ao Li, Zehong Kang, Jie Zhang, Fengliang Jin and Xiaoxia Xu
Int. J. Mol. Sci. 2024, 25(22), 12289; https://doi.org/10.3390/ijms252212289 (registering DOI) - 15 Nov 2024
Viewed by 218
Abstract
Peptidoglycan recognition proteins (PGRPs) are a class of pattern recognition receptors (PRRs) that activate the innate immune system in response to microbial infection by detection of peptidoglycan, a distinct component of bacterial cell walls. Bioinformatic studies have revealed four PGRPs in the red imported fire ant Solenopsis invicta; nonetheless, the mechanism of the immune response of S. invicta induced by pathogens is still poorly understood. The peptidoglycan recognition protein full-length cDNA (designated as SiPGRP-S1/S2/S3/L) from S. invicta was used in this investigation. According to the sequencing analysis, there was a significant degree of homology between the anticipated amino acid sequence of SiPGRPs and other members of the PGRPs superfamily. Molecular docking studies demonstrated that SiPGRPs show strong binding affinity for a variety of PGN substrates. Additionally, tissue distribution analysis indicated that SiPGRPs are primarily expressed in several tissues of naïve larvae, including fat body, hemocytes, head, and thorax, as detected by quantitative real-time PCR (RT-qPCR). Microbial challenges resulted in variable changes in mRNA levels across different tissues. Furthermore, the antibacterial effects of antimicrobial peptides (AMPs) produced by major ants infected with Metarhizium anisopliae were assessed. These AMPs demonstrated inhibitory effects against M. anisopliae, Staphylococcus aureus, and Escherichia coli, with the most pronounced effect observed against E. coli. In conclusion, SiPGRPs act as pattern recognition receptors (PRRs) that identify pathogens and initiate the expression of AMPs in S. invicta, this mechanism contributes to the development of biopesticides designed for the targeted control of invasive agricultural pests. Full article
(This article belongs to the Collection Feature Papers in “Molecular Biology”)
18 pages, 5616 KiB  
Article
Hyperspectral Imaging Combined with Deep Learning for the Early Detection of Strawberry Leaf Gray Mold Disease
by Yunmeng Ou, Jingyi Yan, Zhiyan Liang and Baohua Zhang
Agronomy 2024, 14(11), 2694; https://doi.org/10.3390/agronomy14112694 - 15 Nov 2024
Viewed by 172
Abstract
The presence of gray mold can seriously affect the yield and quality of strawberries. Due to their susceptibility and the rapid spread of this disease, it is important to develop early, accurate, rapid, and non-destructive disease identification strategies. In this study, the early detection of strawberry leaf diseases was performed using hyperspectral imaging combining multi-dimensional features like spectral fingerprints and vegetation indices. Firstly, hyperspectral images of healthy and early affected leaves (24 h) were acquired using a hyperspectral imaging system. Then, spectral reflectance (616) and vegetation index (40) were extracted. Next, the CARS algorithm was used to extract spectral fingerprint features (17). Pearson correlation analysis combined with the SPA method was used to select five significant vegetation indices. Finally, we used five deep learning methods (LSTMs, CNNs, BPFs, and KNNs) to build disease detection models for strawberries based on individual and fusion characteristics. The results showed that the accuracy of the recognition model based on fused features ranged from 88.9% to 96.6%. The CNN recognition model based on fused features performed best, with a recognition accuracy of 96.6%. Overall, the fused feature-based model can reduce the dimensionality of the classification data and effectively improve the predicting accuracy and precision of the classification algorithm. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture—2nd Edition)
Figures: 1. Schematic diagram of the hyperspectral imaging system; 2. Healthy and gray mold leaves; 3. Flowchart of the work; 4. Spectral behaviors of different types of strawberry leaves; 5. Regression coefficients of each variable and spectral fingerprint feature distribution; 6. Correlation coefficients of the 40 vegetation indices; 7. The COSS of 21 VIs obtained by SPA; 8. Classification accuracy comparison of the models based on different input features; 9. Confusion matrices of the five models based on fused features.
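A minimal sketch of a classifier over the fused features described above (17 spectral fingerprint bands concatenated with 5 vegetation indices, giving 22 inputs) is given below; the layer sizes, training settings, and random placeholder data are assumptions, not the authors' reported architecture.

# Sketch: 1D CNN over fused spectral-fingerprint + vegetation-index features (22 inputs).
import numpy as np
import tensorflow as tf

n_features = 17 + 5          # CARS fingerprint bands + selected vegetation indices
X = np.random.rand(300, n_features, 1).astype("float32")   # placeholder fused features
y = np.random.randint(0, 2, size=300)                       # 0 = healthy, 1 = gray mold

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)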
11 pages, 1880 KiB  
Article
Development of a Real-Time Wearable Humming Detector Device
by Amine Mazouzi and Alexandre Campeau-Lecours
Sensors 2024, 24(22), 7296; https://doi.org/10.3390/s24227296 - 15 Nov 2024
Viewed by 209
Abstract
This study focuses on the development of a wearable real-time Humming Detector Device (HDD) aimed at enhancing the control of assistive devices through humming. As the need for portable user-friendly tools in assistive technology grows, the HDD offers a non-invasive solution to detect vocal cord vibrations. Vibrations, detected thanks to an accelerometer worn on the neck, are processed in real time using a Fast Fourier Transform (FFT) to identify specific humming frequencies, which are then translated into commands for controlling assistive devices via Bluetooth Low Energy (BLE) transmission. The device was tested with 13 healthy subjects to validate its potential and determine the optimal number of distinct commands that users can achieve. The HDD’s portability and precision make it a promising alternative to traditional voice recognition systems, particularly for individuals with speech impairments. Full article
(This article belongs to the Special Issue Wearable and Mobile Sensors and Data Processing—2nd Edition)
Figures: 1. Installation of the accelerometer collar around the neck; 2. System assembly of the humming detector device; 3. Functional scheme of the Humming Detector Device; 4. Time signal of the "Do" humming note; 5. Frequency spectrum of the "Do" humming note; 6. Fundamental and harmonic frequencies of each humming note obtained from trials; 7. RC car control with the humming detector device; 8. LED montage for humming tests; 9. Frequency of each LED lighting platform humming score.
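The core of the detector described above (FFT of the accelerometer window and mapping the dominant frequency to a command) can be sketched as follows; the sampling rate, frequency bands, and command names are assumptions for illustration only.

# Sketch: map the dominant humming frequency in an accelerometer window to a command.
import numpy as np

FS = 1000  # assumed accelerometer sampling rate in Hz

def dominant_frequency(window: np.ndarray) -> float:
    """Return the strongest frequency component of a 1-D signal window."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    spectrum[freqs < 60] = 0.0          # ignore DC and low-frequency motion artifacts
    return float(freqs[np.argmax(spectrum)])

# Hypothetical mapping of humming pitch ranges (Hz) to device commands.
COMMANDS = [((100, 140), "forward"), ((140, 180), "left"), ((180, 220), "right")]

def to_command(window: np.ndarray):
    f0 = dominant_frequency(window)
    for (lo, hi), cmd in COMMANDS:
        if lo <= f0 < hi:
            return cmd
    return None

# Example: a synthetic 160 Hz hum should map to "left".
t = np.arange(0, 0.5, 1.0 / FS)
print(to_command(np.sin(2 * np.pi * 160 * t)))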
11 pages, 7620 KiB  
Article
Ultrathin, Stretchable, and Twistable Ferroelectret Nanogenerator for Facial Muscle Detection
by Ziling Song, Xianfa Cai, Zhi Chen, Ziying Zhu, Yunqi Cao and Wei Li
Nanoenergy Adv. 2024, 4(4), 344-354; https://doi.org/10.3390/nanoenergyadv4040021 - 15 Nov 2024
Viewed by 379
Abstract
Ferroelectret nanogenerators (FENGs) have garnered attention due to their unique porous structure and excellent piezoelectric performance. However, most existing FENGs lack sufficient stretchability and flexibility, limiting their application in the field of wearable electronics. In this regard, we have focused on the development of an ultrathin, stretchable, and twistable ferroelectret nanogenerator (UST-FENG) based on Ecoflex, which is made up of graphene, Ecoflex, and anhydrous ethanol, with controllable pore shape and density. The UST-FENG has a thickness of only 860 µm, a fracture elongation rate of up to 574%, and a Young’s modulus of only 0.2 MPa, exhibiting outstanding thinness and excellent stretchability. Its quasi-static piezoelectric coefficient is approximately 38 pC/N. Utilizing this UST-FENG device can enable the recognition of facial muscle movements such as blinking and speaking, thereby helping to monitor people’s facial conditions and improve their quality of life. The successful application of the UST-FENG in facial muscle recognition represents an important step forward in the field of wearable systems for the human face. Full article
Figures: 1. Preparation process of the UST-FENG; 2. Structure diagram of the UST-FENG with SEM images of the graphene electrode and cross-section; 3. Simulation of the UST-FENG performance (potential, displacement, and stress distributions); 4. Piezoelectric signals generated by the UST-FENG under positive and reversed electrodes; 5. Characteristics of the UST-FENG (stress–strain curve, surface potential, quasi-static and dynamic d33); 6. Application of the UST-FENG in the detection of facial muscle movement (blink and speech signals).
18 pages, 12032 KiB  
Article
Advanced Modulation Formats for 400 Gbps Optical Networks and AI-Based Format Recognition
by Zhou He, Hao Huang, Fanjian Hu, Jiawei Gong, Binghua Shi, Jia Guo and Xiaoran Peng
Sensors 2024, 24(22), 7291; https://doi.org/10.3390/s24227291 - 14 Nov 2024
Viewed by 394
Abstract
The integration of communication and sensing (ICAS) in optical networks is an inevitable trend in building intelligent, multi-scenario, application-converged communication systems. However, due to the impact of nonlinear effects, co-fiber transmission of sensing signals and communication signals can cause interference to the communication signals, leading to an increased bit error rate (BER). This paper proposes a noncoherent solution based on the alternate polarization chirped return-to-zero frequency shift keying (Apol-CRZ-FSK) modulation format to realize a 4 × 100 Gbps dense wavelength division multiplexing (DWDM) optical network. Simulation results show that compared to traditional modulation formats, such as chirped return-to-zero frequency shift keying (CRZ-FSK) and differential quadrature phase shift keying (DQPSK), this solution demonstrates superior resistance to nonlinear effects, enabling longer transmission distances and better transmission performance. Moreover, to meet the transmission requirements and signal sensing and recognition needs in future optical networks, this study employs the Inception-ResNet-v2 convolutional neural network model to identify three modulation formats. Compared with six deep learning methods including AlexNet, ResNet50, GoogleNet, SqueezeNet, Inception-v4, and Xception, it achieves the highest performance. This research provides a low-cost, low-complexity, and high-performance solution for signal transmission and signal recognition in high-speed optical networks designed for integrated communication and sensing. Full article
(This article belongs to the Section Optical Sensors)
Figures: 1. Architecture of the 4 × 100 Gbps Apol-CRZ-FSK transmission system; 2. Spectral diagrams of the Apol-CRZ-FSK, CRZ-FSK, and DQPSK signals; 3–5. Relation among SMF length, Q-factor, and launch power for the four wavelength channels of Apol-CRZ-FSK, CRZ-FSK, and DQPSK transmission; 6. Performance comparison of the three signals at different distances; 7. Eye diagrams of the four-channel signals at 6 dBm launch power and 1500 km; 8. Model of the MFI method based on Inception-ResNet-v2; 9. Loss values for training and test sets; 10. MFI confusion matrices for training and testing sets; 11. Effect of training rounds, transmission distance, and signal-to-noise ratio on MFI; 12. Comparison of modulation format recognition methods (accuracy, precision, recall, F1 score).
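A minimal sketch of modulation-format identification with Keras' built-in Inception-ResNet-v2 backbone, classifying the three formats from image-like inputs (e.g. rendered spectra or eye diagrams), is given below; the input size, the random placeholder data, and the training settings are assumptions, not the paper's setup.

# Sketch: 3-class modulation format identification with an Inception-ResNet-v2 backbone.
import numpy as np
import tensorflow as tf

NUM_FORMATS = 3  # Apol-CRZ-FSK, CRZ-FSK, DQPSK

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights=None, input_shape=(299, 299, 3), pooling="avg"
)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_FORMATS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder images and labels standing in for real signal renderings.
X = np.random.rand(12, 299, 299, 3).astype("float32")
y = np.random.randint(0, NUM_FORMATS, size=12)
model.fit(X, y, epochs=1, batch_size=4, verbose=0)
print(model.predict(X[:1]).argmax(axis=1))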
25 pages, 17437 KiB  
Article
ACD-Net: An Abnormal Crew Detection Network for Complex Ship Scenarios
by Zhengbao Li, Heng Zhang, Ding Gao, Zewei Wu, Zheng Zhang and Libin Du
Sensors 2024, 24(22), 7288; https://doi.org/10.3390/s24227288 - 14 Nov 2024
Viewed by 174
Abstract
Abnormal behavior of crew members is an important cause of frequent ship safety accidents. The existing abnormal crew recognition algorithms are affected by complex ship environments and have low performance in real and open shipborne environments. This paper proposes an abnormal crew detection network for complex ship scenarios (ACD-Net), which uses a two-stage algorithm to detect and identify abnormal crew members in real-time. An improved YOLOv5s model based on a transformer and CBAM mechanism (YOLO-TRCA) is proposed with a C3-TransformerBlock module to enhance the feature extraction ability of crew members in complex scenes. The CBAM attention mechanism is introduced to reduce the interference of background features and improve the accuracy of real-time detection of crew abnormal behavior. The crew identification algorithm (CFA) tracks and detects abnormal crew members’ faces in real-time in an open environment (CenterFace), continuously conducts face quality assessment (Filter), and selects high-quality facial images for identity recognition (ArcFace). The CFA effectively reduces system computational overhead and improves the success rate of identity recognition. Experimental results indicate that ACD-Net achieves 92.3% accuracy in detecting abnormal behavior and a 69.6% matching rate for identity recognition, with a processing time of under 39.5 ms per frame at a 1080P resolution. Full article
(This article belongs to the Special Issue Human-Centric Sensing Technology and Systems: 2nd Edition)
Figures: 1. ACD-Net: abnormal crew detection network; 2. Four types of images with distinct features (uneven lighting, local over-/under-exposure or blurring, cluttered background with a small crew proportion, severe occlusion and overlap with equipment); 3–6. YOLOv5s feature visualization and recognition effect diagrams (original image, C3 before SPPF, SPPF, neck (PAN) inputs 1–3, detection result); 7. Structures of the C3 module, TransformerBlock, and C3-TransformerBlock module; 8. CBAM placement in the feature fusion network; 9. Comparison of loss function effects (IoU vs. CIoU); 10. Architecture of YOLO-TRCA; 11. CFA: crew identity recognition process; 12. Facial coordinate diagram; 13. Yaw rotation; 14. Diagram of the crew identity recognition process; 15. Partial images of the dataset (nolifevast, smoke, notrainlifevast, nocoat, lifevast); 16. Comparison of detection results between YOLOv5s and the proposed method; 17. Feature visualization of the network before and after adding CBAM; 18. Marine equipment layout diagram; 19. Algorithm effect and software design.
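The identity-matching step of the CFA described above (comparing an ArcFace-style embedding of the captured face against enrolled crew embeddings) can be illustrated with the cosine-similarity sketch below; the embedding size, threshold, enrolled names, and random vectors are assumptions, and the actual detector and embedder models are not shown.

# Sketch: matching a face embedding against an enrolled crew gallery by cosine similarity.
import numpy as np

EMBED_DIM = 512        # typical ArcFace embedding size (assumed)
MATCH_THRESHOLD = 0.45 # similarity threshold, would be tuned on validation data

rng = np.random.default_rng(1)
gallery = {                             # enrolled crew -> embedding (placeholder vectors)
    "crew_A": rng.normal(size=EMBED_DIM),
    "crew_B": rng.normal(size=EMBED_DIM),
}
gallery = {k: v / np.linalg.norm(v) for k, v in gallery.items()}  # L2-normalize

def identify(query_embedding: np.ndarray) -> str:
    q = query_embedding / np.linalg.norm(query_embedding)
    name, score = max(((k, float(q @ v)) for k, v in gallery.items()), key=lambda kv: kv[1])
    return name if score >= MATCH_THRESHOLD else "unknown"

# A noisy copy of crew_A's embedding should still be identified as crew_A.
probe = gallery["crew_A"] + 0.1 * rng.normal(size=EMBED_DIM)
print(identify(probe))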
20 pages, 4970 KiB  
Article
Revealing the Next Word and Character in Arabic: An Effective Blend of Long Short-Term Memory Networks and ARABERT
by Fawaz S. Al-Anzi and S. T. Bibin Shalini
Appl. Sci. 2024, 14(22), 10498; https://doi.org/10.3390/app142210498 - 14 Nov 2024
Viewed by 296
Abstract
Arabic raw audio datasets were initially gathered to produce a corresponding signal spectrum, which was further used to extract the Mel-Frequency Cepstral Coefficients (MFCCs). The pronunciation dictionary, language model, and acoustic model were further derived from the MFCCs’ features. These output data were processed into Baidu’s Deep Speech model (ASR system) to attain the text corpus. Baidu’s Deep Speech model was implemented to precisely identify the global optimal value rapidly while preserving a low word and character discrepancy rate by attaining an excellent performance in isolated and end-to-end speech recognition. The desired outcome in this work is to forecast the next word and character in a sequential and systematic order that applies under natural language processing (NLP). This work combines the trained Arabic language model ARABERT with the potential of Long Short-Term Memory (LSTM) networks to predict the next word and character in an Arabic text. We used the pre-trained ARABERT embedding to improve the model’s capacity and, to capture semantic relationships within the language, we educated LSTM + CNN and Markov models on Arabic text data to assess the efficacy of this model. Python libraries such as TensorFlow, Pickle, Keras, and NumPy were used to effectively design our development model. We extensively assessed the model’s performance using new Arabic text, focusing on evaluation metrics like accuracy, word error rate, character error rate, BLEU score, and perplexity. The results show how well the combined LSTM + ARABERT and Markov models have outperformed the baseline models in envisaging the next word or character in the Arabic text. The accuracy rates of 64.9% for LSTM, 74.6% for ARABERT + LSTM, and 78% for Markov chain models were achieved in predicting the next word, and the accuracy rates of 72% for LSTM, 72.22% for LSTM + CNN, and 73% for ARABERET + LSTM models were achieved for the next-character prediction. This work unveils a novelty in Arabic natural language processing tasks, estimating a potential future expansion in deriving a precise next-word and next-character forecasting, which can be an efficient utility for text generation and machine translation applications. Full article
Figures: 1. Baidu's Deep Speech Arabic representation; 2. Block diagram representation; 3. LSTM architecture; 4. Block diagram representation for next-character prediction; 5. Case 1: word-based prediction; 6. Case 2: character-based prediction.
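A minimal sketch of the LSTM next-word component described above (an embedding layer followed by an LSTM and a softmax over the vocabulary) is shown below with a toy vocabulary and random placeholder data; the ARABERT embeddings, tokenization, and Arabic corpus are not reproduced here.

# Sketch: next-word prediction with an embedding + LSTM + softmax head (toy setup).
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 5000   # assumed vocabulary size
SEQ_LEN = 10        # number of preceding tokens used as context

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),   # pre-trained ARABERT embeddings in the paper
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder training pairs: context token ids -> id of the next word.
X = np.random.randint(0, VOCAB_SIZE, size=(256, SEQ_LEN))
y = np.random.randint(0, VOCAB_SIZE, size=256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)

next_word_id = int(model.predict(X[:1]).argmax(axis=-1)[0])
print("predicted next-word id:", next_word_id)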
21 pages, 1235 KiB  
Article
Understanding the Normalization of Plantation Agriculture: The Case of Hass Avocado in Colombia
by Andres Suarez
Land 2024, 13(11), 1911; https://doi.org/10.3390/land13111911 - 14 Nov 2024
Viewed by 283
Abstract
Plantations are not inherently normal, yet they have been normalized within traditional agricultural landscapes. This is the premise through which we explore why plantations thrive despite numerous social and ecological drawbacks. Accordingly, the aim of this paper is to present a framework to elucidate why Hass avocado plantations succeed, using Salamina, Colombia as a case study. We argue that these plantations prosper through a process of normalization, driven by the dynamic interplay between social structures and human agency in agriculture. Our theoretical framework regarding normalization unfolds in three stages: prescription, implementation embeddedness, and integration. To reach this outcome, we first build a theoretical foundation based on realist social theory and subsequently conduct a primarily qualitative case study, focusing on neighboring respondents to plantations for understanding the process of introduction, development, and persistence of these plantations in the landscape. Additionally, we consider supplementary interviews and secondary information to understand the context of Hass avocado expansion. We found that while normalization may appear to involve passive conformity, our analysis highlights the critical role of human agency. As our study demonstrates, agency fosters reflection and sustains various forms of resistance and counterbalance against systemic pressures. This recognition underscores the potential for proactive engagement and transformative action within agricultural systems, challenging and reshaping the prevailing norms. Full article
Figures: 1. Normalization in the morphogenetic cycle; 2. Process of normalization (T: morphogenetic stages, N: normalization stages); 3. Study area and selected cases for the interviews; 4. Normalization process in Salamina (higher apprehension A1 at the novelty of the change versus lower apprehension A2 after normalization, with space for agency such as struggles and complaints).
25 pages, 22413 KiB  
Article
Fault Diagnosis Method for Hydropower Station Measurement and Control System Based on ISSA-VMD and 1DCNN-BiLSTM
by Lin Wang, Fangqing Zhang, Jiefei Wang, Gang Ren, Dengxian Wang, Ling Gao and Xingyu Ming
Energies 2024, 17(22), 5686; https://doi.org/10.3390/en17225686 - 14 Nov 2024
Viewed by 213
Abstract
Sudden failures of measurement and control circuits in hydropower plants may lead to unplanned shutdowns of generating units. Therefore, the diagnosis of hydropower station measurement and control system poses a great challenge. Existing fault diagnosis methods suffer from long fault identification time, inaccurate positioning, and low diagnostic efficiency. In order to improve the accuracy of fault diagnosis, this paper proposes a fault diagnosis method for hydropower station measurement and control system that combines variational modal decomposition (VMD), Pearson’s correlation coefficient, a one-dimensional convolutional neural network, and a bi-directional long and short-term memory network (1DCNN-BiLSTM). Firstly, the VMD parameters are optimised by the Improved Sparrow Search Algorithm (ISSA). Secondly, signal decomposition of the original fault signals is carried out by using ISSA-VMD, and meanwhile, the optimal intrinsic modal components (IMFs) are screened out by using Pearson’s correlation coefficient, and the optimal set of components is subjected to signal reconstruction in order to obtain the new signal sequences. Then, the 1DCNN-BiLSTM-based fault diagnosis model is proposed, which achieves accurate diagnosis of the faults of hydropower station measurement and control system. Finally, experimental verification reveals that, in comparison with other methods such as 1DCNN, BiLSTM, ELM, BP neural network, SVM, and DBN, the proposed approach in this paper achieves an exceptionally high average recognition accuracy of 99.8% in both simulation and example analysis. Additionally, it demonstrates faster convergence speed, indicating not only its superior diagnostic precision but also its high application value. Full article
Figures: 1. Flowchart of the ISSA algorithm; 2–7. Benchmarking functions f1–f6 and the optimization/convergence process of each algorithm; 8. Flowchart of ISSA optimization of the VMD parameters; 9. 1DCNN-BiLSTM model; 10. Analogue circuit fault diagnosis flow; 11. Quad OPAMP high-pass filter circuit; 12. Comparison of iterative convergence of the four methods; 13. Waveforms under fault conditions F0–F9; 14. ISSA-VMD signal decomposition results for F7; 15. IMF component correlation coefficient values; 16. Reconstructed signal waveform; 17. Training curves for the four-OPAMP high-pass filter circuit model; 18. Confusion matrix for the four-OPAMP high-pass filter circuit; 19. Results of different fault diagnosis models for the four-OPAMP high-pass filter circuit; 20. Fault diagnosis experiment platform of the hydropower station measurement and control system; 21. Power supply board of the hydropower station measurement and control system; 22. Measured output voltage waveform; 23. Comparison of iterative convergence of the four methods on platform data; 24. Model training curves for the power board card circuits; 25. Confusion matrix for the power board circuitry; 26. Results of different fault diagnosis models for the power board card circuits.
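The decomposition-and-screening stage described above (VMD of the fault signal, Pearson-correlation screening of IMFs, and reconstruction) can be sketched as below using the vmdpy package, one public VMD implementation that is not necessarily the one used by the authors; the VMD parameters, which the paper tunes with ISSA, are fixed placeholder values here, and the test signal is synthetic.

# Sketch: VMD decomposition, Pearson screening of IMFs, and signal reconstruction.
import numpy as np
from vmdpy import VMD

fs = 1000
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 80 * t) \
         + 0.2 * np.random.randn(len(t))

# Placeholder VMD parameters; alpha and K would be chosen by ISSA in the paper.
alpha, tau, K, DC, init, tol = 2000, 0.0, 5, 0, 1, 1e-7
imfs, _, _ = VMD(signal, alpha, tau, K, DC, init, tol)   # imfs: one row per mode

# Keep only IMFs strongly correlated with the original signal (threshold is illustrative).
kept = [imf for imf in imfs if abs(np.corrcoef(imf, signal[: imf.shape[0]])[0, 1]) > 0.3]
reconstructed = np.sum(kept, axis=0)
print(f"kept {len(kept)} of {K} IMFs")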
11 pages, 258 KiB  
Article
An Overview of Family and Community Nurse Specialists’ Employment Situation in Spain: A Qualitative Study
by Francisca Sánchez-Muñoz, Isabel María Fernández-Medina, María Isabel Ventura-Miranda, Ángela María Ortega-Galán, María del Mar Jiménez-Lasserrotte and María Dolores Ruíz-Fernández
Healthcare 2024, 12(22), 2268; https://doi.org/10.3390/healthcare12222268 - 14 Nov 2024
Viewed by 293
Abstract
Background: Family and Community Nurse specialists are advocates of a holistic model of care in multidisciplinary primary care teams. This study aims to describe the experiences and perceptions of nurses specialising in Family and Community Nursing regarding their working conditions in primary care in Spain. Methods: A qualitative descriptive study was conducted. Eighteen family and community specialist nurses from different autonomous communities in Spain participated. Individual interviews and a focus group were conducted. Results: The results identified two main themes: The current work situation of the Family and Community Nursing specialist and Support network and system of rejection with four sub-themes highlighting the lack of social and work recognition, the advantages of working with Family and Community Nursing specialists, systematic ambivalence towards Family and Community Nursing, and the need for institutional support. The inclusion of Family and Community Nursing specialists in primary care teams favours the nurse–patient bond, increases and/or maintains the quality of life of patients, and strengthens their empowerment; however, there is an absence of specific job vacancies. Conclusions: The institutional and social lack of awareness about the roles of Family and Community nurse practitioners and their impact on health care systems limits the quality of patient care in primary care. Full article
46 pages, 4014 KiB  
Article
Robust Human Activity Recognition for Intelligent Transportation Systems Using Smartphone Sensors: A Position-Independent Approach
by John Benedict Lazaro Bernardo, Attaphongse Taparugssanagorn, Hiroyuki Miyazaki, Bipun Man Pati and Ukesh Thapa
Appl. Sci. 2024, 14(22), 10461; https://doi.org/10.3390/app142210461 - 13 Nov 2024
Viewed by 680
Abstract
This study explores Human Activity Recognition (HAR) using smartphone sensors to address the challenges posed by position-dependent datasets. We propose a position-independent system that leverages data from accelerometers, gyroscopes, linear accelerometers, and gravity sensors collected from smartphones placed either on the chest or in the left/right leg pocket. The performance of traditional machine learning algorithms (Decision Trees (DT), K-Nearest Neighbors (KNN), Random Forest (RF), Support Vector Classifier (SVC), and XGBoost) is compared against deep learning models (Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Temporal Convolutional Networks (TCN), and Transformer models) under two sensor configurations. Our findings highlight that the Temporal Convolutional Network (TCN) model consistently outperforms other models, particularly in the four-sensor non-overlapping configuration, achieving the highest accuracy of 97.70%. Deep learning models such as LSTM, GRU, and Transformer also demonstrate strong performance, showcasing their effectiveness in capturing temporal dependencies in HAR tasks. Traditional machine learning models, including RF and XGBoost, provide reasonable performance but do not match the accuracy of deep learning models. Additionally, incorporating data from linear accelerometers and gravity sensors led to slight improvements over using accelerometer and gyroscope data alone. This research enhances the recognition of passenger behaviors for intelligent transportation systems, contributing to more efficient congestion management and emergency response strategies. Full article
Figures: 1–4. Running-activity accelerometer, gyroscope, linear accelerometer, and gravity sensor data along the x, y, and z axes (80–85 s); 5. Methodological framework for assessing machine learning and deep learning techniques; 6. GRU architecture; 7. LSTM architecture; 8. TCN architecture with dilated causal convolutions; 9. Transformer architecture with multi-head attention and positional encoding; 10–13. Confusion matrices for DT, KNN, RF, SVC, XGBoost, GRU, LSTM, TCN, and Transformer under the two- and four-sensor configurations with non-overlapping and 50% overlapping segments.
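The best-performing model class reported above (a Temporal Convolutional Network built from dilated causal 1D convolutions) can be sketched in Keras as below; the window length, channel counts, six activity classes, and random placeholder data are assumptions rather than the paper's exact configuration.

# Sketch: a small TCN-style classifier from dilated causal Conv1D blocks for HAR windows.
import numpy as np
import tensorflow as tf

WINDOW = 128      # samples per window (assumed)
CHANNELS = 12     # 4 sensors x 3 axes
NUM_CLASSES = 6   # assumed number of activities

inputs = tf.keras.layers.Input(shape=(WINDOW, CHANNELS))
x = inputs
for dilation in (1, 2, 4, 8):   # growing dilation enlarges the receptive field
    x = tf.keras.layers.Conv1D(64, kernel_size=3, padding="causal",
                               dilation_rate=dilation, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder sensor windows and labels.
X = np.random.rand(64, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=64)
model.fit(X, y, epochs=1, batch_size=16, verbose=0)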