
Sensors, Volume 24, Issue 14 (July-2 2024) – 336 articles

Cover Story (view full-size image): The Internet of Things (IoT) is being increasingly integrated into our daily lives through interconnected smart sensors that operate autonomously, especially in remote areas and resource-limited environments. However, rechargeable batteries, which constitute the primary energy source of these devices, often have limited capacity. Ultra-low power techniques aim at prolonging the overall sensor network lifetime by yielding significant energy savings. In addition, the integration of these techniques with energy harvesting allows IoT devices to function with a virtually infinite operational lifetime, eliminating the need for traditional batteries. These two strategies combined create a vision for a more sustainable IoT infrastructure. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
14 pages, 2945 KiB  
Article
Security in Transformer Visual Trackers: A Case Study on the Adversarial Robustness of Two Models
by Peng Ye, Yuanfang Chen, Sihang Ma, Feng Xue, Noel Crespi, Xiaohan Chen and Xing Fang
Sensors 2024, 24(14), 4761; https://doi.org/10.3390/s24144761 - 22 Jul 2024
Viewed by 616
Abstract
Visual object tracking is an important technology in camera-based sensor networks and has wide applicability in autonomous driving systems. A transformer is a deep learning model that adopts the self-attention mechanism, differentially weighting the significance of each part of the input data, and it has been widely applied to visual tracking. Unfortunately, the security of the transformer model remains unclear, which exposes transformer-based applications to security threats. In this work, the security of the transformer model was investigated in the context of an important component of autonomous driving, i.e., visual tracking. Such deep-learning-based visual tracking is vulnerable to adversarial attacks, so adversarial attacks were implemented as the security threats for this investigation. First, adversarial examples were generated on top of video sequences to degrade tracking performance, taking frame-by-frame temporal motion into consideration when generating perturbations over the tracking results. Then, the influence of the perturbations on performance was sequentially investigated and analyzed. Finally, extensive experiments on the OTB100, VOT2018, and GOT-10k data sets demonstrated that the generated adversarial examples effectively degraded the performance of transformer-based visual tracking; white-box attacks were the most effective, with attack success rates exceeding 90% against transformer-based trackers. Full article
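For readers who want a concrete picture of the attack mechanism summarized in this abstract, the following minimal sketch shows an iterative, gradient-sign perturbation of a video frame that is warm-started from the previous frame's perturbation. It is not the paper's RTAA implementation: the tracker_loss placeholder, the step size, and the perturbation budget are illustrative assumptions only.

```python
# Hedged sketch: iterative gradient-based perturbation of a frame to degrade a
# tracker. `tracker_loss` is a hypothetical stand-in for a real tracking loss
# (e.g., the confidence + box-regression terms of a transformer tracker).
import torch

def tracker_loss(frame: torch.Tensor) -> torch.Tensor:
    # Placeholder: a real attack would run the tracker on `frame` and combine
    # the classification/regression losses of the predicted bounding box.
    return frame.mean()

def attack_frame(frame, prev_delta=None, steps=10, alpha=1.0, eps=8.0):
    """Craft an additive perturbation, warm-started from the previous frame's
    perturbation to exploit frame-by-frame temporal motion."""
    delta = prev_delta.clone() if prev_delta is not None else torch.zeros_like(frame)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = tracker_loss(frame + delta)
        loss.backward()
        with torch.no_grad():
            delta = delta + alpha * delta.grad.sign()  # ascend the tracking loss
            delta = delta.clamp(-eps, eps)             # keep an L_inf budget
    return delta.detach()

frame = torch.rand(1, 3, 256, 256) * 255
delta = attack_frame(frame)
adv_frame = (frame + delta).clamp(0, 255)              # adversarial frame
```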
(This article belongs to the Special Issue Advances in Automated Driving: Sensing and Control)
Show Figures

Figure 1. The adversarial attack, RTAA, applied to two transformer-model-based trackers (TransT [20] and MixFormer [21]). The TransT tracker effectively located targets in the original video sequences; MixFormer exploits the flexibility of attention operations, using a mixed attention module for simultaneous feature extraction and target information integration. (a) The original tracking results; (b) results under the RTAA attack, where the TransT and MixFormer trackers output incorrect bounding boxes and track the wrong targets.
Figure 2. The adversarial attack flowchart for transformer trackers, divided into two categories: gradient-descent-based attacks and generator-based attacks, covering three attack types: cooling-shrinking, IoU, and RTAA attacks. In the gradient-descent-based part of the figure, Δx represents the perturbation interpolated between frames, and T represents the number of iterations.
Figure 3. Evaluation results of trackers with and without adversarial attacks on the OTB2015 dataset.
Figure 4. Quantitative analysis of different attributes on the VOT2018 dataset.
Figure 5. Evaluation results of trackers with and without adversarial attacks on the GOT-10k dataset.
23 pages, 7913 KiB  
Article
A Dual-Branch Fusion of a Graph Convolutional Network and a Convolutional Neural Network for Hyperspectral Image Classification
by Pan Yang and Xinxin Zhang
Sensors 2024, 24(14), 4760; https://doi.org/10.3390/s24144760 - 22 Jul 2024
Viewed by 530
Abstract
Semi-supervised graph convolutional networks (SSGCNs) have been proven to be effective in hyperspectral image classification (HSIC). However, limited training data and spectral uncertainty restrict the classification performance, and the computational demands of a graph convolution network (GCN) present challenges for real-time applications. To overcome these issues, a dual-branch fusion of a GCN and convolutional neural network (DFGCN) is proposed for HSIC tasks. The GCN branch uses an adaptive multi-scale superpixel segmentation method to build fusion adjacency matrices at various scales, which improves the graph convolution efficiency and node representations. Additionally, a spectral feature enhancement module (SFEM) enhances the transmission of crucial channel information between the two graph convolutions. Meanwhile, the CNN branch uses a convolutional network with an attention mechanism to focus on detailed features of local areas. By combining the multi-scale superpixel features from the GCN branch and the local pixel features from the CNN branch, this method leverages complementary features to fully learn rich spatial–spectral information. Our experimental results demonstrate that the proposed method outperforms existing advanced approaches in terms of classification efficiency and accuracy across three benchmark data sets. Full article
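As background for the GCN branch described above, the sketch below shows the standard normalized graph-convolution propagation rule applied to a toy superpixel adjacency matrix. The shapes, random values, and single-layer setup are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a graph-convolution step: H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W),
# where A is a (toy) superpixel adjacency matrix and H holds node features.
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = (np.random.rand(6, 6) > 0.5).astype(float)  # toy adjacency for 6 superpixels
A = np.triu(A, 1); A = A + A.T
H = np.random.rand(6, 4)                        # 6 nodes, 4 spectral features each
W = np.random.rand(4, 8)                        # learnable weight matrix
print(gcn_layer(A, H, W).shape)                 # (6, 8)
```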
Show Figures

Figure 1. An outline of the proposed DFGCN for HSIC. It consists of two branches: a GCN branch based on multi-scale superpixel segmentation and a CNN branch with an attention mechanism.
Figure 2. Segmentation maps acquired from the Indian Pines data set using the first principal component (PC) and adaptive multi-scale superpixel segmentation, with different superpixel numbers at varying scales: (a) first PC, (b) 262, (c) 525, and (d) 1051.
Figure 3. Implementation of the proposed spectral feature enhancement module.
Figure 4. Structural diagram of the SE attention mechanism.
Figure 5. Indian Pines: (a) false-color synthetic image; (b) ground truth.
Figure 6. Pavia University: (a) false-color synthetic image; (b) ground truth.
Figure 7. Kennedy Space Center: (a) false-color synthetic image; (b) ground truth.
Figure 8. Impact of the parameter α and the spectral feature enhancement module.
Figure 9. Classification performance with varying values of γ in the spectral feature enhancement.
Figure 10. Classification maps produced from the IndianP data set by various methods in an optimal close-up view: (a) original image; (b) ground truth; (c) RBF-SVM; (d) 2-D-CNN; (e) 3-D-CNN; (f) GCN; (g) S²GCN; (h) MDGCN; (i) SGML; and (j) DFGCN.
Figure 11. Classification maps produced from the PaviaU data set by various methods in an optimal close-up view: (a) original image; (b) ground truth; (c) RBF-SVM; (d) 2-D-CNN; (e) 3-D-CNN; (f) GCN; (g) S²GCN; (h) MDGCN; (i) SGML; and (j) DFGCN.
Figure 12. Classification maps produced from the KSC data set by various methods in an optimal close-up view: (a) original image; (b) ground truth; (c) RBF-SVM; (d) 2-D-CNN; (e) 3-D-CNN; (f) GCN; (g) S²GCN; (h) MDGCN; (i) SGML; and (j) DFGCN.
Figure 13. ROC curves and AUC values for each category in the PU data set.
Figure 14. Multi-scale channel importance visualization on the PaviaU data set. (a–c) are weight visualizations at different scales. The red boxes indicate ranges in which the importance of superpixels remains highly consistent across all channels at different scales, while the blue, orange, and green boxes indicate ranges in which the importance of all superpixels is almost the same across channels.
Figure 15. Visualization of features from the IndianP, PaviaU, and KSC data sets using 2-D t-SNE. (a–c) are the original feature spaces of labeled samples and (d–f) are the data distributions of labeled samples in the graph convolution feature space. Classes are represented by different colors.
13 pages, 5764 KiB  
Article
Effects of Fatigue and Unanticipated Factors on Knee Joint Biomechanics in Female Basketball Players during Cutting
by Aojie Zhu, Shunxiang Gao, Li Huang, Hairong Chen, Qiaolin Zhang, Dong Sun and Yaodong Gu
Sensors 2024, 24(14), 4759; https://doi.org/10.3390/s24144759 - 22 Jul 2024
Viewed by 602
Abstract
(1) This study examined the impact of fatigue and unanticipated factors on knee biomechanics during sidestep cutting and lateral shuffling in female basketball players, assessing the potential for non-contact anterior cruciate ligament (ACL) injuries. (2) Twenty-four female basketball players underwent fatigue induction and unanticipated change of direction tests, and kinematic and kinetic parameters were collected before and after fatigue with a Vicon motion capture system and Kistler ground reaction force (GRF) sensor. (3) Analysis using two-way repeated-measures ANOVA showed no significant interaction between fatigue and unanticipated factors on joint kinematics and kinetics. Unanticipated conditions significantly increased the knee joint flexion and extension angle (p < 0.01), decreased the knee flexion moment under anticipated conditions, and increased the knee valgus moment after fatigue (p ≤ 0.05). One-dimensional statistical parametric mapping (SPM1d) results indicated significant differences in GRF during sidestep cutting and knee inversion and rotation moments during lateral shuffling post-fatigue. (4) Unanticipated factors had a greater impact on knee load patterns, raising ACL injury risk. Fatigue and unanticipated factors were independent risk factors and should be considered separately in training programs to prevent lower limb injuries. Full article
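The analysis described above rests on a 2 × 2 repeated-measures design (fatigue × anticipation). The sketch below illustrates that kind of test on synthetic data; the factor levels, column names, and effect sizes are purely illustrative, and the study's SPM1d waveform analysis is not reproduced here.

```python
# Hedged sketch of a 2 x 2 repeated-measures ANOVA (fatigue x anticipation)
# on synthetic knee-flexion values; not the study's actual data or pipeline.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(24):
    for fatigue in ("pre", "post"):
        for cond in ("anticipated", "unanticipated"):
            flexion = 45 + (5 if cond == "unanticipated" else 0) + rng.normal(0, 3)
            rows.append({"subject": subj, "fatigue": fatigue,
                         "condition": cond, "knee_flexion": flexion})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="knee_flexion", subject="subject",
              within=["fatigue", "condition"]).fit()
print(res.anova_table)   # main effects and fatigue x condition interaction
```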
(This article belongs to the Special Issue Sensor Techniques and Methods for Sports Science)
Show Figures

Figure 1. (a) Experimental procedure. (b) The types of sidestep cutting and lateral shuffling in this study; the red arrows represent the ground reaction forces. (c) Schematic diagram of the experiment; the red arrows represent the ground reaction forces and the black arrow represents the movement trajectory.
Figure 2. Knee joint moments in the sagittal, frontal, and transverse planes during the stance phase in sidestep cutting and lateral shuffling. The black dashed line represents a value of 0. Effect A is fatigue, effect B is unanticipated factors, and significant main effects (p < 0.05) from the SPM1d analyses are marked by black horizontal bars at the bottom of the figure during the corresponding periods.
Figure 3. The ground reaction force curves during sidestep cutting and lateral shuffling. The black dashed line represents a value of 0. Effect A is fatigue, effect B is unanticipated factors, and significant main effects (p < 0.05) from the SPM1d analyses are marked by black horizontal bars at the bottom of the figure during the corresponding periods.
12 pages, 1120 KiB  
Article
Acute Response of Different High-Intensity Interval Training Protocols on Cardiac Auto-Regulation Using Wearable Device
by Myong-Won Seo
Sensors 2024, 24(14), 4758; https://doi.org/10.3390/s24144758 - 22 Jul 2024
Viewed by 557
Abstract
The purpose of this study was to compare different high-intensity interval training (HIIT) protocols with different lengths of work and rest times for a single session (all three had identical work-to-rest ratios and exercise intensities) for cardiac auto-regulation using a wearable device. With a randomized counter-balanced crossover, 13 physically active young male adults (age: 19.4 years, BMI: 21.9 kg/m2) were included. The HIIT included a warm-up of at least 5 min and three protocols of 10 s/50 s (20 sets), 20 s/100 s (10 sets), and 40 s/200 s (5 sets), with intensities ranging from 115 to 130% Wattmax. Cardiac auto-regulation was measured using a non-invasive method and a wearable device, including HRV and vascular function. Immediately after the HIIT session, the 40 s/200 s protocol produced the most intense stimulation in R-R interval (Δ-33.5%), ln low-frequency domain (Δ-42.6%), ln high-frequency domain (Δ-73.4%), and ln LF/HF ratio (Δ416.7%, all p < 0.05) compared to other protocols of 10 s/50 s and 20 s/100 s. The post-exercise hypotension in the bilateral ankle area was observed in the 40 s/200 s protocol only at 5 min after HIIT (right: Δ-12.2%, left: Δ-12.6%, all p < 0.05). This study confirmed that a longer work time might be more effective in stimulating cardiac auto-regulation using a wearable device, despite identical work-to-rest ratios and exercise intensity. Additional studies with 24 h measurements of cardiac autoregulation using wearable devices in response to various HIIT protocols are warranted. Full article
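For context on the HRV indices quoted above (ln LF, ln HF, and the LF/HF ratio), the following sketch derives them from an R-R interval series using a resampled tachogram and a Welch power spectrum. The band limits follow common HRV practice, and the synthetic intervals and resampling rate are illustrative assumptions, not the study's wearable pipeline.

```python
# Hedged sketch: frequency-domain HRV indices from R-R intervals.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

rr = np.random.normal(0.8, 0.05, 300)            # synthetic R-R intervals [s]
t = np.cumsum(rr)                                # beat times
fs = 4.0                                         # resampling frequency [Hz]
t_even = np.arange(t[0], t[-1], 1.0 / fs)
rr_even = interp1d(t, rr, kind="cubic")(t_even)  # evenly sampled tachogram

f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
lf_band = (f >= 0.04) & (f < 0.15)               # low-frequency band
hf_band = (f >= 0.15) & (f < 0.40)               # high-frequency band
lf = np.trapz(pxx[lf_band], f[lf_band])
hf = np.trapz(pxx[hf_band], f[hf_band])
print(np.log(lf), np.log(hf), np.log(lf / hf))   # ln LF, ln HF, ln LF/HF
```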
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1. Acute cardiac auto-regulation responses to the different HIIT protocols. Note—SBP: systolic blood pressure, DBP: diastolic blood pressure; * significant difference among the groups, ** p < 0.01, *** p < 0.001.
30 pages, 8488 KiB  
Article
Improving Rebar Twist Prediction Exploiting Unified-Channel Attention-Based Image Restoration and Regression Techniques
by Jong-Chan Park and Gun-Woo Kim
Sensors 2024, 24(14), 4757; https://doi.org/10.3390/s24144757 - 22 Jul 2024
Viewed by 679
Abstract
Recent research has made significant progress in automated unmanned systems utilizing Artificial Intelligence (AI)-based image processing to optimize the rebar manufacturing process and minimize defects such as twisting during production. Despite various studies, including those employing data augmentation through Generative Adversarial Networks (GANs), the performance of rebar twist prediction has been limited due to image quality degradation caused by environmental noise, such as insufficient image quality and inconsistent lighting conditions in rebar processing environments. To address these challenges, we propose a novel approach for real-time rebar twist prediction in manufacturing processes. Our method involves restoring low-quality grayscale images to high resolution and employing an object detection model to identify and track rebar endpoints. We then apply regression analysis to the coordinates obtained from the bounding boxes to estimate the error rate of the rebar endpoint positions, thereby determining the occurrence of twisting. To achieve this, we first developed a Unified-Channel Attention (UCA) module that is robust to changes in intensity and contrast for grayscale images. The UCA can be integrated into image restoration models to more accurately detect rebar endpoint characteristics in object detection models. Furthermore, we introduce a method for predicting the future positions of rebar endpoints using various linear and non-linear regression models. The predicted positions are used to calculate the error rate in rebar endpoint locations, determined by the distance between the actual and predicted positions, which is then used to classify the presence of rebar twisting. Our experimental results demonstrate that integrating the UCA module with our image restoration model significantly improved existing models in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) metrics. Moreover, employing regression models to predict future rebar endpoint positions enhances the F1 score for twist prediction. As a result, our approach offers a practical solution for rapid defect detection in rebar manufacturing processes. Full article
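To make the prediction step described above more concrete, the sketch below fits a regression to recent rebar-endpoint coordinates, extrapolates the next position, and flags a twist when the error rate exceeds a threshold. The polynomial degree, the normalization by grid-cell size, and the 5% threshold are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: regression-based prediction of the rebar endpoint and a simple
# twist decision from the actual-vs-predicted displacement.
import numpy as np

def predict_next(xs: np.ndarray, ys: np.ndarray, degree: int = 2):
    """Extrapolate the next endpoint from recent detections (polynomial fit)."""
    t = np.arange(len(xs))
    px = np.polyfit(t, xs, degree)
    py = np.polyfit(t, ys, degree)
    t_next = len(xs)
    return np.polyval(px, t_next), np.polyval(py, t_next)

def is_twisted(pred, actual, cell_size: float, threshold: float = 0.05) -> bool:
    """Error rate = endpoint displacement normalized by the grid-cell size (assumption)."""
    err = np.hypot(pred[0] - actual[0], pred[1] - actual[1]) / cell_size
    return err > threshold

xs = np.array([100.0, 101.2, 102.1, 103.4, 104.0])   # tracked endpoint x [px]
ys = np.array([200.0, 200.3, 200.1, 200.6, 200.4])   # tracked endpoint y [px]
pred = predict_next(xs, ys)
print(is_twisted(pred, actual=(105.0, 208.0), cell_size=40.0))
```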
(This article belongs to the Special Issue Deep-Learning-Based Defect Detection for Smart Manufacturing)
Show Figures

Figure 1. Traditional rebar processing site.
Figure 2. (a) Installation of a machine vision camera; (b) real-time calibration of rebar twist using grid cells. Detecting the rebar endpoint at the 60 cm mark from the extrusion point is crucial for predicting its future location at the 80 cm mark. Within the grid cell, black dots and green rectangular boundary boxes signify normal rebar positions; any deviation of the rebar from this boundary box is marked by a red dot within the grid cells and is indicative of twisting.
Figure 3. Overview of the proposed technique.
Figure 4. Twenty-seven videos of the rebar extrusion process under different lighting conditions: (a) standard brightness (brightness level ~64, 200 W LED, 10 ms exposure); (b) medium brightness (brightness level ~96, 200 W LED, 15 ms exposure); (c) high brightness (brightness level ~128, 200 W LED, 20 ms exposure). Each row within a lighting condition shows abnormal (top and middle) and normal (bottom) rebar extrusion processes.
Figure 5. Image samples used in the restoration process: (top) 416 × 416 original high-resolution rebar images extracted from videos; (bottom) 104 × 104 low-resolution rebar images downscaled using bicubic interpolation.
Figure 6. The overall architecture of UCA-SRGAN.
Figure 7. Architecture of the Unified-Channel Attention (UCA) module.
Figure 8. The architecture of YOLOv5s for detecting rebar endpoints.
Figure 9. Application of linear and non-linear regression models for predicting the rebar endpoint location: (a) linear regression applied to linear data with minimal environmental noise; (b) non-linear regression applied to non-linear data with environmental noise.
Figure 10. Visualization of grid cells for twist detection: (a) normal and (b) twist, displaying actual center coordinates and the 5% threshold.
Figure 11. Results of traditional image processing techniques: (a) edge detection and line segment detection based on the Canny edge detector, (b) rebar endpoint detection using the Hough transform, (c) background subtraction and real-time tracking of the center coordinates of the moving area.
Figure 12. Change in F1 score for threshold values ranging from 1% to 20%.
Figure 13. Comparative results of images generated by the image restoration models: CARN, RCAN, SRGAN, CBAM + SRGAN, and the proposed model.
Figure 14. YOLOv5 training performance graphs for datasets generated by the proposed image restoration model: (a) precision and F1 graphs for standard brightness resolution, (b) precision and F1 graphs for medium brightness resolution, (c) precision and F1 graphs for high brightness resolution.
Figure 15. Visualizations of the detected rebar endpoints for each dataset.
17 pages, 7063 KiB  
Article
Online Scene Semantic Understanding Based on Sparsely Correlated Network for AR
by Qianqian Wang, Junhao Song, Chenxi Du and Chen Wang
Sensors 2024, 24(14), 4756; https://doi.org/10.3390/s24144756 - 22 Jul 2024
Viewed by 553
Abstract
Real-world understanding serves as a medium that bridges the information world and the physical world, enabling the realization of virtual–real mapping and interaction. However, scene understanding based solely on 2D images faces problems such as a lack of geometric information and limited robustness against occlusion. The depth sensor brings new opportunities, but there are still challenges in fusing depth with geometric and semantic priors. To address these concerns, our method considers the repeatability of video stream data and the sparsity of newly generated data. We introduce a sparsely correlated network architecture (SCN) designed explicitly for online RGBD instance segmentation. Additionally, we leverage the power of object-level RGB-D SLAM systems, thereby transcending the limitations of conventional approaches that solely emphasize geometry or semantics. We establish correlation over time and leverage this correlation to develop rules and generate sparse data. We thoroughly evaluate the system’s performance on the NYU Depth V2 and ScanNet V2 datasets, demonstrating that incorporating frame-to-frame correlation leads to significantly improved accuracy and consistency in instance segmentation compared to existing state-of-the-art alternatives. Moreover, using sparse data reduces data complexity while ensuring the real-time requirement of 18 fps. Furthermore, by utilizing prior knowledge of object layout understanding, we showcase a promising application of augmented reality, showcasing its potential and practicality. Full article
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Cameras and Multi-sensors)
Show Figures

Figure 1. Overview of our object-level RGBD SLAM based on SCN. The input RGBD data stream is used to achieve 3D semantic model reconstruction in three steps. Step 1: frame-by-frame camera localization is performed. Step 2: camera positions and image data are fed into the SCN network to obtain per-pixel object semantic information. Step 3: the current-time semantics are transformed into global semantics.
Figure 2. ResNet blocks for 2D feature learning.
Figure 3. Image differences at different time intervals.
Figure 4. Specific rules for sparse data convolution.
Figure 5. The detailed architecture of the Sparse Correlated Network.
Figure 6. Comparison of semantic segmentation accuracy on the NYU Depth dataset.
Figure 7. Comparison of semantic segmentation accuracy on the ScanNet dataset.
Figure 8. Instance segmentation results obtained with our network. The integrated geometric model, semantic map, and instance map are displayed from left to right.
Figure 9. Online 3D semantic model generation based on camera motion.
Figure 10. AR applications that combine 3D instance mapping with virtual objects.
Figure 11. Virtual–real substitution based on semantic association rules.
19 pages, 7121 KiB  
Article
Sensor-Fused Nighttime System for Enhanced Pedestrian Detection in ADAS and Autonomous Vehicles
by Jungme Park, Bharath Kumar Thota and Karthik Somashekar
Sensors 2024, 24(14), 4755; https://doi.org/10.3390/s24144755 - 22 Jul 2024
Viewed by 822
Abstract
Ensuring a safe nighttime environmental perception system relies on the early detection of vulnerable road users with minimal delay and high precision. This paper presents a sensor-fused nighttime environmental perception system by integrating data from thermal and RGB cameras. A new alignment algorithm is proposed to fuse the data from the two camera sensors. The proposed alignment procedure is crucial for effective sensor fusion. To develop a robust Deep Neural Network (DNN) system, nighttime thermal and RGB images were collected under various scenarios, creating a labeled dataset of 32,000 image pairs. Three fusion techniques were explored using transfer learning, alongside two single-sensor models using only RGB or thermal data. Five DNN models were developed and evaluated, with experimental results showing superior performance of fused models over non-fusion counterparts. The late-fusion system was selected for its optimal balance of accuracy and response time. For real-time inferencing, the best model was further optimized, achieving 33 fps on the embedded edge computing device, an 83.33% improvement in inference speed over the system without optimization. These findings are valuable for advancing Advanced Driver Assistance Systems (ADASs) and autonomous vehicle technologies, enhancing pedestrian detection during nighttime to improve road safety and reduce accidents. Full article
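As a concrete illustration of the weighted-sum early-fusion idea mentioned in this abstract, the sketch below blends an already-aligned RGB image with a single-channel IR image at the IR resolution. The alignment step is assumed to have been done beforehand, and the 0.5/0.5 weights are illustrative rather than the paper's tuned values.

```python
# Hedged sketch: weighted-sum early fusion of aligned RGB and IR images.
import numpy as np

def weighted_sum_fusion(rgb_aligned: np.ndarray, ir: np.ndarray,
                        w_rgb: float = 0.5, w_ir: float = 0.5) -> np.ndarray:
    """Fuse an aligned RGB image with a single-channel IR image."""
    ir3 = np.repeat(ir[..., None], 3, axis=2)            # broadcast IR to 3 channels
    fused = w_rgb * rgb_aligned.astype(np.float32) + w_ir * ir3.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

rgb_aligned = np.random.randint(0, 256, (512, 640, 3), dtype=np.uint8)  # toy aligned RGB
ir = np.random.randint(0, 256, (512, 640), dtype=np.uint8)              # toy IR frame
print(weighted_sum_fusion(rgb_aligned, ir).shape)                       # (512, 640, 3)
```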
(This article belongs to the Special Issue Sensors and Sensor Fusion in Autonomous Vehicles)
Show Figures

Figure 1. (a) A 960 by 540 RGB image in which the third person in the back is not captured by the RGB camera due to low lighting conditions; (b) a 640 by 512 IR thermal image of the same scene.
Figure 2. The overall architecture of the proposed nighttime pedestrian detection system: sensors mounted on the vehicle are indicated by the red circle, and detection results are presented using red bounding boxes.
Figure 3. (a) Hardware setup on the testing vehicle; (b) captured images with different FOVs and resolutions.
Figure 4. The proposed image alignment method: (a) an original RGB image with dimensions of 960 × 540 pixels and a field of view (FOV) of 78°; (b) an original IR image sized at 640 × 512 pixels with a FOV of 50°; (c) a resized and translated RGB image; and (d) the aligned RGB image corresponding to the IR image.
Figure 5. Comparison of image alignment results: (a) an original RGB image with dimensions of 960 × 540 pixels and a FOV of 78°; (b) an original IR image sized at 640 × 512 pixels with a FOV of 50°; (c) the RGB image aligned using the registration method; and (d) the RGB image aligned using the proposed method.
Figure 6. Public datasets for nighttime pedestrian detection: (a) an example from the KAIST dataset with poor IR image quality; (b) an example from the LLVIP dataset.
Figure 7. The Kettering dataset collected by the authors: (a) pedestrian crossing; (b) urban driving scenario.
Figure 8. Three different sensor-fusion methods: (a) early fusion; (b) mid fusion; (c) late fusion.
Figure 9. The weighted-sum method for early fusion: (a) the person in the green box is captured in the IR image even in low lighting conditions; (b) the person in the green box is not captured in the RGB image due to low lighting conditions; (c) the person in the green box is captured in the weighted-sum image obtained by fusing the IR and RGB images.
Figure 10. Training of the early-fusion model using transfer learning.
Figure 11. The architecture of the late-fusion model.
Figure 12. The NMS procedure for late fusion.
Figure 13. Training of the mid-fusion model using transfer learning.
Figure 14. Nighttime testing example 1: (a) due to the low lighting conditions, two pedestrians on the left side (indicated by white arrows) are not visible; (b) the pedestrians are correctly detected by the sensor-fused system and marked with red bounding boxes on the screen.
Figure 15. Nighttime testing example 2: (a) due to the low lighting conditions, two pedestrians on the right side (indicated by white arrows) are not visible; (b) the pedestrians are correctly detected by the sensor-fused system and marked with red bounding boxes on the screen.
Figure 16. Real-time inference results during the daytime. The proposed system can also run during the daytime and correctly detects pedestrians.
18 pages, 12761 KiB  
Article
Robot-Assisted Augmented Reality (AR)-Guided Surgical Navigation for Periacetabular Osteotomy
by Haoyan Ding, Wenyuan Sun and Guoyan Zheng
Sensors 2024, 24(14), 4754; https://doi.org/10.3390/s24144754 - 22 Jul 2024
Cited by 1 | Viewed by 843
Abstract
Periacetabular osteotomy (PAO) is an effective approach for the surgical treatment of developmental dysplasia of the hip (DDH). However, due to the complex anatomical structure around the hip joint and the limited field of view (FoV) during the surgery, it is challenging for surgeons to perform a PAO surgery. To solve this challenge, we propose a robot-assisted, augmented reality (AR)-guided surgical navigation system for PAO. The system mainly consists of a robot arm, an optical tracker, and a Microsoft HoloLens 2 headset, which is a state-of-the-art (SOTA) optical see-through (OST) head-mounted display (HMD). For AR guidance, we propose an optical marker-based AR registration method to estimate a transformation from the optical tracker coordinate system (COS) to the virtual space COS such that the virtual models can be superimposed on the corresponding physical counterparts. Furthermore, to guide the osteotomy, the developed system automatically aligns a bone saw with osteotomy planes planned in preoperative images. Then, it provides surgeons with not only virtual constraints to restrict movement of the bone saw but also AR guidance for visual feedback without sight diversion, leading to higher surgical accuracy and improved surgical safety. Comprehensive experiments were conducted to evaluate both the AR registration accuracy and osteotomy accuracy of the developed navigation system. The proposed AR registration method achieved an average mean absolute distance error (mADE) of 1.96 ± 0.43 mm. The robotic system achieved an average center translation error of 0.96 ± 0.23 mm, an average maximum distance of 1.31 ± 0.20 mm, and an average angular deviation of 3.77 ± 0.85°. Experimental results demonstrated both the AR registration accuracy and the osteotomy accuracy of the developed system. Full article
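The registration and accuracy figures quoted above involve estimating a transformation between coordinate systems and scoring it with a mean absolute distance error (mADE). The sketch below shows a generic least-squares rigid transform between paired 3D points and an mADE-style residual; it is an illustrative stand-in under assumptions, not the paper's marker-based registration code, and the point counts and noise level are invented.

```python
# Hedged sketch: rigid transform estimation (Kabsch) between two point sets and
# a mean absolute distance error over the transformed points.
import numpy as np

def rigid_transform(P: np.ndarray, Q: np.ndarray):
    """Least-squares rotation R and translation t such that R @ p + t ~ q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # fix an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

def made(P, Q, R, t):
    return np.mean(np.linalg.norm((R @ P.T).T + t - Q, axis=1))

P = np.random.rand(8, 3) * 100                        # points in one COS [mm]
t_true = np.array([5.0, -3.0, 10.0])
Q = P + t_true + np.random.normal(0, 0.5, P.shape)    # noisy counterparts
R, t = rigid_transform(P, Q)
print(round(made(P, Q, R, t), 3))                     # residual alignment error [mm]
```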
(This article belongs to the Special Issue Augmented Reality-Based Navigation System for Healthcare)
Show Figures

Figure 1. An overview of the proposed robot-assisted AR-guided surgical navigation system for PAO.
Figure 2. A schematic illustration of preoperative planning, where $\mathbf{o}_{ost}^{CT}$, $\mathbf{x}_{ost}^{CT}$, $\mathbf{y}_{ost}^{CT}$, and $\mathbf{z}_{ost}^{CT}$ are calculated based on $\mathbf{a}_{11}^{CT}$, $\mathbf{a}_{12}^{CT}$, $\mathbf{a}_{21}^{CT}$, and $\mathbf{a}_{22}^{CT}$.
Figure 3. A schematic illustration of bone saw calibration: (a) digitizing the four corner points using a trackable pointer; (b) calculating $\mathbf{o}_{saw}^{M}$, $\mathbf{x}_{saw}^{M}$, $\mathbf{y}_{saw}^{M}$, and $\mathbf{z}_{saw}^{M}$ based on $\mathbf{b}_{11}^{M}$, $\mathbf{b}_{12}^{M}$, $\mathbf{b}_{21}^{M}$, and $\mathbf{b}_{22}^{M}$.
Figure 4. The proposed AR registration. (a) Virtual models of the optical marker are loaded in the virtual space, each with a unique pose. (b) The optical marker attached to the robot flange is aligned with each virtual model.
Figure 5. AR guidance during the PAO procedure. (a) The proposed AR navigation system not only visualizes the virtual models but also displays the pose parameters of the bone saw relative to the osteotomy plane. (b) The definitions of the pose parameters.
Figure 6. Experimental setup for the evaluation of AR registration accuracy. In this experiment, we defined eight validation points in the virtual space. After performing AR registration, we used a trackable pointer to digitize the validation points, acquiring their coordinates in the optical tracker COS $O_T$. The mADE of the validation points was calculated as the evaluation metric.
Figure 7. Evaluation of the osteotomy accuracy. (a) Extraction of the upper plane and the lower plane in the postoperative image. (b) A schematic illustration of how the center translation error $d_c$, the maximum distance $d_m$, and the angular deviation $\theta$ are defined.
Figure 8. Visualization of the alignment between the virtual model (yellow) and the pelvis phantom (white) using different methods [18,19,21]. Misalignment is highlighted with red arrows. Compared with the other methods, the proposed method achieved the most accurate AR registration.
Figure 9. Visualization of the osteotomy results, where actual osteotomy planes and planned osteotomy planes are shown in orange and yellow, respectively.
Figure 10. AR guidance during the osteotomy procedure: (a) AR display when the bone saw was outside the osteotomy area, with the pose parameters displayed in red; (b) AR display when the bone saw was on the planned plane and within the osteotomy area, with the pose parameters displayed in green.
19 pages, 3919 KiB  
Article
Integrated Wearable System for Monitoring Skeletal Muscle Force of Lower Extremities
by Heng Luo, Ying Xiong, Mingyue Zhu, Xijun Wei and Xiaoming Tao
Sensors 2024, 24(14), 4753; https://doi.org/10.3390/s24144753 - 22 Jul 2024
Viewed by 11907
Abstract
Continuous monitoring of lower extremity muscles is necessary, as the muscles support many human daily activities, such as maintaining balance, standing, walking, running, and jumping. However, conventional electromyography and physiological cross-sectional area methods inherently encounter obstacles when acquiring precise and real-time data pertaining to human bodies, with a notable lack of consideration for user comfort. Benefitting from the fast development of various fabric-based sensors, this paper addresses these current issues by designing an integrated smart compression stocking system, which includes compression garments, fabric-embedded capacitive pressure sensors, an edge control unit, a user mobile application, and cloud backend. The pipeline architecture design and component selection are discussed in detail to illustrate a comprehensive user-centered STIMES design. Twelve healthy young individuals were recruited for clinical experiments to perform maximum voluntary isometric ankle plantarflexion contractions. All data were simultaneously collected through the integrated smart compression stocking system and a muscle force measurement system (Humac NORM, software version HUMAC2015). The obtained correlation coefficients above 0.92 indicated high linear relationships between the muscle torque and the proposed system readout. Two-way ANOVA analysis further stressed that different ankle angles (p = 0.055) had more important effects on the results than different subjects (p = 0.290). Hence, the integrated smart compression stocking system can be used to monitor the muscle force of the lower extremities in isometric mode. Full article
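The per-exercise analysis summarized above reduces to a linear fit between the sensor readout and the measured torque plus a Pearson correlation. The sketch below shows that calculation on synthetic numbers; the capacitance and torque values are invented stand-ins for the stocking readout and the Humac NORM torque.

```python
# Hedged sketch: linear fit torque ~ alpha * capacitance + beta and its Pearson r.
import numpy as np

capacitance = np.array([12.1, 13.0, 14.2, 15.1, 16.3, 17.0])   # synthetic sensor readout
torque = np.array([20.5, 24.0, 29.1, 33.8, 38.9, 42.2])        # synthetic torque [N*m]

alpha, beta = np.polyfit(capacitance, torque, 1)                # fitting coefficients
r = np.corrcoef(capacitance, torque)[0, 1]                      # correlation coefficient
print(round(alpha, 2), round(beta, 2), round(r, 3))
```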
(This article belongs to the Special Issue Sensing Technologies in Medical Robot)
Show Figures

Figure 1. Pipeline architecture design of the integrated smart compression stocking system, including compression garments, fabric-embedded pressure sensors, an edge control unit, a user mobile application, and a cloud backend.
Figure 2. Performance of the fabric capacitive pressure sensors: (a) sensitivity and hysteresis testing; (b) repeatability testing; (c) washability testing.
Figure 3. Schematic diagram of the edge control unit design.
Figure 4. Cloud backend architecture. Services for multi-user account authorization (register and login) and user pressure time-series data streaming were developed and deployed as serverless microservices on the cloud.
Figure 5. Experimental setup and exercise protocol. The subject's right foot performed MVIC plantarflexion against the footplate, completing two sets of four exercises with the footplate fixed at 0°, 10°, 20°, and 30°.
Figure 6. The clinical experiment procedure time sequence for each subject.
Figure 7. Illustrations of the different phases of an MVIC, consisting of the loading phase, the holding phase, and the relaxing phase: (a) a typical MVIC with measured torque values (NT) and capacitance values (C); (b) a typical gastrocnemius volume and morphologic change.
Figure 8. Fitting coefficients and correlation coefficients for each subject's 8 MVIC exercises, where the X-axis represents the 12 subjects and the Z-axis represents two exercises at each of the four ankle angles of 0°, 10°, 20°, and 30°: (a) the correlation coefficient r for each MVIC exercise; (b) the fitting coefficient α for each MVIC exercise; (c) the fitting coefficient β for each MVIC exercise.
Figure 9. Visual checks of the ANOVA normality assumption. For the correlation coefficients r: (a) histogram of residuals and (b) QQ-plot of standardized residuals; for the fitting coefficients α: (c) histogram of residuals and (d) QQ-plot of standardized residuals; for the fitting coefficients β: (e) histogram of residuals and (f) QQ-plot of standardized residuals.
19 pages, 1868 KiB  
Article
Constrained Flooding Based on Time Series Prediction and Lightweight GBN in BLE Mesh
by Junxiang Li, Mingxia Li and Li Wang
Sensors 2024, 24(14), 4752; https://doi.org/10.3390/s24144752 - 22 Jul 2024
Cited by 1 | Viewed by 479
Abstract
Bluetooth Low Energy Mesh (BLE Mesh) improves Bluetooth flexibility and coverage by introducing Low-Power Nodes (LPNs) and an enhanced networking protocol, and it is a commonly used communication method in sensor networks. In BLE Mesh, LPNs are periodically woken to exchange messages in a stop-and-wait manner, where the tradeoff between energy and efficiency is a hard problem. Related works have reduced the energy consumption of LPNs mainly by changing the bearer layer and improving time synchronization and broadcast channel utilization. These algorithms improve communication efficiency; however, they cause energy loss, especially for the LPNs. In this paper, we propose a constrained flooding algorithm based on time series prediction and lightweight GBN (Go-Back-N). On the one hand, the wake-up cycle of the LPNs is determined by time series prediction of the surrounding load; on the other, LPNs exchange messages through lightweight GBN, which improves the window and ACK mechanisms. Simulation results validate the effectiveness of the Time series Prediction and LightWeight GBN (TP-LW) algorithm in terms of energy consumption and throughput. Compared with the original BLE Mesh algorithm, when fewer packets are transmitted, the throughput is increased by 214.71% and the energy consumption is reduced by 65.14%. Full article
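To illustrate the time-series-prediction side of the approach described above, the sketch below forecasts a synthetic surrounding-load series with a seasonal ARIMA model and derives a wake-up interval from it. The (p,d,q)(P,D,Q,s) orders, the load series, and the interval heuristic are illustrative assumptions, not the paper's TP-LW parameters.

```python
# Hedged sketch: forecasting surrounding load with SARIMA and mapping the
# forecast to an LPN wake-up interval (heuristic mapping is an assumption).
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
hours = np.arange(7 * 24)
load = 10 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

model = SARIMAX(load, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
fitted = model.fit(disp=False)
next_day = fitted.forecast(steps=24)                       # predicted surrounding load
wake_interval_s = np.clip(600 / (1 + next_day), 10, 300)   # sleep less when load is high
print(wake_interval_s[:5].round(1))
```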
Show Figures

Figure 1. Bluetooth mesh node types.
Figure 2. BLE Mesh network friendship mechanism communication example.
Figure 3. Flowchart of the SARIMA time series forecasting model.
Figure 4. Lightweight GBN protocol communication schematic.
Figure 5. Constrained flooding algorithm based on time series prediction and lightweight GBN flow chart.
Figure 6. Load prediction and friend node selection.
Figure 7. Lightweight GBN communications.
Figure 8. Friend node #1.
Figure 9. Friend node #2.
Figure 10. Friend node #6.
Figure 11. Friend node #7.
Figure 12. RMSE.
Figure 13. MAE.
Figure 14. Scanning time.
Figure 15. LPN energy consumption.
Figure 16. LPN throughput.
Figure 17. Probability of friendship break.
22 pages, 49029 KiB  
Article
Autonomous Crack Detection for Mountainous Roads Using UAV Inspection System
by Xinbao Chen, Chenxi Wang, Chang Liu, Xiaodong Zhu, Yaohui Zhang, Tianxiang Luo and Junhao Zhang
Sensors 2024, 24(14), 4751; https://doi.org/10.3390/s24144751 - 22 Jul 2024
Viewed by 675
Abstract
Road cracks significantly affect the serviceability and safety of roadways, especially in mountainous terrain. Traditional inspection methods, such as manual detection, are excessively time-consuming, labor-intensive, and inefficient. Additionally, multi-function detection vehicles equipped with diverse sensors are costly and unsuitable for mountainous roads, primarily because of the challenging terrain conditions characterized by frequent bends in the road. To address these challenges, this study proposes a customized Unmanned Aerial Vehicle (UAV) inspection system designed for automatic crack detection. This system focuses on enhancing autonomous capabilities in mountainous terrains by incorporating embedded algorithms for route planning, autonomous navigation, and automatic crack detection. The slide window method (SWM) is proposed to enhance the autonomous navigation of UAV flights by generating path planning on mountainous roads. This method compensates for GPS/IMU positioning errors, particularly in GPS-denied or GPS-drift scenarios. Moreover, the improved MRC-YOLOv8 algorithm is presented to conduct autonomous crack detection from UAV imagery in an on/offboard module. To validate the performance of our UAV inspection system, we conducted multiple experiments to evaluate its accuracy, robustness, and efficiency. The results of the experiments on automatic navigation demonstrate that our fusion method, in conjunction with SWM, effectively enables real-time route planning in GPS-denied mountainous terrains. The proposed system displays an average localization drift of 2.75% and a per-point local scanning error of 0.33 m over a distance of 1.5 km. Moreover, the experimental results on the road crack detection reveal that the MRC-YOLOv8 algorithm achieves an F1-Score of 87.4% and a mAP of 92.3%, thus surpassing other state-of-the-art models like YOLOv5s, YOLOv8n, and YOLOv9 by 1.2%, 1.3%, and 3.0% in terms of mAP, respectively. Furthermore, the parameters of the MRC-YOLOv8 algorithm indicate a volume reduction of 0.19(×106) compared to the original YOLOv8 model, thus enhancing its lightweight nature. The UAV inspection system proposed in this study serves as a valuable tool and technological guidance for the routine inspection of mountainous roads. Full article
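The detector quality figures quoted above (F1-Score, mAP) come from matching predicted crack boxes against ground truth. The sketch below shows a minimal version of that evaluation: greedy IoU matching at a 0.5 threshold followed by precision, recall, and F1. The boxes and threshold are illustrative; the paper's full mAP computation over classes and confidence sweeps is not reproduced.

```python
# Hedged sketch: IoU-matched precision/recall/F1 for box detections.
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def f1_score(preds, gts, thr=0.5):
    matched, tp = set(), 0
    for p in preds:                              # greedily match each prediction
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i); tp += 1; break
    precision = tp / max(len(preds), 1)
    recall = tp / max(len(gts), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)

preds = [(10, 10, 50, 50), (60, 60, 90, 90)]     # predicted crack boxes (toy)
gts = [(12, 11, 52, 48), (200, 200, 230, 230)]   # ground-truth crack boxes (toy)
print(round(f1_score(preds, gts), 3))
```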
Show Figures

Figure 1. A mountainous road: the mountain road is narrow and winding (an example from a web resource [4]).
Figure 2. Hardware architecture of the UAV inspection system (modified from [9]).
Figure 3. The overall framework of the UAV inspection system for pavement crack detection on mountainous roads.
Figure 4. Cascaded control scheme of the quadrotor drones (modified from [26,27]).
Figure 5. The workflow of the sliding window method for route generation: (a) RGB image; (b) grayscale image; (c) route generation with SWM in the grayscale image; and (d) the workflow of the slide window method (SWM).
Figure 6. A diagram of the UAV data acquisition process.
Figure 7. The basic network and some improvements (marked by red rectangles) of the enhanced YOLOv8 structure.
Figure 8. DWR segmentation model structures (modified from [33]).
Figure 9. The Xiangsi Mountain road considered in this study.
Figure 10. The three strategies of the sliding window method: (a) the first strategy; (b) the second strategy; and (c) the third strategy.
Figure 11. A comparison of the route generation results from the experiment.
Figure 12. A comparison of the identification accuracy of the three algorithms for the seven types of crack damage: (a) mAP@0.5 (%) and (b) F1-Score (%).
Figure 13. Partial visual results of crack detection on concrete pavements in our UAV vertical imagery, based on MRC-YOLOv8.
20 pages, 538 KiB  
Article
Exploring the Real-Time Variability and Complexity of Sitting Patterns in Office Workers with Non-Specific Chronic Spinal Pain and Pain-Free Individuals
by Eduarda Oliosi, Afonso Júlio, Phillip Probst, Luís Silva, João Paulo Vilas-Boas, Ana Rita Pinheiro and Hugo Gamboa
Sensors 2024, 24(14), 4750; https://doi.org/10.3390/s24144750 - 22 Jul 2024
Viewed by 753
Abstract
Chronic spinal pain (CSP) is a prevalent condition, and prolonged sitting at work can contribute to it. Ergonomic factors like this can cause changes in motor variability. Variability analysis is a useful method to measure changes in motor performance over time. When performing the same task multiple times, different performance patterns can be observed. This variability is intrinsic to all biological systems and is noticeable in human movement. This study aims to examine whether changes in movement variability and complexity during real-time office work are influenced by CSP. The hypothesis is that individuals with and without pain will have different responses to office work tasks. Six office workers without pain and ten with CSP participated in this study. Participant’s trunk movements were recorded during work for an entire week. Linear and nonlinear measures of trunk kinematic displacement were used to assess movement variability and complexity. A mixed ANOVA was utilized to compare changes in movement variability and complexity between the two groups. The effects indicate that pain-free participants showed more complex and less predictable trunk movements with a lower degree of structure and variability when compared to the participants suffering from CSP. The differences were particularly noticeable in fine movements. Full article
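As an example of the kind of nonlinear complexity measure referred to above, the sketch below computes the sample entropy of a trunk-displacement series (higher values indicate less predictable, more complex movement). The embedding dimension m and tolerance r follow common defaults and the signal is synthetic; this is not the study's exact measure set.

```python
# Hedged sketch: sample entropy of a 1-D movement signal.
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r_frac: float = 0.2) -> float:
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        np.fill_diagonal(dist, np.inf)           # exclude self-matches
        return np.sum(dist <= r)
    B, A = count(m), count(m + 1)                # matches of length m and m+1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

trunk = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)  # toy displacement
print(round(sample_entropy(trunk), 3))
```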
(This article belongs to the Special Issue Biomedical Signal Processing and Health Monitoring Based on Sensors)
Show Figures

Figure 1. Postural analysis using a smartphone mounted on the chest.
Figure 2. Detection of periods of interest.
Figure 3. Decomposition-based approach for calculating the displacement projections.
Figure 4. Data from two anatomical axes.
Figure 5. Multifractal spectrum. The arrow indicates the difference between the maximum and minimum h(q), which is called the multifractal spectrum width.
58 pages, 12032 KiB  
Review
Artificial Intelligence in Pancreatic Image Analysis: A Review
by Weixuan Liu, Bairui Zhang, Tao Liu, Juntao Jiang and Yong Liu
Sensors 2024, 24(14), 4749; https://doi.org/10.3390/s24144749 - 22 Jul 2024
Viewed by 991
Abstract
Pancreatic cancer is a highly lethal disease with a poor prognosis. Its early diagnosis and accurate treatment mainly rely on medical imaging, so accurate medical image analysis is especially vital for pancreatic cancer patients. However, medical image analysis of pancreatic cancer is facing challenges due to ambiguous symptoms, high misdiagnosis rates, and significant financial costs. Artificial intelligence (AI) offers a promising solution by relieving medical personnel’s workload, improving clinical decision-making, and reducing patient costs. This study focuses on AI applications such as segmentation, classification, object detection, and prognosis prediction across five types of medical imaging: CT, MRI, EUS, PET, and pathological images, as well as integrating these imaging modalities to boost diagnostic accuracy and treatment efficiency. In addition, this study discusses current hot topics and future directions aimed at overcoming the challenges in AI-enabled automated pancreatic cancer diagnosis algorithms. Full article
Figure 1: PRISMA flowchart.
Figure 2: Precursors, risk factors, and subtypes of PC.
Figure 3: MSD sample data pancreas_004.nii.gz: (a) 3D visualization of pancreas and PC, (b) main view, (c) left view, and (d) top view.
Figure 4: LEPset sample data: (a) labeled non-PC, (b) labeled PC, and (c) unlabeled image.
Figure 5: PAIP sample data: (a) a pathological image of PC, (b) nontumor cell nucleus mask, (c) tumor cell nucleus mask (the masks were processed to be visible).
Figure 6: Summary of AI tasks on different medical imaging modalities.
Figure 7: Flowchart of AI application in pancreatic images analysis.
Figure 8: Basic workflow of feature engineering in traditional machine learning based image classification.
Figure 9: TransUNet architecture.
Figure 10: 3D TransUNet architecture.
Figure 11: Summary of AI models' segmentation performance for pancreas and PCs on MSD.
Figure 12: Summary of AI models' segmentation performance for pancreas on BTCV.
20 pages, 27344 KiB  
Article
DeMambaNet: Deformable Convolution and Mamba Integration Network for High-Precision Segmentation of Ambiguously Defined Dental Radicular Boundaries
by Binfeng Zou, Xingru Huang, Yitao Jiang, Kai Jin and Yaoqi Sun
Sensors 2024, 24(14), 4748; https://doi.org/10.3390/s24144748 - 22 Jul 2024
Viewed by 761
Abstract
The incorporation of automatic segmentation methodologies into dental X-ray images refined the paradigms of clinical diagnostics and therapeutic planning by facilitating meticulous, pixel-level articulation of both dental structures and proximate tissues. This underpins the pillars of early pathological detection and meticulous disease progression monitoring. Nonetheless, conventional segmentation frameworks often encounter significant setbacks attributable to the intrinsic limitations of X-ray imaging, including compromised image fidelity, obscured delineation of structural boundaries, and the intricate anatomical structures of dental constituents such as pulp, enamel, and dentin. To surmount these impediments, we propose the Deformable Convolution and Mamba Integration Network, an innovative 2D dental X-ray image segmentation architecture, which amalgamates a Coalescent Structural Deformable Encoder, a Cognitively-Optimized Semantic Enhance Module, and a Hierarchical Convergence Decoder. Collectively, these components bolster the management of multi-scale global features, fortify the stability of feature representation, and refine the amalgamation of feature vectors. A comparative assessment against 14 baselines underscores its efficacy, registering a 0.95% enhancement in the Dice Coefficient and a diminution of the 95th percentile Hausdorff Distance to 7.494. Full article
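The two headline metrics in this abstract, the Dice Coefficient and the 95th-percentile Hausdorff Distance, are standard segmentation measures. As a reference point, a minimal Dice computation on binary masks is sketched below; it is a generic illustration, not the paper's evaluation code.

```python
# Minimal, illustrative Dice coefficient on binary masks (not the authors'
# evaluation code); pred and gt are numpy arrays containing 0/1 labels.
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    # Dice = 2|A ∩ B| / (|A| + |B|); eps guards against two empty masks.
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy example: two partially overlapping square masks.
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt   = np.zeros((64, 64), dtype=np.uint8); gt[15:45, 15:45] = 1
print(f"Dice = {dice_coefficient(pred, gt):.3f}")
```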
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)
Figure 1: Schematic representation of the Deformable Convolution and Mamba Integration Network (DeMambaNet), integrating a Coalescent Structural Deformable Encoder, a Cognitively-Optimized Semantic Enhance Module, and a Hierarchical Convergence Decoder.
Figure 2: Schematic representation of the CSDE, which integrates a State Space Pathway, based on SSM, in the upper section, and an Adaptive Deformable Pathway, based on DCN, in the lower section.
Figure 3: The schematic depiction of each hierarchical stage, composed of DCNv3, LN, and MLP, utilizes DCNv3 as its core operator for efficient feature extraction.
Figure 4: The schematic depiction of the TSMamba block involves GSC, ToM, LN, and MLP, collectively enhancing input feature processing and representation.
Figure 5: The schematic depiction of the SEM, which combines encoder outputs through concatenation, applies Conv, BN, and ReLU and then enhances features with the MLP and LVC. The MLP captures global dependencies, while the LVC focuses on local details.
Figure 6: The schematic illustrates the HCD, incorporating a multi-layered decoder structure. Each tier combines convolutional and deconvolutional layers for feature enhancement and upsampling, and it is equipped with the TAFI designed specifically for feature fusion.
Figure 7: Schematic representation of the TAFI, which combines features from the encoder's two pathways and uses local and global attention modules to emphasize important information.
Figure 8: Box plot showcasing the evaluation metrics index from training results. On the x-axis, the models are labeled as follows: (a) ENet; (b) ICNet; (c) LEDNet; (d) OCNet; (e) PSPNet; (f) SegNet; (g) VM-UNet; (h) Attention U-Net; (i) R2U-Net; (j) UNet; (k) UNet++; (l) TransUNet; (m) Dense-UNet; (n) Mamba-UNet; (o) DeMambaNet (ours).
Figure 9: A few segmentation results of comparison between our proposed method and the existing state-of-the-art models. The segmentation result of the teeth is shown in green. The red dashed line represents the ground truth.
Figure 10: Box plot showcasing the evaluation metrics index of ablation experiments. On the x-axis, the models are labeled as follows: (a) w/o SSP; (b) w/o ADP; (c) w/o TAFI; (d) w/o SEM; (e) DeMambaNet (ours).
19 pages, 5019 KiB  
Article
Dense Pedestrian Detection Based on GR-YOLO
by Nianfeng Li, Xinlu Bai, Xiangfeng Shen, Peizeng Xin, Jia Tian, Tengfei Chai and Zhenyan Wang
Sensors 2024, 24(14), 4747; https://doi.org/10.3390/s24144747 - 22 Jul 2024
Viewed by 744
Abstract
In large public places such as railway stations and airports, dense pedestrian detection is important for safety and security. Deep learning methods provide relatively effective solutions but still face problems such as difficult feature extraction, multi-scale variation across images, and high missed-detection rates, which pose great challenges to research in this field. In this paper, we propose GR-YOLO, an improved dense pedestrian detection algorithm based on YOLOv8. GR-YOLO introduces the RepC3 module to optimize the backbone network, which enhances feature extraction; it adopts an aggregation–distribution mechanism to reconstruct the YOLOv8 neck structure, fusing multi-level information for a more efficient exchange of information and a stronger detection ability. Meanwhile, the GIoU loss is used to help GR-YOLO converge better, improve the localization accuracy of targets, and reduce missed detections. Experiments show that GR-YOLO improves detection performance over YOLOv8, with gains in mean detection accuracy of 3.1% on the WiderPerson dataset, 7.2% on the CrowdHuman dataset, and 11.7% on the People Detection Images dataset. The proposed GR-YOLO algorithm is therefore suitable for dense, multi-scale, and scene-variable pedestrian detection, and the improvements also provide a new idea for solving dense pedestrian detection in real scenes. Full article
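For reference, GIoU (generalized intersection over union) extends IoU with a penalty based on the smallest box enclosing both boxes. The sketch below is a generic, illustrative implementation for axis-aligned boxes in (x1, y1, x2, y2) format; it is not the GR-YOLO training code.

```python
# Illustrative GIoU loss for axis-aligned boxes in (x1, y1, x2, y2) format;
# a generic sketch, not the GR-YOLO training code.
def giou_loss(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Areas of each box.
    area_a = max(0.0, ax2 - ax1) * max(0.0, ay2 - ay1)
    area_b = max(0.0, bx2 - bx1) * max(0.0, by2 - by1)

    # Intersection and union.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0

    # Smallest box enclosing both boxes.
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    enclose = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (enclose - union) / enclose if enclose > 0 else iou
    return 1.0 - giou  # 0 for perfect overlap, approaching 2 for distant disjoint boxes

# Example: two partially overlapping boxes.
print(giou_loss((0, 0, 10, 10), (5, 5, 15, 15)))
```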
(This article belongs to the Section Sensing and Imaging)
Figure 1: YOLOv8 model structure diagram.
Figure 2: WiderPerson data set: (a–d) large crowds of people in daily living scenes.
Figure 3: CrowdHuman data set: (a–d) dense pedestrian detection baseline.
Figure 4: People Detection Image data set: (a–d) crowd detection in various scenarios.
Figure 5: RepC3 model structure diagram.
Figure 6: C2f model structure diagram.
Figure 7: Aggregation–distribution mechanism in the YOLOv8 application structure diagram.
Figure 8: Improved GR-YOLO model structure diagram.
Figure 9: The Sim_4 module structure diagram of the improved model.
Figure 10: The Sim_3 module structure diagram of the improved model.
Figure 11: Evaluation index.
Figure 12: Run result.
19 pages, 11989 KiB  
Article
Structural Optimization and Performance of a Low-Frequency Double-Shell Type-IV Flexural Hydroacoustic Transducer
by Jinsong Chen, Chengxin Gong, Guilin Yue, Lilong Zhang, Xiaoli Wang, Zhenhao Huo and Ziyu Dong
Sensors 2024, 24(14), 4746; https://doi.org/10.3390/s24144746 - 22 Jul 2024
Viewed by 406
Abstract
To amplify the displacement of the radiation shell, a double-shell type-IV curved hydroacoustic transducer was proposed. Through Ansys finite element simulation, the vibration modes of the transducer in different stages and the harmonic response characteristics in air and water were studied, and broadband emission of the hydroacoustic transducer was achieved. By optimizing the size of each component, the resonant frequency of the transducer was 740 Hz, the maximum conductance was 0.66 mS, and the maximum transmitting voltage response was 130 dB. According to the optimized parameters, a prototype of the transducer was manufactured, and a physical test was conducted in an anechoic pool. The obtained resonant frequency was 750 Hz, the maximum conductance was 0.44 mS, the maximum transmitting voltage response was 129.25 dB, and the maximum linear dimension was 250 mm; these values match the simulation results for the virtual prototype and meet the expected requirements. Full article
(This article belongs to the Section Physical Sensors)
Figure 1: Transducer model. (a) Double-shell Type IV bending tension transducer; (b) ⅛-scale model; (c) schematic diagram of structural parameters.
Figure 2: Modal analysis of ⅛-scale transducer model. (a) First-order mode; (b) second-order mode; (c) third-order mode; (d) fourth-order mode.
Figure 3: Transmitting voltage response curve of transducers.
Figure 4: The admittance G and B component curve of the transducer in water.
Figure 5: Influences of structural parameters of inner shell in water on the maximum TVR. (a) Inner housing pad height; (b) inner housing height; (c) inner shell thickness; (d) inner shell short axis/long axis ratio.
Figure 6: Influences of structural parameters of the inner shell in water on the conductivity. (a) Inner housing pad height; (b) inner housing height; (c) inner shell thickness; (d) inner shell short axis/long axis ratio.
Figure 7: Influences of structural parameters of underwater shell on the maximum TVR. (a) Height of outer housing pad; (b) height of outer housing; (c) outer shell thickness; (d) outer shell short axis/long axis ratio.
Figure 8: Influences of structural parameters of underwater shell on conductivity value. (a) Height of outer housing pad; (b) height of outer housing; (c) outer shell thickness; (d) outer shell short axis/long axis ratio.
Figure 9: Influences of structural parameters of piezoelectric ceramics in water on the maximum TVR. (a) Piezoelectric ceramic sheet height; (b) length of piezoelectric ceramic sheet; (c) thickness of piezoelectric ceramic sheet.
Figure 10: Influences of structural parameters of piezoelectric ceramic sheets in water on the conductivity value. (a) Piezoelectric ceramic sheet height; (b) length of piezoelectric ceramic sheet; (c) thickness of piezoelectric ceramic sheet.
Figure 11: Influences of structural parameters of underwater transducers on acoustic performance. (a) Change in maximum emission voltage response; (b) change in conductance.
Figure 12: Optimized acoustic performance. (a) The optimized admittance G and B components in the air; (b) the optimized admittance value g and b components in water; (c) the optimized emission voltage response.
Figure 13: Displacement and stress cloud diagram of the inner shell. (a) Total displacement cloud map; (b) long-axis displacement nephogram; (c) total stress nephogram.
Figure 14: Displacement and stress cloud diagram of the long-axis end of the inner shell. (a) Displacement cloud image in the long-axis direction; (b) stress cloud image in the long-axis direction.
Figure 15: Hydrostatic pressure cloud diagram of shell body.
Figure 16: Three-dimensional molds. (a) Inner shell; (b) outer shell; (c) upper housing cover plate; (d) lower housing cover plate.
Figure 17: Three-dimensional slice models. (a) Inner shell; (b) outer shell; (c) upper housing cover plate; (d) lower housing cover plate.
Figure 18: Three-dimensionally printed molds. (a) Inner shell; (b) outer shell; (c) upper housing cover plate; (d) lower housing cover plate.
Figure 19: Sand mold after demolding. (a) Upper housing cover plate; (b) inner housing.
Figure 20: General assembly drawing of dual-shell class-IV flextensional transducers.
Figure 21: Prototype of dual-shell class-IV flextensional transducers.
Figure 22: Testing system.
Figure 23: Admittance values of G and B components in water.
Figure 24: Test and simulation values of transmitting voltage response curve.
23 pages, 27007 KiB  
Article
An Intelligent Hand-Assisted Diagnosis System Based on Information Fusion
by Haonan Li and Yitong Zhou
Sensors 2024, 24(14), 4745; https://doi.org/10.3390/s24144745 - 22 Jul 2024
Viewed by 532
Abstract
This research proposes an innovative, intelligent hand-assisted diagnostic system aiming to achieve a comprehensive assessment of hand function through information fusion technology. Based on the single-vision algorithm we designed, the system can perceive and analyze the morphology and motion posture of the patient’s hands in real time. This visual perception can provide an objective data foundation and capture the continuous changes in the patient’s hand movement, thereby providing more detailed information for the assessment and providing a scientific basis for subsequent treatment plans. By introducing medical knowledge graph technology, the system integrates and analyzes medical knowledge information and combines it with a voice question-answering system, allowing patients to communicate and obtain information effectively even with limited hand function. Voice question-answering, as a subjective and convenient interaction method, greatly improves the interactivity and communication efficiency between patients and the system. In conclusion, this system holds immense potential as a highly efficient and accurate hand-assisted assessment tool, delivering enhanced diagnostic services and rehabilitation support for patients. Full article
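As an illustration of the kind of hand-kinematics computation such a vision pipeline relies on (compare Figure 5), the sketch below derives a joint flexion angle from three 3D landmark positions. The landmark names and coordinates are hypothetical; this is not the authors' code.

```python
# Illustrative only: computing a finger joint angle from three 3D landmarks
# (e.g., MCP-PIP-DIP); the landmark values below are made up, not from the paper.
import numpy as np

def joint_angle_deg(p_prev, p_joint, p_next):
    """Angle at p_joint formed by segments (p_joint -> p_prev) and (p_joint -> p_next)."""
    v1 = np.asarray(p_prev, float) - np.asarray(p_joint, float)
    v2 = np.asarray(p_next, float) - np.asarray(p_joint, float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical landmark coordinates (arbitrary camera units).
mcp, pip_, dip = (0.0, 0.0, 0.0), (0.0, 3.0, 0.5), (0.0, 5.0, 2.0)
print(f"PIP joint angle: {joint_angle_deg(mcp, pip_, dip):.1f} degrees")
```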
(This article belongs to the Special Issue Artificial Intelligence for Medical Sensing)
Figure 1: Experimental system layout.
Figure 2: The architecture of a knowledge base question-answering system based on information retrieval.
Figure 3: Example of UI interface for Q&A robot.
Figure 4: Schematic diagram of the bone and joint distribution in the right human hand.
Figure 5: Diagram of the three-dimensional finger coordinate system and joint angles. (a) Example of a three-dimensional coordinate system of the hand. (b) Two-dimensional example of joint angles.
Figure 6: Comparative experiment shooting angles. (a) Main experiment viewpoint example. (b) Viewpoint 1. (c) Viewpoint 2.
Figure 7: Experimental setup for single-frame image measurement. (a) Example of single-frame image measurement. (b) Dorsal hand measurement example.
Figure 8: Experimental setup for continuous-frame hand motion measurement. (a) Hand clenching example. (b) Hand relaxation example.
Figure 9: Boxplot of continuous-frame hand activity range data.
Figure 10: Boxplot of hand joint functional activity score.
Figure 11: Presentation of results from experiment on reliability of question answering. (a) Etiology and symptoms. (b) Preventive measures. (c) Dietary recommendations. (d) Complications. (e) Treatment methods.
13 pages, 2083 KiB  
Article
The Overlay, a New Solution for Volume Variations in the Residual Limb for Individuals with a Transtibial Amputation
by Pierre Badaire, Maxime T. Robert and Katia Turcot
Sensors 2024, 24(14), 4744; https://doi.org/10.3390/s24144744 - 22 Jul 2024
Viewed by 1426
Abstract
Background: The company Ethnocare has developed the Overlay, a new pneumatic solution for managing volumetric variations (VVs) of the residual limb (RL) in transtibial amputees (TTAs), which improves socket fitting. However, the impact of the Overlay during functional tasks and on the comfort and pain felt in the RL is unknown. Methods: 8 TTAs participated in two evaluations, separated by two weeks. We measured compensatory strategies (CS) using spatio-temporal parameters and three-dimensional lower limb kinematics and kinetics during gait and sit-to-stand (STS) tasks. During each visit, the participant carried out our protocol while wearing the Overlay and prosthetic folds (PFs), the most common solution to VV. Between each task, comfort and pain felt were assessed using visual analog scales. Results: While walking, the cadence with the Overlay was 105 steps/min, while it was 101 steps/min with PFs (p = 0.021). During 35% and 55% of the STS cycle, less hip flexion was observed while wearing the Overlay compared to PFs (p = 0.004). We found asymmetry coefficients of 13.9% with the Overlay and 17% with PFs during the STS (p = 0.016) task. Pain (p = 0.031), comfort (p = 0.017), and satisfaction (p = 0.041) were better with the Overlay during the second visit. Conclusion: The Overlay’s impact is similar to PFs’ but provides less pain and better comfort. Full article
(This article belongs to the Special Issue Advanced Wearable Sensors for Medical Applications)
Figure 1: On the left, the Overlay; on the right, the Overlay worn.
Figure 2: (A) Sit-to-stand phases (Miramand et al., 2022 [5]). (B) Gait cycle phases (Perry et al., 1992 [23]).
Figure 3: Kinematics in the sagittal plane for (A) the hip on the amputated side during gait, (B) the hip on the amputated side during the STS, (C) the hip on the sound side during the STS. (Left plot) Kinematics under each condition. (Right plot) SPM analysis, with the red dotted line as the significance threshold.
Figure 4: (Left plot) Vertical ground reaction force on both sides during the STS. (Right plot) SPM analysis.
Figure 5: Pain and comfort evaluation for each task and during each visit. STS = sit to stand; 6MWT = six minute walking test; LS = long slope (6° incline); G = gravel; STEP = walking over a step; SS = short slope (13° incline).
18 pages, 6596 KiB  
Article
A Miniaturized Dual-Band Circularly Polarized Implantable Antenna for Use in Hemodialysis
by Zhiwei Song, Yuchao Wang, Youwei Shi and Xianren Zheng
Sensors 2024, 24(14), 4743; https://doi.org/10.3390/s24144743 - 22 Jul 2024
Viewed by 544
Abstract
Hemodialysis is achieved by implanting a smart arteriovenous graft (AVG) to build a vascular pathway, but reliability and stability in data transmission cannot be guaranteed. To address this issue, a miniaturized dual-band circularly polarized implantable antenna operating at 1.4 GHz (for energy transmission) and 2.45 GHz (for wireless telemetry), implanted in a wireless arteriovenous graft monitoring device (WAGMD), has been designed. The antenna design incorporates a rectangular serpentine structure on the radiation surface to reduce its volume to 9.144 mm3. Furthermore, matching rectangular slots on the radiation surface and the ground plane enhance the antenna’s circular polarization performance. The simulated effective 3 dB axial ratio (AR) bandwidths are 11.43% (1.4 GHz) and 12.65% (2.45 GHz). The simulated peak gains of the antenna are −19.55 dBi and −22.85 dBi at 1.4 GHz and 2.45 GHz, respectively. The designed antenna is implanted in a WAGMD both in the simulation and the experiment. The performance of the system is simulated in homogeneous human tissue models of skin, fat, and muscle layers, as well as a realistic adult male forearm model. The measurement results in a minced pork environment align closely with the simulation results. Full article
(This article belongs to the Section Biomedical Sensors)
Figure 1: Schematic diagram of arteriovenous graft and systemic implant.
Figure 2: Antenna geometry. (a) Radiation surface. (b) Ground plane. (c) Side view. (d) Isometric view.
Figure 3: Schematic diagram of WAGMD system structure.
Figure 4: Antenna simulation environments.
Figure 5: Comparison of simulation results for the antenna's S11 under different implantation environments.
Figure 6: (a) Evolution of antenna structure. (b) Simulated S11. (c) AR.
Figure 7: The simulated S11 and AR of the proposed antenna when the substrate material is changed.
Figure 8: Parametric study of the proposed antenna. (a) Grounding slot location. (b) Grounding slot length.
Figure 9: The simulated S11 and AR of the proposed antenna when the position of the feed point is changed.
Figure 10: Radial surface rectangular slot length analysis. (a) W2 parametric analysis. (b) W3 parametric analysis.
Figure 11: Surface current distribution on radiation patch: (a) 1.4 GHz, (b) 2.45 GHz.
Figure 12: Mean SAR distribution for implanted antennas in human forearms.
Figure 13: (a) Antenna physical and soldering details. (b) Reflection coefficient and far-field gain measurement environment.
Figure 14: Comparison of the simulated and measured reflection coefficients and circularly polarized axial ratios.
Figure 15: Comparison of radiation patterns between simulation and measurement (in dB). (a) E-plane. (b) H-plane.
Figure 16: Antenna three-dimensional radiation direction map: (a) 1.4 GHz, (b) 2.45 GHz.
Figure 17: Calculating LM as a function of distance at 1 Mb/s and 2 Mb/s data rates.
19 pages, 4380 KiB  
Article
Optimization of Distributed Energy Resources Operation in Green Buildings Environment
by Safdar Ali, Khizar Hayat, Ibrar Hussain, Ahmad Khan and Dohyeun Kim
Sensors 2024, 24(14), 4742; https://doi.org/10.3390/s24144742 - 22 Jul 2024
Viewed by 554
Abstract
Without a well-defined energy management plan, achieving meaningful improvements in human lifestyle becomes challenging. Adequate energy resources are essential for development, but they are both limited and costly. In the literature, several solutions have been proposed for energy management, but they either minimize energy consumption or improve the occupant's comfort index. The energy management problem is a multi-objective problem in which the user wants to reduce energy consumption while keeping the occupant's comfort index intact. To address this multi-objective problem, this paper proposes an energy control system for a green environment called PMC (Power Management and Control). The system is based on hybrid energy optimization, energy prediction, and multi-preprocessing. A Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are combined into a fusion methodology to improve the occupant comfort index (OCI) and decrease energy utilization. The proposed framework gives a better OCI than its counterparts, the Ant Bee Colony Knowledge Base framework (ABCKB), the GA-based prediction framework (GAP), the Hybrid Prediction with Single Optimization framework (SOHP), and the PSO-based power consumption framework. Compared with the existing AEO framework, the PMC gives practically the same OCI but consumes less energy. The PMC framework also achieved the ideal OCI (i.e., 1) compared with the existing FA–GA model (i.e., 0.98). The PMC model consumed less energy than existing models such as the ABCKB, GAP, PSO, and AEO. The PMC model consumed slightly more energy than the SOHP but provided a better OCI. The comparative outcomes show the capability of the PMC framework to reduce energy utilization and improve the OCI. Unlike the other existing methodologies, except for the AEO framework, the PMC technique is additionally verified through a simulation that controls the indoor environment using actuators such as a fan, light, AC, and boiler. Full article
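To make the GA+PSO fusion idea concrete, the sketch below runs a toy hybrid optimizer that combines PSO velocity updates with GA-style mutation over three indoor setpoints, minimizing a weighted sum of comfort error and an energy proxy. The objective, bounds, weights, and setpoints are invented for the example and are not the PMC framework's actual formulation.

```python
# Toy GA+PSO hybrid (illustrative only): minimizes a weighted sum of comfort
# error and an energy proxy for three setpoints (temperature, illumination,
# air quality). All targets, scales, and weights are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([22.0, 500.0, 800.0])   # hypothetical comfort setpoints
SCALE  = np.array([5.0, 200.0, 300.0])    # per-variable normalization

def cost(x):
    comfort_error = np.sum(((x - TARGET) / SCALE) ** 2)  # 0 = ideal comfort
    energy = np.sum(np.abs(x) / SCALE)                   # crude energy proxy
    return 0.7 * comfort_error + 0.3 * energy

def hybrid_ga_pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, mut_rate=0.1):
    dim = len(TARGET)
    pos = rng.uniform(0, 2, (n_particles, dim)) * TARGET  # random initial population
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # PSO step
        pos = pos + vel
        # GA-style mutation: randomly perturb a fraction of the population.
        mask = rng.random((n_particles, dim)) < mut_rate
        pos = np.where(mask, pos + rng.normal(0, 0.05, pos.shape) * SCALE, pos)

        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, cost(gbest)

best, best_cost = hybrid_ga_pso()
print("best setpoints:", np.round(best, 1), "cost:", round(best_cost, 3))
```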
(This article belongs to the Special Issue Smart Sensors, Smart Grid and Energy Management)
Figure 1: Hybrid energy optimization model.
Figure 2: Flow chart of the proposed methodology.
Figure 3: Control messages for air-con based on the proposed framework.
Figure 4: Control messages for boiler based on the proposed framework.
Figure 5: Control messages for light based on the proposed framework.
Figure 6: Control messages for fan based on the proposed framework.
Figure 7: Proposed PMC framework predicted consumed power for temperature, illumination, air quality, and total power consumption.
Figure 8: Comfort index of the proposed PMC frameworks.
Figure 9: Proposed PMC framework based on predicted power consumption vs. ABCKB vs. GAP vs. SOHP vs. PSO vs. AEO framework for temperature.
Figure 10: Proposed PMC framework based on predicted power consumption vs. ABCKB vs. GAP vs. SOHP vs. PSO vs. AEO model for illumination.
Figure 11: Proposed PMC framework based on predicted power consumption vs. ABCKB vs. GAP vs. SOHP vs. PSO vs. AEO framework for air quality.
Figure 12: Proposed PMC framework based on total predicted power consumption vs. ABCKB vs. GAP vs. SOHP vs. PSO vs. AEO framework.
Figure 13: Comfort value comparisons of frameworks presented in [3,4,5,8,9,10] vs. proposed PMC framework.
23 pages, 6075 KiB  
Article
Research on Isomorphic Task Transfer Algorithm Based on Knowledge Distillation in Multi-Agent Collaborative Systems
by Chunxue Bo, Shuzhi Liu, Yuyue Liu, Zhishuo Guo, Jinghan Wang and Jinghai Xu
Sensors 2024, 24(14), 4741; https://doi.org/10.3390/s24144741 - 22 Jul 2024
Viewed by 550
Abstract
In response to the increasing number of agents and changing task scenarios in multi-agent collaborative systems, existing collaborative strategies struggle to effectively adapt to new task scenarios. To address this challenge, this paper proposes a knowledge distillation method combined with a domain separation network (DSN-KD). This method leverages the well-performing policy network from a source task as the teacher model, utilizes a domain-separated neural network structure to correct the teacher model’s outputs as supervision, and guides the learning of agents in new tasks. The proposed method does not require the pre-design or training of complex state-action mappings, thereby reducing the cost of transfer. Experimental results in scenarios such as UAV surveillance and UAV cooperative target occupation, robot cooperative box pushing, UAV cooperative target strike, and multi-agent cooperative resource recovery in a particle simulation environment demonstrate that the DSN-KD transfer method effectively enhances the learning speed of new task policies and improves the proximity of the policy model to the theoretically optimal policy in practical tasks. Full article
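For context, the core of any policy-network knowledge distillation is a loss that pulls the student's action distribution toward the teacher's (here, domain-separation-corrected) outputs. The PyTorch sketch below shows only that generic soft-target loss; the domain separation correction itself is specific to the paper and is not reproduced here.

```python
# Generic soft-target distillation loss (a sketch, not the DSN-KD implementation):
# the student policy is trained to match (possibly corrected) teacher action logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student action distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean reduction plus T^2 scaling is the standard Hinton-style formulation.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

# Toy example: a batch of 4 observations with 5 discrete actions.
student_logits = torch.randn(4, 5, requires_grad=True)
teacher_logits = torch.randn(4, 5)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(float(loss))
```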
(This article belongs to the Section Intelligent Sensors)
Figure 1: The framework of DSN-KD.
Figure 2: Agent policy network.
Figure 3: Pairing of pre-trained policy networks with target intelligent agents.
Figure 4: Training process of target intelligent agents.
Figure 5: The architecture of domain separation network.
Figure 6: UAV surveillance environment.
Figure 7: UAV cooperative target point occupation environment.
Figure 8: Cooperative robot pushing environment.
Figure 9: UAV cooperative target point strike environment.
Figure 10: Multi-agent cooperative material recovery scene.
Figure 11: Illustration of transfer task 2.
Figure 12: Illustration of transfer task 3.
Figure 13: Illustration of transfer task 4.
Figure 14: Illustration of transfer task 5.
Figure 15: Illustration of transfer task 3. Performance comparison of the model under different values of α.
Figure 16: Comparative experiment for transfer task 1.
Figure 17: Comparative experiment for transfer task 2.
Figure 18: Comparative experiment for transfer task 3.
Figure 19: Comparative experiment for transfer task 4.
Figure 20: Comparative experiment for transfer task 5.
1 pages, 134 KiB  
Retraction
RETRACTED: Syah et al. A New Hybrid Algorithm for Multi-Objective Reactive Power Planning via FACTS Devices and Renewable Wind Resources. Sensors 2021, 21, 5246
by Rahmad Syah, Peyman Khorshidian Mianaei, Marischa Elveny, Naeim Ahmadian, Dadan Ramdan, Reza Habibifar and Afshin Davarpanah
Sensors 2024, 24(14), 4740; https://doi.org/10.3390/s24144740 - 22 Jul 2024
Viewed by 425
Abstract
The Sensors Editorial Office retracts the article, “A New Hybrid Algorithm for Multi-Objective Reactive Power Planning via FACTS Devices and Renewable Wind Resources” [...] Full article
(This article belongs to the Section Internet of Things)
18 pages, 11573 KiB  
Article
Design and Implementation of a Two-Wheeled Vehicle Safe Driving Evaluation System
by Dongbeom Kim, Hyemin Kim, Suyun Lee, Qyoung Lee, Minwoo Lee, Jooyoung Lee and Chulmin Jun
Sensors 2024, 24(14), 4739; https://doi.org/10.3390/s24144739 - 22 Jul 2024
Viewed by 615
Abstract
The delivery market in the Republic of Korea has experienced significant growth, leading to a surge in motorcycle-related accidents. However, there is a lack of comprehensive data collection systems for motorcycle safety management. This study focused on designing and implementing a foundational data collection system to monitor and evaluate motorcycle driving behavior. To achieve this, eleven risky behaviors were defined and identified using image-based, GIS-based, and inertial-sensor-based methods. A motorcycle-mounted sensing device was installed to assess driving, with drivers reviewing their patterns through an app and all data monitored via a web interface. The system was applied and tested using a testbed. This study is significant as it successfully conducted foundational data collection for motorcycle safety management and designed and implemented a system for monitoring and evaluation. Full article
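As a rough illustration of the inertial-sensor-based side of such a system, the sketch below flags harsh acceleration and braking events by thresholding longitudinal acceleration from an IMU stream. The threshold, sampling rate, and merge window are invented for this example, not parameters taken from the paper.

```python
# Illustrative only: flagging harsh acceleration/braking from a longitudinal
# accelerometer stream; the 3.0 m/s^2 threshold and 10 Hz rate are assumptions
# for this example, not parameters from the paper.
import numpy as np

def detect_harsh_events(accel_long, fs=10.0, threshold=3.0, min_gap_s=1.0):
    """Return start times (s) where |longitudinal accel| exceeds the threshold,
    merging detections that fall closer together than min_gap_s."""
    accel_long = np.asarray(accel_long, dtype=float)
    hits = np.flatnonzero(np.abs(accel_long) > threshold)
    events, last_t = [], -np.inf
    for idx in hits:
        t = idx / fs
        if t - last_t >= min_gap_s:
            events.append(t)
        last_t = t
    return events

# Synthetic 60 s ride with two injected harsh maneuvers.
fs = 10.0
accel = 0.3 * np.random.randn(int(60 * fs))
accel[120:123] -= 5.0   # hard braking around t = 12 s
accel[450:452] += 4.0   # hard acceleration around t = 45 s
print(detect_harsh_events(accel, fs=fs))
```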
Figure 1: A schematic diagram of the two-wheeled vehicle monitoring system.
Figure 2: Chart of two-wheeled vehicle accidents and violations of laws and regulations. (a) The number of motorcycle accidents by violation of laws and regulations in the metropolitan area of Korea (Seoul, Incheon, Gyeonggi) between 2017 and 2021. (b) The status of crackdowns by motorcycle violations of laws and regulations between 2017 and 2021. (c) The result of the second fact-finding survey of two-wheeled vehicle traffic laws on 16 intersections in Seoul from 8 to 9 September 2021.
Figure 3: Flowchart to build an image-based reference.
Figure 4: Flowchart to build a reference based on GIS/GNSS.
Figure 5: Flowchart for the building of a reference for an inertial sensor. The left section depicts a web-based survey incorporating driving videos from the Carla simulator. Respondents were required to evaluate the degree of aggressiveness in the driving. Subsequently, as shown in the right section, a database of trajectory information from the Carla simulator was recorded.
Figure 6: Attachment-type aggressive driving sensing device. (a) is the front camera, (b) is the camera for helmet detection, (c) is the inertial sensor, (d) is the GPS/GNSS sensor, and (e) is the RTK base; blur processing was performed for security.
Figure 7: Image-based two-wheeled vehicle dangerous event detection using YOLO v5.
Figure 8: GIS-based aggressive driving detection.
Figure 9: Sensor-based aggressive driving detection.
Figure 10: Screenshot of the web interface of the two-wheeled vehicle rider monitoring system. The interface shows the trajectories of real two-wheeled vehicle riders, the risk item violation points, and other information.
Figure 11: Screenshot of the app interface of the two-wheeled vehicle rider evaluation system.
Figure 12: Screenshot of the web interface of the two-wheeled vehicle rider monitoring system. The interface visualizes the location where dangerous driving is detected and presents statistics.
23 pages, 15975 KiB  
Article
Integrating the Capsule-like Smart Aggregate-Based EMI Technique with Deep Learning for Stress Assessment in Concrete
by Quoc-Bao Ta, Quang-Quang Pham, Ngoc-Lan Pham and Jeong-Tae Kim
Sensors 2024, 24(14), 4738; https://doi.org/10.3390/s24144738 - 21 Jul 2024
Viewed by 743
Abstract
This study presents a concrete stress monitoring method utilizing 1D CNN deep learning of raw electromechanical impedance (EMI) signals measured with a capsule-like smart aggregate (CSA) sensor. Firstly, the CSA-based EMI measurement technique is presented by depicting a prototype of the CSA sensor and a 2 degrees of freedom (2 DOFs) EMI model for the CSA sensor embedded in a concrete cylinder. Secondly, the 1D CNN deep regression model is designed to adapt raw EMI responses from the CSA sensor for estimating concrete stresses. Thirdly, a CSA-embedded cylindrical concrete structure is experimented with to acquire EMI responses under various compressive loading levels. Finally, the feasibility and robustness of the 1D CNN model are evaluated for noise-contaminated EMI data and untrained stress EMI cases. Full article
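To make the "1D CNN deep regression" idea concrete, the sketch below is a minimal PyTorch model that maps a raw 1D EMI signature to a single stress value. The layer sizes, signal length, and training data are placeholders, not the architecture or data reported in the paper.

```python
# Minimal 1D CNN regression sketch (placeholder layer sizes, not the paper's
# architecture): maps a raw EMI signature of length 500 to one stress value.
import torch
import torch.nn as nn

class EmiStressRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global pooling -> fixed-size feature
        )
        self.head = nn.Linear(64, 1)          # single regressed stress value

    def forward(self, x):                     # x: (batch, 1, signal_len)
        z = self.features(x).squeeze(-1)      # (batch, 64)
        return self.head(z).squeeze(-1)       # (batch,)

# One training step on random stand-in data (real EMI signals would replace this).
model = EmiStressRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
signals = torch.randn(8, 1, 500)              # batch of 8 raw EMI signatures
stresses = torch.rand(8) * 10.0               # stand-in stress labels (MPa)
loss = nn.functional.mse_loss(model(signals), stresses)
loss.backward()
optimizer.step()
print(float(loss))
```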
(This article belongs to the Special Issue Feature Papers in Fault Diagnosis & Sensors 2024)
Figure 1: Prototype of CSA sensor (dimensions in mm).
Figure 2: CSA-based EMI measurement 2 DOFs model for concrete structure: (a) CSA-embedded concrete structure; (b) 2 DOFs model.
Figure 3: Behavior of CSA sensor in x-direction embedded in concrete cylinder under compression.
Figure 4: Diagram of 1D CNN stress estimation model using CSA's EMI signals.
Figure 5: Architecture of 1D CNN stress estimation model using EMI signals [24].
Figure 6: Data configuration for noise-contaminated EMI cases.
Figure 7: Data configuration for untrained stress-EMI cases.
Figure 8: CSA prototype (dimensions in mm): (a) CSA sensor's components; (b) fabricated CSA.
Figure 9: Fabrication of CSA-embedded concrete cylinder (dimensions in mm).
Figure 10: Testing setup for EMI measuring from x-CSA-embedded concrete cylinder under compression.
Figure 11: Applied loading history on x-CSA-embedded cylinder.
Figure 12: EMI responses (in average of ensembles) of CSA in cylinder under applied stresses S0–S8: (a) S0; (b) S1; (c) S2; (d) S3; (e) S4; (f) S5; (g) S6; (h) S7; (i) S8.
Figure 13: EMI responses (in average of ensembles) of CSA in cylinder under applied stresses S0–S8.
Figure 14: Visual observation of cylinder during loading steps S0–S8.
Figure 15: EMI features of x-CSA under applied stresses: (a) RMSE; (b) CCD.
Figure 16: Visualization of labeled EMI data in training set.
Figure 17: Example of noise-contaminated EMI signals under stress level S1 in testing set: (a) 2%; (b) 4%; (c) 6%; (d) 8%; (e) 10%; (f) 12%; (g) 14%; (h) 16%.
Figure 18: Loss values after 100 epochs.
Figure 19: (a) 0%; (b) 2%; (c) 4%; (d) 6%; (e) 8%; (f) 10%; (g) 12%; (h) 14%; (i) 16%.
Figure 20: RMSE error under noise levels: (a) levels of noise 0–5% (trained levels); (b) levels of noise 6–16% (untrained levels).
Figure 21: Training data in untrained cases: (a) untrained case 1 (excluded stress S2); (b) untrained case 2 (excluded stress S2 and S4).
Figure 22: Example of noise-added EMI signals under stress level S1 in testing set: (a) 0%; (b) 1%; (c) 2%; (d) 3%; (e) 4%; (f) 5%.
Figure 23: Loss values for untrained cases: (a) case 1 (excluded stress S2); (b) case 2 (excluded stress S2 and S4); (c) case 3 (excluded stress S2, S4, and S6); (d) case 4 (excluded stress S2, S4, S6, and S8).
Figure 24: Stress estimation for untrained case 1 (excluded stress S2): (a) 0% noise; (b) 1% noise; (c) 2% noise; (d) 3% noise; (e) 4% noise; (f) 5% noise.
Figure 25: Stress estimation for untrained case 2 (excluded stress S2 and S4): (a) 0% noise; (b) 1% noise; (c) 2% noise; (d) 3% noise; (e) 4% noise; (f) 5% noise.
Figure 26: Stress estimation for untrained case 3 (excluded stress S2, S4, and S6): (a) 0% noise; (b) 1% noise; (c) 2% noise; (d) 3% noise; (e) 4% noise; (f) 5% noise.
Figure 27: Stress estimation for untrained case 4 (excluded stress S2, S4, S6, and S8): (a) 0% noise; (b) 1% noise; (c) 2% noise; (d) 3% noise; (e) 4% noise; (f) 5% noise.
Figure 28: RMSE error in untrained cases: (a) untrained case 1 (excluded stress S2); (b) untrained case 2 (excluded stress S2 and S4); (c) untrained case 3 (excluded stress S2, S4, and S6); (d) untrained case 4 (excluded stress S2, S4, S6, and S8).
15 pages, 983 KiB  
Article
Trustworthy High-Performance Multiplayer Games with Trust-but-Verify Protocol Sensor Validation
by Alexander Joens, Ananth A. Jillepalli and Frederick T. Sheldon
Sensors 2024, 24(14), 4737; https://doi.org/10.3390/s24144737 - 21 Jul 2024
Viewed by 557
Abstract
Multiplayer online video games are a multibillion-dollar industry, to which widespread cheating presents a significant threat. Game designers compromise on game security to meet demanding performance targets, but reduced security increases the risk of potential malicious exploitation. To mitigate this risk, game developers implement alternative security sensors. The alternative sensors themselves become a liability due to their intrusive and taxing nature. Online multiplayer games with real-time gameplay are known to be difficult to secure due to the cascading exponential nature of many-to-many relationships among the components involved. Behavior-based security sensor schemes, or referees (a trusted third party), could be a potential solution but require frameworks to obtain the game state information they need. We describe our Trust-Verify Game Protocol (TVGP), a sensor protocol intended for low-trust environments and designed to provide game state information to help support behavior-based cheat-sensing detection schemes. We argue that TVGP is an effective solution for applying an independent trusted-referee capability to trust-lacking subdomains with demanding performance requirements. Our experimental results validate high efficiency and performance standards for TVGP. We identify and discuss the operational domain assumptions of the TVGP validation testing presented here. Full article
(This article belongs to the Special Issue Intelligent Solutions for Cybersecurity)
Figure 1: A high-level abstract of the Trust-Verify Game Protocol (TVGP) system.
Figure A1: Trust-Verify Game Protocol (TVGP): client-to-server referee process. TVGP recording messages sent from the client to the server.
18 pages, 4243 KiB  
Article
An Optimal Spatio-Temporal Hybrid Model Based on Wavelet Transform for Early Fault Detection
by Jingyang Xing, Fangfang Li, Xiaoyu Ma and Qiuyue Qin
Sensors 2024, 24(14), 4736; https://doi.org/10.3390/s24144736 - 21 Jul 2024
Viewed by 625
Abstract
An optimal spatio-temporal hybrid model (STHM) based on wavelet transform (WT) is proposed to improve the sensitivity and accuracy of detecting slowly evolving faults that occur in the early stage and are easily submerged in noise in complex industrial production systems. Specifically, a WT is performed to denoise the original data, thus reducing the influence of background noise. Then, a principal component analysis (PCA) and the sliding window algorithm are used to acquire the nearest neighbors in both spatial and time dimensions. Subsequently, the cumulative sum (CUSUM) and the Mahalanobis distance (MD) are used to reconstruct the hybrid statistic with spatial and temporal sequences. This helps enhance the correlation between high-frequency temporal dynamics and space and improves fault detection precision. Moreover, the kernel density estimation (KDE) method is used to estimate the upper threshold of the hybrid statistic so as to optimize the fault detection process. Finally, simulations are conducted by applying the WT-based optimal STHM in the early fault detection of the Tennessee Eastman (TE) process, with the aim of proving that the proposed fault detection method has a high fault detection rate (FDR) and a low false alarm rate (FAR), and it can improve both production safety and product quality. Full article
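As a small illustration of the denoising front end and change-detection statistic described here, the sketch below applies wavelet thresholding with PyWavelets and then runs a one-sided CUSUM on the denoised signal. The wavelet family, threshold rule, baseline window, and CUSUM parameters are generic defaults, not the values used in the paper.

```python
# Illustrative sketch (library: PyWavelets); wavelet family, threshold rule,
# and CUSUM parameters are generic defaults, not the paper's settings.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def cusum(x, drift=0.1):
    """One-sided upper CUSUM of deviations from an assumed fault-free baseline."""
    s, out = 0.0, []
    mu = np.mean(x[:100])   # baseline mean from the first 100 samples
    for v in x:
        s = max(0.0, s + (v - mu) - drift)
        out.append(s)
    return np.array(out)

# Synthetic example: a slow drift fault starting at sample 600, buried in noise.
x = np.random.randn(1000) * 0.5
x[600:] += np.linspace(0, 2, 400)
stat = cusum(wavelet_denoise(x))
print("alarm at sample:", int(np.argmax(stat > 5.0)))
```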
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures

Figure 1

Figure 1
<p>Wavelet denoising workflow diagram.</p>
Full article ">Figure 2
<p>Flowchart of early fault detection based on the spatio-temporal hybrid model.</p>
Full article ">Figure 3
<p>Process flow diagram of the TE process.</p>
Full article ">Figure 4
<p>Detection results of statistical measures for Fault 4 using different detection methods. (<b>a</b>) Demonstration of raw data for Fault 1; (<b>b</b>) demonstration of denoised data for Fault 1; (<b>c</b>) demonstration of raw data for Fault 2; (<b>d</b>) demonstration of denoised data for Fault 2; (<b>e</b>) demonstration of raw data for Fault 3; (<b>f</b>) demonstration of denoised data for Fault 3; (<b>g</b>) demonstration of raw data for Fault 4; (<b>h</b>) demonstration of denoised data for Fault 4; (<b>i</b>) demonstration of raw data for Fault 5; (<b>j</b>) demonstration of denoised data for Fault 5; (<b>k</b>) demonstration of raw data for Fault 8; (<b>l</b>) demonstration of denoised data for Fault 8; (<b>m</b>) demonstration of raw data for Fault 10; (<b>n</b>) demonstration of denoised data for Fault 10; (<b>o</b>) demonstration of raw data for Fault 12; (<b>p</b>) demonstration of denoised data for Fault 12; (<b>q</b>) demonstration of raw data for Fault 13; and (<b>r</b>) demonstration of denoised data for Fault 13.</p>
Full article ">Figure 4 Cont.
<p>Detection results of statistical measures for Fault 4 using different detection methods. (<b>a</b>) Demonstration of raw data for Fault 1; (<b>b</b>) demonstration of denoised data for Fault 1; (<b>c</b>) demonstration of raw data for Fault 2; (<b>d</b>) demonstration of denoised data for Fault 2; (<b>e</b>) demonstration of raw data for Fault 3; (<b>f</b>) demonstration of denoised data for Fault 3; (<b>g</b>) demonstration of raw data for Fault 4; (<b>h</b>) demonstration of denoised data for Fault 4; (<b>i</b>) demonstration of raw data for Fault 5; (<b>j</b>) demonstration of denoised data for Fault 5; (<b>k</b>) demonstration of raw data for Fault 8; (<b>l</b>) demonstration of denoised data for Fault 8; (<b>m</b>) demonstration of raw data for Fault 10; (<b>n</b>) demonstration of denoised data for Fault 10; (<b>o</b>) demonstration of raw data for Fault 12; (<b>p</b>) demonstration of denoised data for Fault 12; (<b>q</b>) demonstration of raw data for Fault 13; and (<b>r</b>) demonstration of denoised data for Fault 13.</p>
Full article ">Figure 4 Cont.
<p>Detection results of statistical measures for Fault 4 using different detection methods. (<b>a</b>) Demonstration of raw data for Fault 1; (<b>b</b>) demonstration of denoised data for Fault 1; (<b>c</b>) demonstration of raw data for Fault 2; (<b>d</b>) demonstration of denoised data for Fault 2; (<b>e</b>) demonstration of raw data for Fault 3; (<b>f</b>) demonstration of denoised data for Fault 3; (<b>g</b>) demonstration of raw data for Fault 4; (<b>h</b>) demonstration of denoised data for Fault 4; (<b>i</b>) demonstration of raw data for Fault 5; (<b>j</b>) demonstration of denoised data for Fault 5; (<b>k</b>) demonstration of raw data for Fault 8; (<b>l</b>) demonstration of denoised data for Fault 8; (<b>m</b>) demonstration of raw data for Fault 10; (<b>n</b>) demonstration of denoised data for Fault 10; (<b>o</b>) demonstration of raw data for Fault 12; (<b>p</b>) demonstration of denoised data for Fault 12; (<b>q</b>) demonstration of raw data for Fault 13; and (<b>r</b>) demonstration of denoised data for Fault 13.</p>
Full article ">Figure 5
<p>Detection results of statistical measures for Fault 4 using different detection methods. (<b>a</b>) Statistical measures of the STHM method; (<b>b</b>) statistical measures of the STN method; (<b>c</b>) <math display="inline"><semantics> <mrow> <msup> <mrow> <mi>T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msup> </mrow> </semantics></math> statistics of the PCA method; and (<b>d</b>) SPE statistics of the PCA method.</p>
Full article ">Figure 5 Cont.
<p>Detection results of statistical measures for Fault 4 using different detection methods. (<b>a</b>) Statistical measures of the STHM method; (<b>b</b>) statistical measures of the STN method; (<b>c</b>) <math display="inline"><semantics> <mrow> <msup> <mrow> <mi>T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msup> </mrow> </semantics></math> statistics of the PCA method; and (<b>d</b>) SPE statistics of the PCA method.</p>
Full article ">Figure 6
<p>Detection results of statistical measures for Fault 10 using different detection methods. (<b>a</b>) Statistical measures of the STHM method; (<b>b</b>) statistical measures of the STN method; (<b>c</b>) <math display="inline"><semantics> <mrow> <msup> <mrow> <mi>T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msup> </mrow> </semantics></math> statistics of the PCA method; and (<b>d</b>) SPE statistics of the PCA method.</p>
Full article ">Figure 7
<p>Detection results of statistical measures for Fault 15 using different detection methods. (<b>a</b>) Statistical measures of the STHM method; (<b>b</b>) statistical measures of the STN method; (<b>c</b>) <math display="inline"><semantics> <mrow> <msup> <mrow> <mi>T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msup> </mrow> </semantics></math> statistics of the PCA method; and (<b>d</b>) SPE statistics of the PCA method.</p>
Figure 8
Comparison chart of FDR effects for different detection methods.
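The captions above contrast the STHM and STN statistical measures with the classical PCA monitoring statistics, Hotelling's T² and the squared prediction error (SPE). For readers unfamiliar with the latter, the following is a minimal Python sketch of how PCA-based T² and SPE are typically computed for process fault detection; the function names, the standardization step, and the percentile-based control limit mentioned afterwards are illustrative assumptions and not taken from the paper.

```python
import numpy as np

def fit_pca_monitor(X_train, n_components):
    """Fit a PCA monitoring model on normal-operation data (rows = samples)."""
    mean, std = X_train.mean(axis=0), X_train.std(axis=0)
    Xs = (X_train - mean) / std
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # sort components by explained variance
    P = eigvecs[:, order[:n_components]]       # retained loadings
    lam = eigvals[order[:n_components]]        # retained variances
    return mean, std, P, lam

def t2_spe(x, mean, std, P, lam):
    """Hotelling's T^2 and SPE (Q) statistics for a single sample x."""
    xs = (x - mean) / std
    t = P.T @ xs                               # scores in the principal subspace
    t2 = float(np.sum(t**2 / lam))             # T^2: variance-weighted distance in score space
    r = xs - P @ t                             # residual not explained by the PCA model
    spe = float(r @ r)                         # SPE: squared prediction error
    return t2, spe
```

A fault would then be flagged whenever either statistic exceeds a control limit estimated from fault-free training data, for example an empirical 99th percentile of the training-set statistics.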
29 pages, 5115 KiB  
Article
Open-Vocabulary Predictive World Models from Sensor Observations
by Robin Karlsson, Ruslan Asfandiyarov, Alexander Carballo, Keisuke Fujii, Kento Ohtani and Kazuya Takeda
Sensors 2024, 24(14), 4735; https://doi.org/10.3390/s24144735 - 21 Jul 2024
Viewed by 437
Abstract
Cognitive scientists believe that adaptable intelligent agents like humans perform spatial reasoning tasks by learned causal mental simulation. The problem of learning these simulations is called predictive world modeling. We present the first framework for learning an open-vocabulary predictive world model (OV-PWM) from sensor observations. The model is implemented through a hierarchical variational autoencoder (HVAE) capable of predicting diverse and accurate fully observed environments from accumulated partial observations. We show that the OV-PWM can model high-dimensional embedding maps of latent compositional embeddings representing sets of overlapping semantics inferable by sufficient similarity inference. The OV-PWM simplifies the prior two-stage closed-set PWM approach to a single-stage end-to-end learning method. CARLA simulator experiments show that the OV-PWM can learn compact latent representations and generate diverse and accurate worlds with fine details like road markings, achieving 69 mIoU over six query semantics on an urban evaluation sequence. We propose the OV-PWM as a versatile continual learning paradigm for providing spatio-semantic memory and learned internal simulation capabilities to future general-purpose mobile robots. Full article
(This article belongs to the Collection Robotics and 3D Computer Vision)
Show Figures

Figure 1
The framework integrates open-vocabulary semantic point cloud observations into a common vector space. A predictive world model samples a set of diverse plausible complete world states from a partially observed state. The model improves through continual learning from experience by comparing predicted and observed future states based on predictive coding. High-dimensional semantic embeddings are projected as RGB color values for visualization.
Figure 2
The process of transforming sensor observations into open-vocabulary partial world states. A semantic segmentation model interprets images. The inferred semantic embedding map is attached to a point cloud. Sequential semantic point clouds are accumulated into an ego-centric reference frame. Top-down projection creates BEV representations. BEVs can be measured for their similarity and sufficient similarity with a query semantic. High-dimensional semantic embeddings are projected as RGB color values for visualization.
Figure 3
Predictive world model. The encoder Enc_θ(·) learns the hierarchical latent variables Z representing the environment x̂* conditioned on the past-to-future partially observed state x*. The posterior matching encoder Enc_ϕ(·) learns to predict the same distribution Z from the past-to-present state x. The decoder Dec_θ learns to reconstruct diverse and plausible complete states x̂* from Z.
Figure 4
Training plots. The mean ELBO (Equation (28)), cosine distance (Equation (31)), posterior (Equation (29)), and posterior matching (Equation (30)) distribution separation metrics continue to decrease with additional computation. See Section 4.2 for an explanation of the partially observed states x, x*, and predicted complete states x̂*.
Figure 5
Conditional sampling visualizations. The high-dimensional open-vocabulary partial observation input x and sampled predictive world model output x̂* are projected onto RGB images by PCA projection. Semantic inferences by sufficient similarity are shown in the third column. The actual worlds perceived in future observations are shown in the fourth column. The first three rows show evaluation samples. The remaining two rows show samples from the training distribution.
Figure 6
Unconditional sampling visualizations. High-dimensional open-vocabulary embedding maps are generated by the predictive world model p_θ(x|Z) through sampling from the learned prior distribution p_θ(Z). The embedding maps are visualized as RGB images by PCA projection.
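The abstract above describes reading semantics out of the predicted open-vocabulary embedding map by "sufficient similarity" with a query semantic. Below is a minimal sketch of that querying step, assuming the map cells and the text query share one embedding space (for example, CLIP-style embeddings) and that a fixed cosine-similarity threshold decides sufficiency; the function name, the threshold value, and the use of cosine similarity are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def semantic_mask(embedding_map, query_embedding, threshold=0.8):
    """Boolean BEV mask of cells whose embedding is sufficiently similar to the query.

    embedding_map: (H, W, D) array of per-cell semantic embeddings.
    query_embedding: (D,) embedding of a text query such as "road marking".
    """
    # Normalize so that the dot product equals cosine similarity
    emb = embedding_map / (np.linalg.norm(embedding_map, axis=-1, keepdims=True) + 1e-8)
    q = query_embedding / (np.linalg.norm(query_embedding) + 1e-8)
    similarity = emb @ q                  # (H, W) cosine similarity per BEV cell
    return similarity >= threshold        # cells judged to depict the queried semantic
```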
19 pages, 6068 KiB  
Article
Inversion Method for Transformer Winding Hot Spot Temperature Based on Gated Recurrent Unit and Self-Attention and Temperature Lag
by Yuefeng Hao, Zhanlong Zhang, Xueli Liu, Yu Yang and Jun Liu
Sensors 2024, 24(14), 4734; https://doi.org/10.3390/s24144734 - 21 Jul 2024
Viewed by 465
Abstract
The hot spot temperature of transformer windings is an important indicator for measuring insulation performance, and its accurate inversion is crucial to ensure the timely and accurate fault prediction of transformers. However, existing studies mostly directly input obtained experimental or operational data into networks to construct data-driven models, without considering the lag between temperatures, which may lead to insufficient accuracy of the inversion model. In this paper, a method for inverting the hot spot temperature of transformer windings based on the SA-GRU model is proposed. Firstly, temperature rise experiments are designed to collect the temperatures of the entire side and top of the transformer tank, the top oil temperature, the ambient temperature, the cooling inlet and outlet temperatures, and the winding hot spot temperature. Secondly, the experimental data are integrated, considering the lag of the data, to obtain candidate input feature parameters. Then, a feature selection algorithm based on mutual information (MI) is used to analyze the correlation of the data and construct the optimal feature subset to ensure the maximum information gain. Finally, Self-Attention (SA) is applied to optimize the Gated Recurrent Unit (GRU) network, establishing the GRU-SA model to capture the latent relationships between the input and output feature parameters, achieving precise inversion of the hot spot temperature of the transformer windings. The experimental results show that considering the lag of the data allows the winding hot spot temperature to be inverted more accurately. The proposed inversion method reduces redundant input features, lowers the complexity of the model, accurately tracks the changing trend of the hot spot temperature, and achieves higher inversion accuracy than other classical models. Full article
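As a rough illustration of the mutual-information feature selection and temperature-lag handling described in the abstract, the sketch below builds lagged copies of candidate measurements and ranks them by MI with the winding hot spot temperature; the use of scikit-learn's mutual_info_regression, the lag set, and the top-k cut-off are assumptions for illustration, not the authors' exact procedure.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

def build_lagged_features(df, columns, lags=(1, 2, 3)):
    """Append lagged copies of the given columns to model the temperature lag."""
    out = df.copy()
    for col in columns:
        for lag in lags:
            out[f"{col}_lag{lag}"] = out[col].shift(lag)
    return out.dropna()

def select_by_mutual_info(X, y, k=8):
    """Rank candidate features by mutual information with the target and keep the top k."""
    mi = mutual_info_regression(X, y, random_state=0)
    ranked = pd.Series(mi, index=X.columns).sort_values(ascending=False)
    return list(ranked.index[:k]), ranked
```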
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
Show Figures

Figure 1
Thermal circuit model of transformer.
Figure 2
Simplified thermal circuit model.
Figure 3
Flowchart of the inversion method.
Figure 4
Layout of fiber optics: (a) fiber for measuring top oil temperature; (b) fiber for measuring winding hotspot temperature.
Figure 5
Layout diagram of infrared thermal imagers: (a) top of the tank; (b) side of the tank.
Figure 6
Sample of experimental data.
Figure 7
Flowchart of the feature selection process.
Figure 8
Structure of the GRU.
Figure 9
Architecture of the GRU-SA network.
Figure 10
MI calculation results between input features and winding hotspot temperature.
Figure 11
Relationship between model inversion errors and number of input features: (a) MSE; (b) MAE; (c) R²; (d) MAPE.
Figure 12
Inversion results with optimal feature set.
Figure 13
MAE values of the inversion model under different parameter settings.
Figure 14
Comparison of results from different inversion methods.
Figure 15
Trend of changes in actual and inversion values of hot spot temperatures.
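For readers unfamiliar with combining a GRU with self-attention as in the GRU-SA network of Figures 8 and 9, here is a minimal PyTorch sketch in which the GRU hidden-state sequence is reweighted by a self-attention layer before a regression head predicts the hot spot temperature; the layer sizes, single attention head, and last-step readout are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class GRUSelfAttentionRegressor(nn.Module):
    """GRU encoder, self-attention over time steps, then a linear regression head."""
    def __init__(self, n_features, hidden=64, heads=1):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        h, _ = self.gru(x)                 # hidden states for every time step
        a, _ = self.attn(h, h, h)          # self-attention across the time axis
        return self.head(a[:, -1, :])      # regress the temperature from the last step

# Example: a batch of 4 sequences, 24 time steps, 6 input features
model = GRUSelfAttentionRegressor(n_features=6)
y_hat = model(torch.randn(4, 24, 6))       # -> tensor of shape (4, 1)
```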
14 pages, 10497 KiB  
Article
TTFDNet: Precise Depth Estimation from Single-Frame Fringe Patterns
by Yi Cai, Mingyu Guo, Congying Wang, Xiaowei Lu, Xuanke Zeng, Yiling Sun, Yuexia Ai, Shixiang Xu and Jingzhen Li
Sensors 2024, 24(14), 4733; https://doi.org/10.3390/s24144733 - 21 Jul 2024
Viewed by 551
Abstract
This work presents TTFDNet, a transformer-based transfer learning network for end-to-end depth estimation from single-frame fringe patterns in fringe projection profilometry. TTFDNet features a precise contour and coarse depth (PCCD) pre-processor, a global multi-dimensional fusion (GMDF) module, and a progressive depth extractor (PDE). It utilizes transfer learning through fringe structure consistency evaluation (FSCE) to leverage the transformer’s benefits even on a small dataset. Tested on 208 scenes, the model achieved a mean absolute error (MAE) of 0.00372 mm, outperforming the Unet (0.03458 mm), PDE (0.01063 mm), and PCTNet (0.00518 mm) models. It demonstrated precise measurement capabilities with deviations of ~90 μm for a 25.4 mm radius ball and ~6 μm for a 20 mm thick metal part. Additionally, TTFDNet showed excellent generalization and robustness in dynamic reconstruction and varied imaging conditions, making it appropriate for practical applications in manufacturing, automation, and computer vision. Full article
(This article belongs to the Special Issue Deep Learning for Computer Vision and Image Processing Sensors)
Show Figures

Figure 1
Overview of the TTFDNet model.
Figure 2
The schematic diagram of the precise contour and coarse depth (PCCD) pre-processor.
Figure 3
FSCE process for fine-tuning the PCCD pre-processor. (a) Input fringe pattern. (b) PCCD prediction before any fine-tuning. (c) PCCD prediction after fine-tuning without FSCE. (d) PCCD prediction after fine-tuning with FSCE. (e) Ground truth.
Figure 4
The structure of the progressive depth extractor (PDE).
Figure 5
Objects (a,b) and fringes projected onto objects (c–f).
Figure 6
Comparison of depth prediction and 3D reconstruction using the proposed model.
Figure 7
A 3D reconstruction of standard parts based on TTFDNet.
Figure 8
Height changes of a certain point on the fan.
Figure 9
The predicted depth maps for a rotating fan.
Figure 10
Predicted depth maps in varied imaging conditions. From left to right are the ground truth and predictions from four different methods. (a1–e1) show the overall scene; (a2–e2) are zoomed-in views of the left object from the overall scene; (a3–e3) are zoomed-in views of the right object from the overall scene; (a4–e4) show the predicted maps of scenes composed of different objects.
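The MAE values quoted in the abstract (for example, 0.00372 mm for TTFDNet) are mean absolute errors between predicted and ground-truth depth maps. A minimal sketch of that evaluation is given below, with an optional validity mask since fringe-projection ground truth often contains invalid pixels; the masking convention is an assumption, not necessarily the authors' protocol.

```python
import numpy as np

def depth_mae(pred, gt, valid_mask=None):
    """Mean absolute error between depth maps, in the maps' units (e.g., mm)."""
    if valid_mask is None:
        valid_mask = np.isfinite(gt)       # ignore pixels without ground truth
    return float(np.mean(np.abs(pred[valid_mask] - gt[valid_mask])))
```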
27 pages, 9601 KiB  
Article
Three-Dimensional Reconstruction and Visualization of Underwater Bridge Piers Using Sonar Imaging
by Jianbin Luo, Shaofei Jiang, Yamian Zeng and Changqin Lai
Sensors 2024, 24(14), 4732; https://doi.org/10.3390/s24144732 - 21 Jul 2024
Viewed by 688
Abstract
The quality of underwater bridge piers significantly impacts bridge safety and long-term usability. To address limitations in conventional inspection methods, this paper presents a sonar-based technique for the three-dimensional (3D) reconstruction and visualization of underwater bridge piers. An advanced MS1000 scanning sonar is employed to detect and image bridge piers. An automated image preprocessing pipeline, comprising filtering, denoising, binarization, filling, and morphological operations, incorporates an enhanced wavelet denoising method to accurately extract the foundation contour coordinates of bridge piers from sonar images. Using these coordinates, along with undamaged pier dimensions and sonar distances, a model-driven 3D pier reconstruction algorithm is developed; the algorithm leverages multiple sonar data points to reconstruct damaged piers through multiplication. The Visualization Toolkit (VTK) and a surface contour methodology are utilized for 3D visualization, enabling interactive manipulation for enhanced observation and analysis. Experimental results indicate a relative error of 13.56% for the hole volume and 10.65% for the spalling volume, demonstrating that the reconstructed models accurately replicate bridge pier defect volumes. Experimental validation confirms the method’s accuracy and effectiveness in reconstructing underwater bridge piers in three dimensions, providing robust support for safety assessments and contributing significantly to bridge stability and long-term safety assurance. Full article
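To make the preprocessing chain in the abstract concrete (filtering, denoising, binarization, filling, and morphological operations before contour extraction), here is a minimal OpenCV sketch; the filter sizes, Otsu thresholding, and kernel shape are illustrative assumptions, and the paper's enhanced wavelet denoising step is only approximated here by a median filter.

```python
import cv2

def extract_pier_contour(gray_sonar_img):
    """Rough pipeline: denoise -> equalize -> binarize -> morphological clean-up -> contour."""
    denoised = cv2.medianBlur(gray_sonar_img, 5)        # stand-in for wavelet denoising
    equalized = cv2.equalizeHist(denoised)              # histogram equalization
    _, binary = cv2.threshold(equalized, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove speckle
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest external contour as the pier cross-section outline
    return max(contours, key=cv2.contourArea) if contours else None
```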
(This article belongs to the Special Issue Acoustic and Ultrasonic Sensing Technology in Non-destructive Testing)
Show Figures

Figure 1
Schematic diagram of acoustic imaging.
Figure 2
Schematic diagram of single-beam scanning imaging sonar scan.
Figure 3
Schematic diagram of imaging principle.
Figure 4
Schematic of measurement point beam coverage area.
Figure 5
Schematic of actual target point directional beam.
Figure 6
Flowchart of automated recognition of 2D sonar images for underwater bridge pier surface profiles.
Figure 7
Original pseudo-color sonar image of an underwater bridge pier.
Figure 8
Histogram statistical examples of partial bridge pier sonar grayscale images. (a) Grayscale sonar image of actual bridge pier. (b) Grayscale sonar image of bridge pier experimental model. (c) Actual bridge pier sonar grayscale image histogram. (d) Bridge pier experimental model sonar grayscale image histogram.
Figure 9
Results after histogram equalization. (a) Processed grayscale image of actual bridge pier. (b) Processed grayscale image of bridge pier experimental model. (c) Processed histogram of actual bridge pier. (d) Processed histogram of bridge pier experimental model.
Figure 10
Filtering of sonar image.
Figure 11
Binarization of sonar image.
Figure 12
Removal of black and white regions in sonar image. (a) Sonar image after black region removal. (b) Sonar image after white region removal.
Figure 13
Morphological erosion and dilation. (a) Erosion treatment to eliminate edge spikes. (b) Dilation treatment to preserve spikes.
Figure 14
Bridge pier contour recognition. (a) Extracted contour information. (b) Comparison with the original sonar image. (c) Comparison with the binarized sonar image.
Figure 15
Undamaged bridge pier model.
Figure 16
Schematic representation of possible 3D shapes of continuous measurement points. (a) Measurement Point 1. (b) Measurement Point 2.
Figure 17
Schematic representation of surface damage reconstruction for adjacent measurement points. (a) All control points corresponding to the measurement point. (b) Excluded contour. (c) Final surface damage contour.
Figure 18
Fitting the bridge pier contour for the current cross-section.
Figure 19
The visualization software.
Figure 20
Schematic diagram of the 7.1 m × 5.1 m × 1.5 m water tank structure.
Figure 21
The water tank hoisting system.
Figure 22
Experimental auxiliary device diagram. (a) Schematic of sonar fixed on angle steel frame. (b) Test tank and mobile operating platform. (c) Underwater turntable.
Figure 23
Concrete column with surface defects. (a) Hole. (b) Spalling.
Figure 24
Surface defect dimensions (unit: mm). (a) Hole dimensions. (b) Spalling dimensions. (c) Section p1. (d) Section p2. (e) Section p3. (f) Section p4. (g) Section p5.
Figure 25
Azimuthal layout of measurement points.
Figure 26
Test scene image.
Figure 27
Sonar images of the bridge pier model. (a) Sonar image at 0°. (b) Sonar image at 52°. (c) Sonar image at 104°. (d) Sonar image at 156°. (e) Sonar image at 208°. (f) Sonar image at 260°. (g) Sonar image at 312°.
Figure 28
Reconstructed 3D model of the bridge pier.
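The relative errors reported above (13.56% for the hole and 10.65% for the spalling) compare defect volumes measured on the reconstructed model against the designed defect volumes. A one-line sketch of that comparison follows; the variable names are placeholders.

```python
def relative_volume_error(reconstructed_volume, reference_volume):
    """Relative error of a defect volume, expressed as a percentage."""
    return abs(reconstructed_volume - reference_volume) / reference_volume * 100.0

# e.g., relative_volume_error(v_hole_model, v_hole_design) reproduces the reported
# 13.56% when the measured and designed hole volumes are substituted.
```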
Previous Issue
Next Issue
Back to Top