Sensors, Volume 22, Issue 23 (December-1 2022) – 539 articles

Cover Story: The development and application of modern technology are an essential basis for the efficient monitoring of species in natural habitats, allowing changes in ecosystems, species communities, and populations to be assessed and important drivers of change to be understood. For estimating wildlife abundance, camera trapping in combination with 3D measurements of habitats is extremely valuable, and 3D information also improves the accuracy of wildlife detection using camera traps. This study presents a novel approach to 3D camera trapping featuring stereo vision to infer the 3D information of natural habitats, designated the stereo camera trap for monitoring of biodiversity (SOCRATES). SOCRATES shows a significant improvement in animal detection and superior applicability for estimating animal abundance using camera trap distance sampling.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
31 pages, 1791 KiB  
Review
Investigating Cardiorespiratory Interaction Using Ballistocardiography and Seismocardiography—A Narrative Review
by Paniz Balali, Jeremy Rabineau, Amin Hossein, Cyril Tordeur, Olivier Debeir and Philippe van de Borne
Sensors 2022, 22(23), 9565; https://doi.org/10.3390/s22239565 - 6 Dec 2022
Cited by 18 | Viewed by 5237
Abstract
Ballistocardiography (BCG) and seismocardiography (SCG) are non-invasive techniques used to record the micromovements induced by cardiovascular activity at the body’s center of mass and on the chest, respectively. Since their inception, their potential for evaluating cardiovascular health has been studied. However, both BCG and SCG are impacted by respiration, leading to a periodic modulation of these signals. As a result, data processing algorithms have been developed to exclude the respiratory signals, or recording protocols have been designed to limit the respiratory bias. Reviewing the present status of the literature reveals an increasing interest in applying these techniques to extract respiratory information, as well as cardiac information. The possibility of simultaneous monitoring of respiratory and cardiovascular signals via BCG or SCG enables the monitoring of vital signs during activities that require considerable mental concentration, in extreme environments, or during sleep, where data acquisition must occur without introducing recording bias due to irritating monitoring equipment. This work aims to provide a theoretical and practical overview of cardiopulmonary interaction based on BCG and SCG signals. It covers the recent improvements in extracting respiratory signals, computing markers of the cardiorespiratory interaction with practical applications, and investigating sleep breathing disorders, as well as a comparison of different sensors used for these applications. According to the results of this review, recent studies have mainly concentrated on a few domains, especially sleep studies and heart rate variability computation. Even in those instances, the study population is not always large or diversified. Furthermore, BCG and SCG are prone to movement artifacts and are relatively subject dependent. However, the growing tendency toward artificial intelligence may help achieve a more accurate and efficient diagnosis. These encouraging results bring hope that, in the near future, such compact, lightweight BCG and SCG devices will offer a good proxy for the gold standard methods for assessing cardiorespiratory function, with the added benefit of being able to perform measurements in real-world situations, outside of the clinic, and thus decrease costs and time.
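The respiratory modulation described above is, in many of the reviewed pipelines, separated from the cardiac content by simple band-pass filtering. A minimal sketch of that idea in Python; the cutoff frequencies and the 100 Hz sampling rate are assumptions chosen for illustration, not values taken from this review:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_scg_bands(scg, fs=100.0):
    """Split an SCG trace into respiratory and cardiac components using
    zero-phase Butterworth band-pass filters (illustrative cutoffs)."""
    # Respiration dominates roughly 0.1-0.5 Hz (6-30 breaths per minute).
    resp_sos = butter(4, [0.1, 0.5], btype="bandpass", fs=fs, output="sos")
    # Cardiac micromovements sit higher; 1-20 Hz is a common working band.
    card_sos = butter(4, [1.0, 20.0], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(resp_sos, scg), sosfiltfilt(card_sos, scg)

# Synthetic demo: 0.25 Hz "respiration" superimposed on 1.2 Hz "cardiac" motion.
t = np.arange(0.0, 60.0, 1.0 / 100.0)
scg = 0.5 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
resp, card = split_scg_bands(scg)
```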
(This article belongs to the Section Biomedical Sensors)
Figure 1. BCG waveform. The time traces of the BCG acceleration signal in the longitudinal axis (y) for a healthy subject are shown together with the ECG signal in one heart beat (in arbitrary units). As opposed to the SCG, the BCG represents global movements, and its waves are more difficult to associate with a single event of the cardiovascular cycle. However, the F, G, and H waves are related to events occurring before the ventricular systolic phase. Indeed, the H wave was shown to be associated with the atrial contraction, while the I and J waves are associated with the ejection of blood into the aorta and other large vessels. The K and following waves are thought to be due to the reflection of the pressure wave at the peripheral vessels and to the resulting oscillations in the center of mass of the overall blood volume.
Figure 2. SCG waveform. The time traces of the SCG acceleration signal in the dorsoventral axis (z) for a healthy subject are shown alongside the ECG signal in one heart beat (in arbitrary units). The SCG labels correspond to the mitral valve closure (MC) and opening (MO), aortic valve closure (AC) and opening (AO), rapid ejection (RE), rapid filling (RF), and isovolumetric contraction (IVC).
Figure 3. Outline of the review.
Figure 4. Respiratory variations of the SCG and BCG signals. The time traces of an ECG, SCG (dorsoventral direction), and BCG (head-to-foot direction) are plotted against the concurrently recorded respiration signal. The SCG signal is visibly affected by breathing, with higher amplitudes during expiration than during inspiration. Opposite changes are visible in the amplitude of the BCG signal, which decreases during expiration.
20 pages, 3483 KiB  
Article
Emergency Braking Evoked Brain Activities during Distracted Driving
by Changcheng Shi, Lirong Yan, Jiawen Zhang, Yu Cheng, Fumin Peng and Fuwu Yan
Sensors 2022, 22(23), 9564; https://doi.org/10.3390/s22239564 - 6 Dec 2022
Cited by 1 | Viewed by 2553
Abstract
Electroencephalogram (EEG) was used to analyze the mechanisms and differences in brain neural activity of drivers in visual, auditory, and cognitive distracted vs. normal driving emergency braking conditions. A pedestrian intrusion emergency braking stimulus module and three distraction subtasks were designed in a simulated experiment, and 30 subjects participated in the study. The common activated brain regions during emergency braking in different distracted driving states included the inferior temporal gyrus, associated with visual information processing and attention; the left dorsolateral superior frontal gyrus, related to cognitive decision-making; and the postcentral gyrus, supplementary motor area, and paracentral lobule associated with motor control and coordination. When performing emergency braking under different driving distraction states, the brain regions were activated in accordance with the need to process the specific distraction task. Furthermore, the extent and degree of activation of cognitive function-related prefrontal regions increased accordingly with the increasing task complexity. All distractions caused a lag in emergency braking reaction time, with 107.22, 67.15, and 126.38 ms for visual, auditory, and cognitive distractions, respectively. Auditory distraction had the least effect and cognitive distraction the greatest effect on the lag.
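The condition-wise comparisons summarized here presuppose cutting the continuous EEG into stimulus-locked epochs around each braking event. A minimal epoching-and-averaging sketch, assuming a 500 Hz recording and braking-stimulus onsets given as sample indices (the authors' actual pipeline is not described in this abstract):

```python
import numpy as np

def epoch_eeg(eeg, onsets, fs=500, tmin=-0.2, tmax=0.8):
    """Cut continuous EEG (channels x samples) into stimulus-locked epochs
    and baseline-correct each epoch with its pre-stimulus mean."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in onsets:  # stimulus onsets as sample indices
        seg = eeg[:, onset - pre : onset + post].copy()
        seg -= seg[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
        epochs.append(seg)
    return np.stack(epochs)  # shape: epochs x channels x samples

# Condition average (e.g., emergency braking under visual distraction):
# erp = epoch_eeg(eeg, brake_onsets).mean(axis=0)
```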
(This article belongs to the Section Biomedical Sensors)
Figure 1. Simulated driving platform. (A) The road for the simulation was designed with curves and slopes to simulate a real driving road. (B) For the simulated scenario, the subject wore an EEG cap on the head, and EEG information was collected while simulating driving. (C) Simulation of the driving display screen showed the current lap number and speed in the upper left corner, and the visual and cognitive distraction screen in the upper right corner when visual or cognitive distraction occurred.
Figure 2. Visual and auditory distraction paradigm. (A) Visual distraction paradigm. (B) Auditory distraction paradigm.
Figure 3. Emergency braking brain area activation in four driving conditions (p < 0.05 (FWE), extent threshold k > 100 voxels). (A) Normal driving. (B) Visual distraction driving. (C) Auditory distraction driving. (D) Cognitive distraction driving. Below the axial-viewed image is the Montreal Neurological Institute (MNI) Z-coordinate of the peak of the current activation cluster.
Figure 4. ANOVA of emergency braking in four driving conditions vs. normal driving (p < 0.001 (FWE), extent threshold k > 100 voxels). (A) Emergency braking under normal driving vs. normal driving. (B) Emergency braking under visual distraction driving vs. normal driving. (C) Emergency braking under auditory distraction driving vs. normal driving. (D) Emergency braking under cognitive distraction driving vs. normal driving. Below the axial-viewed image is the Montreal Neurological Institute (MNI) Z-coordinate of the peak of the current activation cluster.
Figure 5. ANOVA of emergency braking in distracted driving vs. emergency braking in normal driving (p < 0.001 (FWE), extent threshold k > 50 voxels). (A) Visual distraction driving vs. normal driving. (B) Auditory distraction driving vs. normal driving. (C) Cognitive distraction driving vs. normal driving. Below the axial-viewed image is the Montreal Neurological Institute (MNI) Z-coordinate of the peak of the current activation cluster.
Figure 6. Analysis of emergency braking response time under four driving states. (A) Average response time. (B) Statistical differences in response time (*** p < 0.001).
13 pages, 5941 KiB  
Article
Reduction of Crosstalk Errors in a Surface Encoder Having a Long Z-Directional Measuring Range
by Yifan Hong, Ryo Sato, Yuki Shimizu, Hiraku Matsukuma, Hiroki Shimizu and Wei Gao
Sensors 2022, 22(23), 9563; https://doi.org/10.3390/s22239563 - 6 Dec 2022
Cited by 4 | Viewed by 2022
Abstract
A modified two-axis surface encoder is proposed to separately measure both the in-plane displacement and the Z-directional out-of-plane displacement with minor crosstalk errors. The surface encoder is composed of a scale grating and a small-sized sensor head. In the modified surface encoder, the measurement laser beam from the sensor head is designed to be projected onto the scale grating at a right angle. For measurement of the X- and Y-directional in-plane scale displacement, the positive and negative first-order diffracted beams from the scale grating are superimposed on each other in the sensor head, producing interference signals. On the other hand, the Z-directional out-of-plane scale displacement is measured based on the principle of a Michelson-type interferometer. To avoid the influence of reflection from the middle area of the transparent grating, which caused periodic crosstalk errors in the previous research, a specially fabricated transparent grating with a hole in the middle is employed in the newly designed optical system. A prototype sensor head is constructed, and the basic performance of the modified surface encoder is tested by experiments.
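Both readout channels convert an optical phase change into a displacement. For orientation, the standard relations for these two principles are sketched below, with g the grating pitch and λ the laser wavelength; these are generic textbook forms, not constants quoted from the paper:

```latex
% In-plane reading from superimposed +/-1st-order diffracted beams,
% out-of-plane reading from the Michelson-type interferometer:
\Delta x = \frac{g}{2}\,\frac{\Delta\phi_x}{2\pi},
\qquad
\Delta z = \frac{\lambda}{2}\,\frac{\Delta\phi_z}{2\pi}
```

The halved periods (g/2 and λ/2) reflect the doubled phase sensitivity of the superimposed ±1st-order beams and of the folded Michelson path, which is why even sub-period crosstalk between the two readouts matters.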
(This article belongs to the Special Issue Feature Papers in Optical Sensors 2022)
Figure 1. Optical configurations of the surface encoder: (a) The optical configuration of the conventional surface encoder with an expanded Z-directional measuring range; (b) The optical configuration of the newly proposed surface encoder.
Figure 2. Optical system related to the X-directional displacement measurement (the transparency of the unrelated components is increased).
Figure 3. Optical system related to the Z-directional displacement measurement (the transparency of the unrelated components is increased).
Figure 4. Prototype sensor head: (a) A three-dimensional model; (b) A photograph.
Figure 5. A photograph of the measurement setup.
Figure 6. Interference signal and corresponding crosstalk signal observed in the conventional surface encoder: (a) Readouts of Ix and Iz when giving the X-displacement; (b) Readouts of Ix and Iz when giving the Z-displacement.
Figure 7. Interference signal and corresponding crosstalk signal observed in the proposed surface encoder: (a) Readouts of Ix and Iz when giving the X-displacement; (b) Readouts of Ix and Iz when giving the Z-displacement.
Figure 8. Crosstalk errors observed in the conventional surface encoder: (a) Readouts of Δx and Δz when giving the X-displacement; (b) Readouts of Δx and Δz when giving the Z-displacement.
Figure 9. Crosstalk errors observed in the proposed surface encoder: (a) Readouts of Δx and Δz when giving the X-displacement; (b) Readouts of Δx and Δz when giving the Z-displacement.
Figure 10. Interpolation errors of the proposed method of this research: (a) Interpolation error in X-axis displacement measurement; (b) Interpolation error in Z-axis displacement measurement.
Figure 11. Schematic of the experimental setup when giving the offset to test the Z-directional measurement range of the surface encoder.
Figure 12. Variations of amplitudes of the interpolation errors when the Z-directional offset Δwd was applied to the scale grating: (a) Δx measurement; (b) Δz measurement.
Figure 13. Variations in amplitudes of the crosstalk errors when the Z-directional offset Δwd was applied to the scale grating: (a) Readout of Δz when X-axis displacement was applied to the scale grating; (b) Readout of Δx when Z-axis displacement was applied to the scale grating.
21 pages, 45014 KiB  
Article
Contactless Deformation Monitoring of Bridges with Spatio-Temporal Resolution: Profile Scanning and Microwave Interferometry
by Florian Schill, Chris Michel and Andrei Firus
Sensors 2022, 22(23), 9562; https://doi.org/10.3390/s22239562 - 6 Dec 2022
Cited by 10 | Viewed by 2906
Abstract
Against the background of an aging infrastructure, the condition assessment process of existing bridges is becoming an ever more challenging task for structural engineers. Short-term measurements and structural monitoring are valuable tools that can lead to a more accurate assessment of the remaining service life of structures. In this context, contactless sensors have great potential, as a wide range of applications can already be covered with relatively little effort and without having to interrupt traffic. In particular, profile scanning and microwave interferometry have become increasingly important in the research field of bridge measurement and monitoring in recent years. In contrast to other contactless displacement sensors, both technologies enable a spatially distributed detection of absolute structural displacements. In addition, their high sampling rate enables the detection of the dynamic structural behaviour. This paper analyses the two sensor types in detail and discusses their advantages and disadvantages for the deformation monitoring of bridges. It focuses on a conceptual comparison between the two technologies and then discusses the main challenges related to their application in real-world structures in operation, highlighting the respective limitations of both sensors. The findings are illustrated with measurement results at a railway bridge in operation.
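Because microwave interferometry measures along its line of sight (LOS), the vertical deflections reported for a bridge rely on a projection step. Under the simplest assumption of purely vertical structural motion (the paper also treats 2D projections), the relation is:

```latex
% 1D projection of the measured LOS displacement onto the vertical,
% with alpha the elevation angle of the LOS at the resolution cell:
d_{\mathrm{vert}} = \frac{d_{\mathrm{LOS}}}{\sin\alpha}
```

As the projection angle flattens (α → 0), both the projected displacement and its uncertainty grow rapidly, which is one of the limitations examined in the paper.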
(This article belongs to the Special Issue Structural Health Monitoring Based on Sensing Technology)
Figure 1. Schematic illustration showing the application of microwave interferometry and profile scanning for the deformation monitoring of bridges.
Figure 2. Antenna pattern for the standard antenna of the IBIS-S, vertical and horizontal (based on the User-Manual).
Figure 3. Schematic illustration of the IBIS-S measurement principle in relation to the spatial resolution cells.
Figure 4. Uncertainty of the projected displacements (vertical) for different standard deviations of the LOS displacement (upper vs. lower graphic), projection angles and precision of range and height for the resolution cells.
Figure 5. Railway framework bridge under investigation.
Figure 6. Complex framework of the underside of the bridge: Photograph vs. color-coded (height) laser scan.
Figure 7. Side view of the bridge (laser scan) showing the schematic recording geometries of the 3 setups.
Figure 8. Top and side view of the bridge with the projected resolution cells (color-coded) of the IBIS-S for Setup 2.
Figure 9. Measurement configuration of IBIS-S and IMAGER 5016 in profile mode for setup 1. On the left side a 3D scan of both sensors is shown, including the color-coded (range bins) view of the bridge underside from the IBIS-S. On the right side a cutout of a processed profile of the IMAGER 5016 is shown, which is color-coded based on the spatio-temporal processing scheme from [7].
Figure 10. Comparison of the derived displacements with both measuring systems (profile scanner and MI) at cross girder 6. Line 1 shows the measurements of the IMAGER 5016 in profile mode in blue and the IBIS-S measurements in red (no projection necessary) and line 2 displays the respective differences in black.
Figure 11. Perspective view of Setup 3 with color-coded resolution cells for the IBIS-S.
Figure 12. Comparison of the derived displacements with both measuring systems (profile scanner and MI) at cross girder 6 for setup 2 (left side) and setup 3 (right side). Line 1 shows the measurements of the IMAGER 5016 in profile mode in blue and the 1D-projected IBIS-S measurements in red, and line 2 displays the respective differences in black. Note the different scaling of the differences between the two train crossings.
Figure 13. Comparison of the derived displacement with both measuring systems (profile scanner and MI) at cross girder 4 (left side) and cross girder 5 (right side) for setup 3. The figures in line 1 show the measurements of the IMAGER 5016 in profile mode in blue and the 1D-projected IBIS-S measurements in red, and in line 2 the respective differences are displayed in black. In line 3 the horizontal displacements measured with the IMAGER 5016 in profile mode at cross girder 4 are presented in yellow. Line 4 shows the measurements of the IMAGER 5016 in profile mode in blue and the 2D-projected IBIS-S measurements in red.
13 pages, 3174 KiB  
Article
The Real-Time Validation of the Effectiveness of Third-Generation Hyperbranched Poly(ɛ-lysine) Dendrons-Modified KLVFF Sequences to Bind Amyloid-β1-42 Peptides Using an Optical Waveguide Light-Mode Spectroscopy System
by Valeria Perugini and Matteo Santin
Sensors 2022, 22(23), 9561; https://doi.org/10.3390/s22239561 - 6 Dec 2022
Cited by 1 | Viewed by 1880
Abstract
The aggregation of cytotoxic amyloid peptides (Aβ1-42) is widely recognised as the cause of brain tissue degeneration in Alzheimer’s disease (AD). Indeed, evidence indicates that the deposition of cytotoxic Aβ1-42 plaques formed through the gradual aggregation of Aβ1-42 monomers into fibrils determines the onset of AD. Thus, distinct Aβ1-42 inhibitors have been developed, and only recently, the use of short linear peptides has shown promising results by either preventing or reversing the process of Aβ1-42 aggregation. Among them, the KLVFF peptide sequence, which interacts with the hydrophobic region of Aβ16-20, has received widespread attention due to its ability to inhibit fibril formation of full-length Aβ1-42. In this study, hyperbranched poly-L-lysine dendrons presenting sixteen KLVFF at their uppermost molecular branches were designed with the aim of providing the KLVFF sequence with a molecular scaffold able to increase its stability and to improve its inhibitory effect on Aβ1-42 fibril formation. These high-purity branched KLVFF were used to functionalise the surface of the metal oxide chip of the optical waveguide lightmode spectroscopy sensor, yielding more specific, accurate and rapid measurement of Aβ1-42 than that obtained with linear KLVFF peptides.
(This article belongs to the Special Issue Lab-on-a-Chip–From Point of Care to Precision Medicine (Volume II))
Figure 1. Molecular structures of the linear KLVFF (A) and hyperbranched KLVFF [Rgen3K(KLVFF)16] (B). Dark dots at the terminal of the Rgen3K(KLVFF)16 branching indicate the KLVFF sequence.
Figure 2. Mass spectrometry of the linear KLVFF (A) and Rgen3K (B). Theoretical and experimental molecular weights are reported in the spectra (Rgen3K(KLVFF)16 spectrum data not shown as its molecular weight was outside the detection range of the equipment).
Figure 3. HPLC chromatograms of the linear KLVFF (A) and Rgen3K(KLVFF)16 (B). The peak at 4 min is assigned to the organic solvent used as mobile phase of the chromatography.
Figure 4. FTIR of linear KLVFF (A), Rgen3K and Rgen3K(KLVFF)16 (B).
Figure 5. Inhibitory effect of Aβ1-42 fibril formation by linear KLVFF and hyperbranched Rgen3K(KLVFF)16 assessed by confocal microscopy of ThT-stained (A) and Congo Red-stained ((B) Rgen3K(KLVFF)16) samples over 7 days’ incubation. Effective binding specificity was proven using a scrambled hyperbranched sequence as control. Scale bar in A: 50 nm. Error bars in B indicate standard deviations. Statistically significant differences at p ≤ 0.05 were observed in the case of samples treated with KLVFF and Rgen3K(KLVFF)16 peptides.
Figure 6. OWLS measurements of the linear KLVFF (A,C) and hyperbranched Rgen3K(KLVFF)16 (B,D) followed by their relative Aβ1-42 binding. (A,B) show the activation of the OWLS chip metal oxide surface with glutaraldehyde (GL) followed by washing steps and grafting of either KLVFF ((A), KLVFF injection) or Rgen3K(KLVFF)16 ((B), Rgen3K(KLVFF)16 injection). (C,D) show the respective intensity peak angles for both the transverse electric (IntTE) and the transverse magnetic (IntTM). The third GL injection shows formation of a plateau indicating surface saturation. The change in medium after the third GL injection leads to a slight change in the baseline signal. Experiments also include the following injection of the Aβ1-42 samples.
Figure 7. OWLS real-time monitoring of binding mass of functionalisation molecules and Aβ1-42 monomers. OWLS chip functionalised with linear KLVFF (KLVFF injection) (A) and Rgen3K(KLVFF)16 (Rgen3K(KLVFF)16 injection) (B).
33 pages, 10859 KiB  
Review
Extended Reality (XR) for Condition Assessment of Civil Engineering Structures: A Literature Review
by Fikret Necati Catbas, Furkan Luleci, Mahta Zakaria, Ulas Bagci, Joseph J. LaViola, Jr., Carolina Cruz-Neira and Dirk Reiners
Sensors 2022, 22(23), 9560; https://doi.org/10.3390/s22239560 - 6 Dec 2022
Cited by 24 | Viewed by 8357
Abstract
Condition assessment of civil engineering structures has been an active research area due to growing concerns over the safety of aged as well as new civil structures. Utilization of emerging immersive visualization technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in the architectural, engineering, and construction (AEC) industry has demonstrated that these visualization tools can be paradigm-shifting. Extended Reality (XR), an umbrella term for VR, AR, and MR technologies, has found many diverse use cases in the AEC industry. Despite this exciting trend, there is no review study on the usage of XR technologies for the condition assessment of civil structures. Thus, the present paper aims to fill this gap by presenting a literature review encompassing the utilization of XR technologies for the condition assessment of civil structures. This study aims to provide essential information and guidelines for practitioners and researchers on using XR technologies to maintain the integrity and safety of civil structures.
Figure 1. Structural condition assessment methods: traditional methods and visual inspection and structural health monitoring by their application levels.
Figure 2. Global- and local-level SHM continuum.
Figure 3. Typical idealized condition assessment procedure for civil structures.
Figure 4. Number of review studies observed in the literature on VR, AR, and MR in condition assessment of civil structures over the years.
Figure 5. The mixed reality spectrum.
Figure 6. Overview of research methodology.
Figure 7. PRISMA flow diagram of record selection according to [67].
Figure 8. Use cases of VR, AR, and MR in the AEC Industry.
Figure 9. Number of studies included in this paper on VR and AR in condition assessment of civil structures over the years.
Figure 10. Multiple views of the bridge replicated in a 3D VR environment [94].
Figure 11. The VR environment: users can explore the virtual reconstruction of a monitored structure while having direct access to values measured by the sensors deployed on it [100].
Figure 12. The volume measurement of a pile of sand in the photogrammetry model with hand gestural input [101].
Figure 13. (a) FEA reflected TLS point cloud with serviceability limit state check warning and dynamic monitoring of the midspan; (b) configuration panel of the FEA reflected TLS point cloud in immersive view; (c) dynamic monitoring of all nodes with the VR controller in immersive view; (d,e) multi-user feature; (f) iPad screen—iPad LiDAR footbridge scanning; (g,h) iPad LiDAR real-time footbridge reconstruction in the VR environment [12].
Figure 14. Prototype of inspector assistant robot [104].
Figure 15. (a) VR of non-visible bridge components and related information that clarifies the structure; (b) photogrammetry model of the bridge; (c,d) visualization and query in the VR environment of the BIM model [105].
Figure 16. Schematic overview of the introduced AR-assisted assessment framework to estimate the IDR [105].
Figure 17. (a) User interface for text annotation, (b) photographic annotation projection on the floor, (c) scanning the corner of a window to build its 3D model, (d) the stroke-type annotation [105].
Figure 18. Visual representation of the AI-supported AR framework [11].
Figure 19. (a) Inspector wearing Epson BTB-300, (b) site inspection to identify corrosion/fatigue with Epson BTB-300 [121].
Figure 20. The visualization of the NDT data in VR mode (left) and in AR mode (right) [123].
Figure 21. Real-time displacement visualization of different sensors through the HoloLens interface [11].
Figure 22. (a) Damage visualization and (b) condition rating visualization of bridge structure in HoloLens [130].
Figure 23. Descriptive visualization of HMCI for bridge inspection [131].
Figure 24. Highlights observed in XR studies presented in this paper.
Figure 25. Roadmap for condition assessment of civil structures.
17 pages, 1401 KiB  
Article
Mobile Location in Wireless Sensor Networks Based on Multi Spot Measurements Model
by Chao Zheng, Wei Hu, Jiyan Huang, Pengfei Wang, Yufei Liu and Chenyu Yang
Sensors 2022, 22(23), 9559; https://doi.org/10.3390/s22239559 - 6 Dec 2022
Cited by 1 | Viewed by 1969
Abstract
The localization of sensors in wireless sensor networks has recently gained considerable attention. The existing location methods are based on a one-spot measurement model. It is difficult to further improve the positioning accuracy of existing location methods based on single-spot measurements. This paper proposes two location methods based on multi-spot measurements to reduce location errors. Because the multi-spot measurements model has more measurement equations than the single-spot measurements model, the proposed methods provide better performance than the traditional location methods using one-spot measurement in terms of the root mean square error (RMSE) and Cramer–Rao lower bound (CRLB). Both closed-form and iterative algorithms are proposed in this paper. The former performs suboptimally with less computational burden, whereas the latter has the highest positioning accuracy in attaining the CRLB. Moreover, a novel CRLB for the proposed multi-spot measurements model is also derived in this paper. A theoretical proof shows that the traditional CRLB in the case of single-spot measurements performs worse than the proposed CRLB in the case of multi-spot measurements. The simulation results show that the traditional location methods have a higher RMSE than the proposed methods.
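The benefit of multi-spot measurements can be seen directly in a least-squares formulation: every additional measurement spot stacks another set of range equations onto the same estimation problem. Below is a generic iterative TOA solver for illustration; this is a textbook nonlinear least-squares formulation, not the authors' closed-form or proposed algorithm, and it ignores the geometric links between spots that the paper's model exploits:

```python
import numpy as np
from scipy.optimize import least_squares

C = 299792458.0  # signal propagation speed in m/s

def locate(anchors, toas, x0):
    """Minimize sum((||a_i - x|| - c*t_i)^2) over all stacked
    (anchor, TOA) pairs with iterative nonlinear least squares."""
    def residuals(x):
        return np.linalg.norm(anchors - x, axis=1) - C * toas
    return least_squares(residuals, x0).x

# Multi-spot idea: M spots contribute M sets of such equations instead of
# one, so stacking all pairs tightens the estimate (and lowers the CRLB).
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([40.0, 60.0])
toas = np.linalg.norm(anchors - true_pos, axis=1) / C  # noise-free demo
print(locate(anchors, toas, x0=np.array([50.0, 50.0])))  # ~[40. 60.]
```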
(This article belongs to the Special Issue Indoor and Outdoor Sensor Networks for Positioning and Localization)
Figure 1. The traditional location technique using one-spot TOA measurements.
Figure 2. The proposed location technique using multi-spot TOA measurements.
Figure 3. The topology diagram of the RN distribution.
Figure 4. Comparison between the traditional method and the proposed method under different TOA noises.
Figure 5. Comparison between the traditional method and the proposed method under different Ms.
Figure 6. Comparison of the proposed methods under different TOA noises.
Figure 7. Comparison of the proposed methods under different Ms.
Figure 8. Comparison of the proposed iterative methods with different initial values.
21 pages, 2194 KiB  
Article
Balance Impairments in People with Early-Stage Multiple Sclerosis: Boosting the Integration of Instrumented Assessment in Clinical Practice
by Ilaria Carpinella, Denise Anastasi, Elisa Gervasoni, Rachele Di Giovanni, Andrea Tacchino, Giampaolo Brichetto, Paolo Confalonieri, Marco Rovaris, Claudio Solaro, Maurizio Ferrarin and Davide Cattaneo
Sensors 2022, 22(23), 9558; https://doi.org/10.3390/s22239558 - 6 Dec 2022
Cited by 13 | Viewed by 3806
Abstract
The balance of people with multiple sclerosis (PwMS) is commonly assessed during neurological examinations through clinical Romberg and tandem gait tests that are often not sensitive enough to unravel subtle deficits in early-stage PwMS. Inertial sensors (IMUs) could overcome this drawback. Nevertheless, IMUs are not yet fully integrated into clinical practice due to issues including the difficulty to understand/interpret the big number of parameters provided and the lack of cut-off values to identify possible abnormalities. In an attempt to overcome these limitations, an instrumented modified Romberg test (ImRomberg: standing on foam with eyes closed while wearing an IMU on the trunk) was administered to 81 early-stage PwMS and 38 healthy subjects (HS). To facilitate clinical interpretation, 21 IMU-based parameters were computed and reduced through principal component analysis into two components, sway complexity and sway intensity, descriptive of independent aspects of balance, presenting a clear clinical meaning and significant correlations with at least one clinical scale. Compared to HS, early-stage PwMS showed a 228% reduction in sway complexity and a 63% increase in sway intensity, indicating, respectively, a less automatic (more conscious) balance control and larger and faster trunk movements during upright posture. Cut-off values were derived to identify the presence of balance abnormalities and whether these abnormalities are clinically meaningful. By applying these thresholds and integrating the ImRomberg test with the clinical tandem gait test, balance impairments were identified in 58% of PwMS versus the 17% detected by traditional Romberg and tandem gait tests. Thanks to its higher sensitivity, the proposed approach would allow early-stage PwMS who could benefit from preventive rehabilitation interventions aimed at slowing MS-related functional decline to be identified directly during neurological examinations, with minimal modifications to the tests commonly performed.
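The two-component reduction described above is standard principal component analysis on standardized parameters. A minimal sketch with scikit-learn, using random numbers as stand-ins for the real 21 IMU sway parameters (dimensions mirror the study's 81 PwMS + 38 HS):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(119, 21))        # subjects x IMU parameters (stand-in data)

Z = StandardScaler().fit_transform(X) # z-score each parameter before PCA
pca = PCA(n_components=2).fit(Z)      # two latent components, as in the study
scores = pca.transform(Z)             # per-subject component scores
print(pca.explained_variance_ratio_)  # variance captured by each component
# In the paper, factor loadings > 0.60 decide which parameters feed each component.
```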
Figure 1. Result of the principal component analysis. Eight wearable-sensor-based variables descriptive of trunk sway during the instrumented modified Romberg test (reported on the left) were reduced to two “latent variables” (principal components) called “sway complexity” and “sway intensity”. Arrows report significant factor loadings (>0.60). The percentage of variance explained by each component and the total variance is given on the right.
Figure 2. Scatterplot reporting the sway complexity and sway intensity scores extracted from the ImRomberg in each participant. HS: healthy subjects; PwMS: people with multiple sclerosis; N: normal; A NCS: abnormal, not clinically significant; A CS: abnormal, clinically significant.
Figure 3. (a) Percentage of people with MS (PwMS) showing an abnormal ImRomberg test and abnormal clinical scores on the modified Romberg (mRomberg) and Romberg tests. (b) Percentage of PwMS with an abnormal tandem gait test (light violet) and with a normal tandem but abnormal clinical Romberg test (dark violet). (c) Percentage of PwMS with an abnormal tandem gait test (light violet) and with a normal tandem but abnormal ImRomberg (light and dark magenta). The ImRomberg was considered abnormal in case of a sway complexity and/or sway intensity score outside normative cut-offs (two-headed dashed arrow). Abnormal NCS: abnormal, not clinically significant (sway complexity and/or sway intensity outside normative cut-offs but inside clinically significant cut-offs); abnormal CS: abnormal, clinically significant (sway complexity and/or sway intensity outside clinically significant cut-offs).
Figure 4. Spearman’s correlation coefficients between the ImRomberg components (sway complexity and sway intensity) and clinical measures in PwMS. *** p < 0.001, ** p < 0.01; * p < 0.05; † p < 0.10. TUG: timed up and go; FABs: Fullerton advanced balance scale—short; Nr. CA Vest. Tests: number of clinically abnormal vestibular tests (i.e., standing on foam with eyes closed, turning 360° left and right, walking with head turns); T25FWT: timed 25 foot walk test; MSWS-12: twelve-item multiple sclerosis walking scale; EDSS: expanded disability status scale.
Figure A1. Correlation matrix among the IMU-based parameters. Pearson’s correlation coefficients ≥ 0.90 are reported in bold. AP: anteroposterior; ML: mediolateral; SwAmp: sway amplitude; SwRange: sway range; SwArea: 95% confidence ellipse area; SwVel: sway velocity; SwPath: sway path; nJerk: normalized jerk; Pwr: total spectral power; SaEn: sample entropy.
16 pages, 3121 KiB  
Article
Validity and Reliability of Wearable Motion Sensors for Clinical Assessment of Shoulder Function in Brachial Plexus Birth Injury
by Helena Grip, Anna Källströmer and Fredrik Öhberg
Sensors 2022, 22(23), 9557; https://doi.org/10.3390/s22239557 - 6 Dec 2022
Cited by 4 | Viewed by 2813
Abstract
The modified Mallet scale (MMS) is commonly used to grade shoulder function in brachial plexus birth injury (BPBI) but has limited sensitivity and cannot grade scapulothoracic and glenohumeral mobility. This study aims to evaluate whether the addition of a wearable inertial movement unit (IMU) system could improve clinical assessment based on MMS. The system validity was analyzed with simultaneous measurements with the IMU system and an optical camera system in three asymptomatic individuals. Test–retest and interrater reliability were analyzed in nine asymptomatic individuals and six BPBI patients. IMUs were placed on the upper arm, forearm, scapula, and thorax. Peak angles, range of motion, and average joint angular speed in the shoulder, scapulothoracic, glenohumeral, and elbow joints were analyzed during mobility assessments and MMS tasks. In the validity tests, clusters of reflective markers were placed on the sensors. The validity was high, with an error standard deviation below 3.6°. Intraclass correlation coefficients showed that 90.3% of the 69 outcome scores showed good-to-excellent test–retest reliability, and 41% of the scores gave significant differences between BPBI patients and controls with good-to-excellent test–retest reliability. The interrater reliability was moderate to excellent, implying that standardization is important if the patient is followed up longitudinally.
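The good-to-excellent judgments above follow the usual ICC thresholds. For reference, here is a compact ICC(2,1) (two-way random effects, absolute agreement, single measurement) on a subjects-by-sessions matrix; the abstract does not spell out which ICC form the authors used, so treat this as one common choice:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y has one row per subject and one column per session/rater."""
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Example: two test-retest sessions for five subjects.
Y = np.array([[10.1, 10.4], [12.0, 11.8], [9.5, 9.9], [11.2, 11.0], [8.7, 9.0]])
print(icc_2_1(Y))
```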
(This article belongs to the Special Issue Wearable or Markerless Sensors for Gait and Movement Analysis)
Figure 1. (A–C) Illustration of the IMU placements used in Part I (validity) and Part II (reliability). The reflective markers used in Part II were placed either on an orthoplastic shell with the sensor in the center of the shell (upper and lower arms, (A)) or directly on the sensor’s front and sides (scapula (B) and thorax (C)). The local coordinate systems, marked with white arrows (B,C), were all defined so that after calibration/sensor alignment, they were oriented in the same way.
Figure 2. Examples of data from one test person during four of the tasks included in the modified Mallet score: MMS3 hand to neck (A), MMS4 hand to spine (B), MMS5 hand to mouth (C) and MMS6 internal rotation (D), as simultaneously measured with the reference system (red line) and the IMU system (thick blue line). The segment helical angle was calculated for the scapula (upper row), upper arm (middle row) and forearm (bottom row).
Figure 3. Bland–Altman plots illustrating the agreement of the IMU system with the reference system. The helical angle was calculated for each segment: scapula (A), upper arm (B) and forearm (C). Angular errors were calculated for tasks involving large shoulder movements in one plane (upper row: shoulder flexion−extension, MMS1 global abduction and MMS2 global external rotation), for elbow mobility tasks (middle row: elbow flexion−extension and forearm pronation–supination), and for tasks involving movement in both elbow and shoulder (bottom row: MMS3 hand to neck, MMS4 hand to spine, MMS5 hand to mouth and MMS6 internal rotation). The mean error is illustrated with a blue line, and the error 95% confidence intervals are marked with red lines.
Figure 4. Outcome measures from the assessment of (A) shoulder and (B,C) elbow mobility. Group means and standard errors of the mean are illustrated (BPBI: red left bar, control: blue right bar). ICCs from test–retest reliability and p-values from t-tests are shown in the upper right corner of each subplot and are highlighted green for outcome scores with both good test–retest reliability (ICC > 0.75) and a significant group difference (p < 0.05).
Figure 5. Outcome measures from the assessment of two tasks from the modified Mallet scale (MMS): (A) global abduction and (B) global external rotation. Group means and standard errors of the mean are illustrated (BPBI: red left bar, control: blue right bar). ICCs from test–retest reliability and p-values from t-tests are shown in the upper right corner of each subplot and are highlighted green for outcome scores with both good test–retest reliability (ICC > 0.75) and a significant group difference (p < 0.05).
Figure 6. Outcome measures from the assessment of two tasks from the modified Mallet scale (MMS): (A) hand to neck and (B) hand to spine. Group means and standard errors of the mean are illustrated (BPBI: red left bar, control: blue right bar). ICCs from test–retest reliability and p-values from t-tests are shown in the upper right corner of each subplot and are highlighted green for outcome scores with both good test–retest reliability (ICC > 0.75) and a significant group difference (p < 0.05).
Figure 7. Outcome measures from the assessment of two tasks from the modified Mallet scale (MMS): (A) hand to mouth and (B) internal rotation. Group means and standard errors of the mean are illustrated (BPBI: red left bar, control: blue right bar). ICCs from test–retest reliability and p-values from t-tests are shown in the upper right corner of each subplot and are highlighted green for outcome scores with both good test–retest reliability (ICC > 0.75) and a significant group difference (p < 0.05).
13 pages, 3256 KiB  
Article
Temporal Dashboard Gaze Variance (TDGV) Changes for Measuring Cognitive Distraction While Driving
by Cyril Marx, Elem Güzel Kalayci and Peter Moertl
Sensors 2022, 22(23), 9556; https://doi.org/10.3390/s22239556 - 6 Dec 2022
Cited by 3 | Viewed by 2169
Abstract
A difficult challenge for today’s driver monitoring systems is the detection of cognitive distraction. The present research presents the development of a theory-driven approach for cognitive distraction detection during manual driving based on temporal control theories. It is based solely on changes in the temporal variance of driving-relevant gaze behavior, such as gazes onto the dashboard (TDGV). The detection method was validated in a field study and a simulator study in which participants drove alternately with and without a secondary task inducing external cognitive distraction (an auditory continuous performance task). The general accuracy of the distraction detection method varies between 68% and 81% based on the quality of an individual prerecorded baseline measurement. As a theory-driven system, it represents not only a step towards a sophisticated cognitive distraction detection method, but also explains that changes in temporal dashboard gaze variance (TDGV) are a useful behavioral indicator for detecting cognitive distraction.
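One plausible reading of the TDGV metric: collect the onsets of dashboard gazes, track the variance of the inter-gaze intervals in a sliding window, and compare it against a per-driver baseline. The sketch below encodes that reading only; the published definition may differ, so the window length, sampling rate, and thresholding are assumptions:

```python
import numpy as np

def tdgv(on_dashboard, fs=60.0, window_s=30.0):
    """Temporal dashboard gaze variance: variance of the intervals between
    successive dashboard-gaze onsets inside a sliding window."""
    # Samples where gaze enters the dashboard AOI, converted to seconds.
    onsets = np.flatnonzero(np.diff(on_dashboard.astype(int)) == 1) / fs
    scores = []
    for t in np.arange(window_s, onsets[-1], 1.0):  # one score per second
        w = onsets[(onsets > t - window_s) & (onsets <= t)]
        scores.append(np.var(np.diff(w)) if w.size > 2 else np.nan)
    return np.asarray(scores)

# Detection then flags segments whose TDGV departs from the driver's
# prerecorded undistracted baseline; baseline quality drives accuracy.
```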
(This article belongs to the Special Issue Robust Multimodal Sensing for Automated Driving Systems)
Figure 1. Areas of interest (AOI) in the experimental setup. A: dashboard; B: windshield; C: rear mirror; D: left mirror; E: right mirror; F: SuRT tablet. Only the dashboard AOI is used in the analysis. The others were used for better evaluation of data validity.
Figure 2. Example of the Q3A-MEM task, where the participant has to react to an A each time it succeeds a Q with two other random letters in between. Participants would have to react to each highlighted A in this example case.
Figure 3. Schematic example task of the SuRT. Participants had to react to the larger circle.
Figure 4. Experimental segments during study 1. Each segment took approximately one minute.
Figure 5. Average temporal dashboard gaze variance over all participants, sorted by experimental segments. The error bars represent the standard deviation across different participants.
Figure 6. Accuracy, sensitivity, and specificity of cognitive distraction detection side by side in study 1.
Figure 7. Experimental conditions during study 2. Each segment took approximately one minute.
Figure 8. Time-series graph of the detection approach with distraction detection for one example participant. Red, bold marking shows where the metric detected a distraction. The slight variation in the length of the segments results from variations in setting the segment-defining markers during the experiment.
Figure 9. Differences between groups with lower and higher baseline quality in the distraction detection performance parameters.
16 pages, 5444 KiB  
Article
Compact Wideband Double-Slot Microstrip Feed Engraved TEM Horn Strip Antennas on a Multilayer Substrate Board for in Bed Resting Body Positions Determination Based on Artificial Intelligence
by Jiwan Ghimire, Ji-Hoon Kim and Dong-You Choi
Sensors 2022, 22(23), 9555; https://doi.org/10.3390/s22239555 - 6 Dec 2022
Viewed by 2227
Abstract
In this paper, a horn-shaped strip antenna, exponentially tapered and carved on a multilayer dielectric substrate, is proposed for an indoor body position tracking system. The performance of the proposed antenna was verified by testing it for tracking the state of an indoor resting body position. Among the different feeding techniques, the uniplanar T-junction power divider approach is used. The proposed antenna is characterized by its compact size and 3D shape, along with a comparison of the return loss, radiation pattern, and realized gain. The suggested antenna has an 88.88% fractional bandwidth, with a return-loss band between 6 and 15.6 GHz and a maximum gain of 9.46 dBi in the 9.5 GHz region. Within the intended band, the radiation pattern had excellent directivity characteristics. The proposed antenna was connected to an NVA-R661 module of Xethru Inc. for sleeping body position tracking. The performance of the antenna is measured through microwave imaging of the state of the resting body in various sleeping positions on the bed, using a Recurrent Neural Network (RNN) for classification. The predicted outcomes clearly define the antenna’s performance and could be used for sensing and prediction purposes.
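The exponential taper named in the abstract has the standard TEM-horn form: the conductor separation grows exponentially from the feed gap to the aperture over the horn length. The symbols below (w0, wL, L, R) are generic, not dimensions taken from the paper:

```latex
% Exponential taper profile of the horn strips (opening rate R):
w(x) = w_0\, e^{R x},
\qquad
R = \frac{1}{L}\,\ln\!\frac{w_L}{w_0}
```

This gradual impedance transition from the feed toward free space is the usual mechanism behind such wide matched bands.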
(This article belongs to the Section Radar Sensors)
Figure 1. Horn antenna engraved on a multilayered dielectric substrate: 3D perspective view.
Figure 2. Structure of the antenna: (a) Side view of the antenna with exponentially shaped adhesive foil tape on the dielectric substrate layers; (b) Bottom view of the antenna showing the feed line, slots, drill holes, and copper strips; (c) Front and back view of the bottom substrate layer of thickness 0.8 mm; (d) Top view of the fabricated antenna.
Figure 3. Simulated and measured results of the proposed antenna: (a) Return loss; (b) Realized gain; (c) Simulated return loss and realized gain of the bottom substrate feed layer; (d) Radiation efficiency of the proposed antenna and the bottom substrate feed layer.
Figure 4. Measured far-field radiation patterns in the E-plane and H-plane: (a) 5 GHz; (b) 6.5 GHz; (c) 7.5 GHz; (d) 8 GHz; (e) 9.5 GHz; (f) 10.5 GHz; (g) 11.5 GHz; (h) 12.5 GHz; (i) 13.5 GHz; (j) 14.5 GHz.
Figure 5. Far-field radiation beam components: (a) Electric field distribution plot at 6.8 GHz; (b) Front-to-back ratio and beam width; (c) Anechoic chamber measurement setup; (d) 3D radiation plot at 6.8 GHz; (e) Measured phase response of S21.
Figure 6. Experimental setup and sleep positions and states: (a) Arrangement with antennas, RF cables, and PC, with the bed empty; (b) UWB radar module with a pair of antennas; (c) Right side of the bed; (d) Middle of the bed.
Figure 7. Time- and frequency-domain pulse shape of the IR-UWB radar for PGselect = 5: (a) Transmitted pulse shape in the time domain; (b) Transmitted impulse in the frequency domain.
Figure 8. The received pulse signal level with and without correlation, obtained from 6.8 GHz transmission: (a) Without a sleeping body; (b) With a sleeping body.
Figure 9. Scanned 2D holographic slices at different sleeping body positions in bed: (a) Right position; (b) Middle position; (c) Left position; (d) Without a presence in the bed.
Figure 10. Training progress over 25 epochs: (a) Classification accuracy; (b) Classification loss.
Figure 11. Classification accuracy for each sleeping state, represented by a confusion matrix for the Bi-LSTM layer.
17 pages, 4462 KiB  
Article
OptiFit: Computer-Vision-Based Smartphone Application to Measure the Foot from Images and 3D Scans
by Riyad Bin Rafiq, Kazi Miftahul Hoque, Muhammad Ashad Kabir, Sayed Ahmed and Craig Laird
Sensors 2022, 22(23), 9554; https://doi.org/10.3390/s22239554 - 6 Dec 2022
Cited by 3 | Viewed by 5268
Abstract
The foot is a vital organ, as it stabilizes the impact forces between the human skeletal system and the ground. Hence, precise foot dimensions are essential not only for custom footwear design but also for the clinical treatment of foot health. Most existing research on measuring foot dimensions depends on a heavy setup environment, which is costly and impractical for daily use. In addition, several smartphone applications are available online, but they are not suitable for measuring the exact foot shape for custom footwear, either in clinical practice or public use. In this study, we designed and implemented a computer-vision-based smartphone application, OptiFit, that automatically measures the four essential dimensions of a human foot (length, width, arch height, and instep girth) from images and 3D scans. We present an instep girth measurement algorithm, and we used a pixel-per-metric algorithm for measurement; these algorithms were integrated into the application accordingly. Afterwards, we evaluated our application using 19 medical-grade silicone foot models (12 male and 7 female) from different age groups. Our experimental evaluation shows that OptiFit could measure the length, width, arch height, and instep girth with an accuracy of 95.23%, 96.54%, 89.14%, and 99.52%, respectively. A two-tailed paired t-test was conducted, and only the instep girth dimension showed a significant discrepancy between the manual measurement (MM) and the application-based measurement (AM). We developed a linear regression model to adjust for the error. Further, we performed a comparative analysis demonstrating that there were no significant errors between MM and AM and that the application offers satisfactory performance as a foot-measuring application. Unlike other applications, the iOS application we developed, OptiFit, fulfils the requirements to automatically measure exact foot dimensions for individually fitted footwear. Therefore, the application can facilitate proper foot measurement and raise awareness to prevent foot-related problems caused by inappropriate footwear.
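The pixel-per-metric step the abstract refers to reduces to fixing a pixels-to-millimetres scale from a reference object of known size and applying it to the foot's pixel extents. A minimal sketch under that reading (all numbers are hypothetical, and this is not the authors' implementation):

```python
def pixels_per_metric(ref_width_px: float, ref_width_mm: float) -> float:
    """Scale factor derived from a reference object of known physical width."""
    return ref_width_px / ref_width_mm

# Hypothetical values: an A4 sheet (210 mm wide) spans 840 px in the image,
# and the segmented foot spans 1020 px in length and 380 px in width.
scale = pixels_per_metric(840, 210.0)     # 4 px per mm
print(f"length = {1020 / scale:.1f} mm")  # 255.0 mm
print(f"width  = {380 / scale:.1f} mm")   # 95.0 mm
```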
(This article belongs to the Special Issue E-health System Based on Sensors and Artificial Intelligence)
Figures:
Figure 1. Foot dimensions: (a) top view—foot length and width are marked as 1 and 2, respectively; (b) side view—arch height and instep girth are marked as 3 and 4, respectively.
Figure 2. Two sequential stages of Graham's scan algorithm.
Figure 3. Application architecture.
Figure 4. Application settings: (a) main screen; (b) adding reference object; (c) tutorial screen.
Figure 5. Activity diagram for the measurements of length, width, and arch height.
Figure 6. Activity diagram for the instep girth measurement.
Figure 7. Instep girth measurement from 3D mesh.
Figure 8. Linear regression between AM and MM for the instep girth of the training dataset.
Figure 9. Accuracy and relative error for different age groups: (a) accuracy of four foot measurements; (b) mean and its standard error (error bar) of the relative difference (%) between MM and AM.
Figure 10. Accuracy and relative error for the instep girth measurement of different age groups after applying linear regression: (a) accuracy; (b) mean and its standard error (error bar) of the relative difference (%) between MM and AM.
23 pages, 10432 KiB  
Article
Design and Modeling of a Fully Integrated Microring-Based Photonic Sensing System for Liquid Refractometry
by Grigory Voronkov, Aida Zakoyan, Vladislav Ivanov, Dmitry Iraev, Ivan Stepanov, Roman Yuldashev, Elizaveta Grakhova, Vladimir Lyubopytov, Oleg Morozov and Ruslan Kutluyarov
Sensors 2022, 22(23), 9553; https://doi.org/10.3390/s22239553 - 6 Dec 2022
Cited by 12 | Viewed by 3057
Abstract
The design of a refractometric sensing system for liquid analysis, with the sensor and the scheme for its intensity interrogation combined on a single photonic integrated circuit (PIC), is proposed. A racetrack microring resonator with a channel for the analyzed liquid formed on top is used as the sensor, and another microring resonator with a lower Q-factor is utilized to detect the change in the resonant wavelength of the sensor. As the measurement result, the optical power at its drop port is detected and compared with the sum of the powers at the through and drop ports. Simulations showed the possibility of registering a change in the analyte refractive index with a sensitivity of 110 nm per refractive index unit. The proposed scheme was analyzed with a broadband source, as well as with a source based on an optoelectronic oscillator (OEO) using an optical phase modulator. The second case showed the fundamental possibility of implementing an intensity interrogator on a PIC using a typical external single-mode laser as the source. Meanwhile, additional simulations demonstrated an increased system sensitivity compared with the conventional interrogation scheme using a broadband or tunable light source. The proposed approach provides the opportunity to increase the integration level of a sensing device, significantly reducing its cost, power consumption, and dimensions.
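Given the reported sensitivity of 110 nm/RIU, the expected resonance shift for a small analyte change is a one-line calculation; the sketch below also shows how a fixed-wavelength intensity read-out would register that shift for an assumed Lorentzian resonance (the linewidth is a placeholder, not a value from the paper):

```python
S = 110.0        # sensitivity in nm per refractive index unit (from the abstract)
delta_n = 1e-3   # assumed change in analyte refractive index
delta_lambda = S * delta_n
print(f"resonance shift: {delta_lambda:.3f} nm")  # 0.110 nm

# Intensity interrogation sketch: a Lorentzian transmission dip sampled at a
# fixed probe wavelength; the 0.2 nm FWHM is an assumed placeholder.
FWHM = 0.2

def dip_transmission(offset_nm: float) -> float:
    return 1.0 - 1.0 / (1.0 + (2.0 * offset_nm / FWHM) ** 2)

print(f"probe transmission: {dip_transmission(0.0):.3f} -> "
      f"{dip_transmission(delta_lambda):.3f}")
```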
(This article belongs to the Special Issue Fiber Bragg Grating Sensors: Recent Advances and Future Perspectives)
Figures:
Figure 1. 3D design draft of the proposed integrated MRR-based sensing system with a broadband source (not to scale).
Figure 2. Principle of MRR-based interrogation by intensity.
Figure 3. An example of improper overlapping of the interrogator and sensor transmission spectra, which can lead to destructive crosstalk between signals from two sensors' resonant peaks.
Figure 4. 3D design draft of the proposed integrated sensing system with an OEO (not to scale; microwave elements are shown tentatively).
Figure 5. Transmission spectra of the through and drop ports of the sensor MRR.
Figure 6. (a) The dependence of the sensor MRR transmission spectrum at the drop port on the analyte RI; (b) the dependence of the analyzed resonant wavelength of the sensor on the analyte RI.
Figure 7. Transmission spectra at the through and drop ports of the interrogator MRR.
Figure 8. (a) Transmission spectra at the drop ports of the sensor and interrogator MRRs; (b) wavelength dependence of the system transmission coefficient for the different analyte RIs.
Figure 9. (a) Modeled wavelength dependence of the photodiode responsivity; (b) modeled dependence of the relative power at the system output on the analyte RI in the scheme with a broadband source.
Figure 10. Optical spectrum at the through port (a) and the drop port (b) of the sensor in the scheme with an OEO. Inset: the magnified spectrum of the optical subcarrier at the drop port.
Figure 11. Modeled dependence of the relative power at the system output on the analyte RI in the scheme using an OEO as a source.
Figure 12. The waveform of the microwave signal at the MZM input.
Figure 13. FBG transmission and reflection spectra at the resonant wavelength λB = 1537.2 nm.
Figure 14. The simulation schemes of FBG interrogation: (a) by analyzing the transmitted light; (b) by analyzing the reflected light. Numbers 1–3 denote the order of the circulator's ports.
Figure 15. The combined transmission spectra at the interrogator drop port depending on the Bragg wavelength of the FBG: (a) for the scheme that analyzes the transmitted light; (b) for the scheme that analyzes the reflected light.
Figure 16. Relative power at the interrogator drop port versus the resonant wavelength of the FBG: (a) for the scheme that analyzes the transmitted light; (b) for the scheme that analyzes the reflected light.
Figure 17. (a) PS-FBG interrogation scheme with an OEO; (b) PS-FBG transmission and reflection signal spectra at the resonant wavelength λB = 1537.2 nm. Numbers 1–3 denote the order of the circulator's ports.
Figure 18. Relative power at the interrogator drop port versus the resonant wavelength of the PS-FBG in the scheme with an OEO.
Figure 19. Relative power at the interrogator drop port versus the resonant wavelength of the PS-FBG in the scheme with a broadband source.
Figure 20. (a) Sensor's transmission spectra for sensing an RI increment of 0.001 (n1 = 1.311, n2 = 1.312) in the case of a fabrication deviation of the waveguide width of +8 nm; (b) sensor's transmission spectra for the same analyte RI (n = 1.311) for fabrication deviations of the waveguide width of ±8 nm.
Figure 21. Interrogator's transmission spectra for fabrication deviations of the waveguide width of ±8 nm.
26 pages, 868 KiB  
Article
Bounded Model Checking for Metric Temporal Logic Properties of Timed Automata with Digital Clocks
by Agnieszka M. Zbrzezny and Andrzej Zbrzezny
Sensors 2022, 22(23), 9552; https://doi.org/10.3390/s22239552 - 6 Dec 2022
Cited by 2 | Viewed by 2170
Abstract
Metric temporal logic (MTL) is a popular real-time extension of linear temporal logic (LTL). This paper presents a simple new SAT-based bounded model-checking (SAT-BMC) method for MTL interpreted over discrete infinite timed models generated by discrete timed automata with digital clocks. We show a new translation of the existential part of MTL to the existential part of LTL over an extended set of atomic propositions and present its details. We compare the new method's advantages with those of the older method based on a translation to hard reset LTL (HLTL). Our method does not need new clocks or new transitions. It uses only one path and requires a smaller number of propositional variables and clauses than the HLTL-based method. We also implemented the new method and, as a case study, applied the technique to analyze several systems. We support the theoretical description with experimental results demonstrating the method's efficiency.
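To make the bounded, discrete-time MTL semantics concrete, the toy sketch below evaluates an "eventually within [a, b]" formula over a finite timed run of the kind a SAT-BMC unrolling explores; it is an illustration of the semantics only, not the paper's translation:

```python
from typing import Sequence, Tuple

State = dict  # a state maps proposition names to truth values

def eventually_within(run: Sequence[Tuple[int, State]],
                      a: int, b: int, prop: str) -> bool:
    """F_[a,b] prop over a finite run of (digital-clock timestamp, state) pairs,
    as produced by unrolling a discrete timed automaton to some depth k."""
    return any(a <= t <= b and state.get(prop, False) for t, state in run)

# Toy run for a light switch that turns on at time 3.
run = [(0, {"on": False}), (1, {"on": False}),
       (3, {"on": True}), (5, {"on": True})]
print(eventually_within(run, 2, 4, "on"))  # True
print(eventually_within(run, 0, 1, "on"))  # False
```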
(This article belongs to the Section Sensor Networks)
Figures:
Figure 1. The simple light switch.
Figure 2. An example of the path.
Figure 3. An example of the k-path, which is a loop.
Figure 4. The TDPP system.
Figure 5. TDPP with n philosophers: φ1 and φ2.
Figure 6. The TDPP with n philosophers: φ3.
Figure 7. The TDPP with n philosophers, clauses and variables: φ1 and φ3.
Figure 8. The TGPP system.
Figure 9. φ1 and φ2: TGPP with n nodes.
Figure 10. Results for φ3: TGPP with n nodes; number of variables and clauses for φ3.
Figure 11. The TTCS system.
Figure 12. φ1: TTCS with n trains.
Figure 13. φ3: TTCS with n trains.
Figure 14. Results for φ2; number of variables and clauses for φ2 and TTCS with n trains.
Figure 15. TDPP: pairs Wilcoxon plots for total time usage and total memory usage for φ1, φ2, and φ3.
27 pages, 5361 KiB  
Article
Biometric-Based Key Generation and User Authentication Using Acoustic Characteristics of the Outer Ear and a Network of Correlation Neurons
by Alexey Sulavko
Sensors 2022, 22(23), 9551; https://doi.org/10.3390/s22239551 - 6 Dec 2022
Cited by 6 | Viewed by 2348
Abstract
Trustworthy AI applications such as biometric authentication must be implemented in a secure manner so that a malefactor is not able to take advantage of the knowledge and use it to make decisions. The goal of the present work is to increase the reliability of biometric-based key generation, which is used for remote authentication with protection of the biometric templates. Ear canal echograms were used as biometric images. Multilayer convolutional neural networks of the autoencoder type were used to extract features from the echograms. A new class of neurons (correlation neurons) that analyzes correlations between features instead of feature values is proposed. A neuro-extractor model was developed to associate a feature vector with a cryptographic key or user password. An open data set of ear canal echograms was used to test the performance of the proposed model. The following indicators were achieved: EER = 0.0238 (FRR = 0.093, FAR < 0.001), with a key length of 8192 bits. The proposed model is superior to known analogues in terms of key length and the probability of erroneous decisions. The ear canal parameters are hidden from direct observation and photography, which creates additional difficulties for the synthesis of adversarial examples.
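The reliability indicators quoted (EER, FRR, FAR) come from sweeping a decision threshold over genuine and impostor comparison scores; a generic sketch of that computation on toy data (not the paper's scores) is given below:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """Accept when score >= threshold (higher score means more similar)."""
    frr = np.mean(np.asarray(genuine) < threshold)    # genuine attempts rejected
    far = np.mean(np.asarray(impostor) >= threshold)  # impostor attempts accepted
    return far, frr

def equal_error_rate(genuine, impostor):
    """Threshold sweep; the EER is where FAR and FRR (approximately) meet."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far, frr = min((far_frr(genuine, impostor, t) for t in thresholds),
                   key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 1000)   # toy similarity scores
impostor = rng.normal(0.4, 0.15, 1000)
print(f"EER ~ {equal_error_rate(genuine, impostor):.4f}")
```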
(This article belongs to the Special Issue Biometrics Recognition Based on Sensor Technology)
Figures:
Figure 1. Block diagram of biometric-based key binding.
Figure 2. A circle on a plane at different values of the power coefficient p.
Figure 3. The direction of space compression of two features: (a) with a positive significant correlation between features (the distance "a" is greater than the distance "b", since the feature space is not "flat" but curved due to the correlation); (b) with independent features (distance "a" is greater than "b"); (c) with different correlations.
Figure 4. Probability densities of Measure (5) values (without square root) at p = 1, I ≈ 1.75 bits: (a) for all classes where 1 > Cj,t > 0.95, n' = 1; (b) for all classes where 1 > Cj,t > 0.95, n' = 5; (c) for the "Genuine" class where 1 > Cj,t > 0.95 and the "Impostors" class where |Cj,t| < 0.3, n' = 5; (d) for the "Genuine" class where 1 > Cj,t > 0.95 and the "Impostors" class where −1 < Cj,t < −0.95, n' = 5; (e) for all classes where |Cj,t| < 0.3, n' = 5; (f) for all classes where −1 < Cj,t < −0.95, n' = 5.
Figure 5. Spaces of three features (meta-features) and plots of the probability densities of their values: (a) the initial space of positively correlated features; (b) the space of Bayes–Minkowski meta-features derived from the initial feature space by applying mapping (12) with p = 1.
Figure 6. Initial features (left) and meta-features (right) generated by mapping (12) with p = 0.
Figure 7. Change in image identification accuracy of the "naive" Bayes classifier for various average informativeness of the initial features, when applying the mapping in (12): (a) I ≈ 1 bit, n = 10 and n' = 45; (b) I ≈ 0.5 bits, n = 30 and n' = 435; (c) I ≈ 0.15 bits, n = 30 and n' = 435; and when applying the mapping in (11): (d) I ≈ 1 bit, n = 10 and n' = 45; (e) I ≈ 0.5 bits, n = 30 and n' = 435; (f) I ≈ 0.15 bits, n = 30 and n' = 435.
Figure 8. Average informativeness of meta-features I' generated by the mapping in (12).
Figure 9. Probability densities of measured values in (16) after the mapping in (12) at p = 1, I ≈ 1.75 bits: (a) for all classes where 1 > Cj,t > 0.95, n' = 10; (b) for all classes where −1 < Cj,t < −0.95, n' = 10; (c) for the "Genuine" class where 1 > Cj,t > 0.95 and the "Impostors" class where |Cj,t| < 0.3, n' = 10; (d) for the "Genuine" class where −1 < Cj,t < −0.95 and the "Impostors" class where |Cj,t| < 0.3, n' = 10; (e) for the classes where 1 > Cj,t > 0.95, n' = 10 ("Genuine" class is located on the right); (f) for the classes where −1 < Cj,t < −0.95, n' = 10 ("Genuine" class is located on the right).
Figure 10. Scheme of the algorithm for synthesis and training of the correlation neuron.
Figure 11. Average spectrum: (a) calculation process; (b) extracted from a voice image (from the TIMIT database) based on a rectangular window; (c) extracted from the acoustic image of the ear based on the Hamming window; (d) extracted from the acoustic image of the ear based on a rectangular window; (e) extracted from an acoustic image of an ear based on a Hamming window and reconstructed by an autoencoder.
Figure 12. Results of user verification using two ears at C+ = 0.5, C− = 0.5, AUCMAX = 0.3, KG = 6, N = 4096, η = 5.
20 pages, 11330 KiB  
Article
Development of Impurity-Detection System for Tracked Rice Combine Harvester Based on DEM and Mask R-CNN
by Zhuohuai Guan, Haitong Li, Xu Chen, Senlin Mu, Tao Jiang, Min Zhang and Chongyou Wu
Sensors 2022, 22(23), 9550; https://doi.org/10.3390/s22239550 - 6 Dec 2022
Cited by 9 | Viewed by 2927
Abstract
The impurity rate is one of the key performance indicators of a rice combine harvester and is also the main basis for parameter regulation. At present, the impurity rate of tracked rice combine harvesters cannot be monitored in real time. Lacking a basis for parameter regulation, the working parameters are set according to the operator's experience and are not adjusted during operation, so harvest quality fluctuates greatly in complex environments. In this paper, an impurity-detection system, including a grain-sampling device and a machine vision system, was developed. The sampling device structure and impurity extraction algorithm were studied to enhance impurity identification accuracy. To reduce the effect of impurity occlusion on visual recognition, an infusion-type sampling device was designed. The form of the sampling device's light source was determined based on brightness histogram analysis of images captured under different light irradiations. The effect of sampling device structures on impurity visualization, grain distribution, and mass flow rate was investigated by the discrete element method (DEM). The impurity recognition algorithm was developed based on Mask R-CNN and mainly includes an impurity feature extraction network, an ROI generation network, and a target segmentation network. The test-set experiment showed that the precision rate, recall rate, average precision, and comprehensive evaluation indicator of the impurity recognition model were 92.49%, 88.63%, 81.47%, and 90.52%, respectively. The conversion between the impurity pixel number and its actual mass was realized according to a pixel density calibration test and an impurity rate correction factor. The bench test showed that the designed system achieves a detection accuracy of 91.15–97.26% for the five varieties. The relative error between the impurity-detection system and the manual method was in the range of 5.71–11.72% under field conditions. The impurity-detection system can be applied to tracked rice combine harvesters to provide a reference for the adjustment of operating parameters.
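The pixel-to-mass conversion mentioned above reduces to a calibrated pixel density (grams per pixel) for each material plus a correction factor for impurities hidden under the grain layer; a minimal sketch with hypothetical calibration values (not the paper's numbers):

```python
def impurity_rate(grain_px, impurity_px,
                  grain_g_per_px, impurity_g_per_px, correction=1.0):
    """Impurity mass fraction from segmented pixel counts.

    The *_g_per_px values come from a pixel density calibration test;
    `correction` compensates for occluded impurities.
    """
    grain_mass = grain_px * grain_g_per_px
    impurity_mass = impurity_px * impurity_g_per_px * correction
    return impurity_mass / (grain_mass + impurity_mass)

# Hypothetical values, for illustration only.
rate = impurity_rate(grain_px=1_500_000, impurity_px=30_000,
                     grain_g_per_px=2.0e-5, impurity_g_per_px=8.0e-6,
                     correction=1.15)
print(f"impurity rate: {rate:.2%}")  # ~0.91%
```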
(This article belongs to the Section Smart Agriculture)
Figures:
Figure 1. Impurity-detection system for the tracked rice combine harvester.
Figure 2. Structure diagram of the infusion-type sampling device: (1) inlet, (2) dust baffle, (3) flow-rate-adjustment lever, (4) deflector, (5) baffle, (6) camera, (7) light source, (8) conveyor belt, (9) corrugation, (10) transparent platen, and (11) outlet.
Figure 3. Different light irradiations of the sampling device: (a) single-sided-strip LED; (b) double-sided-strip LED; (c) central-ring LED.
Figure 4. EDEM simulation model.
Figure 5. Flowchart of the rice impurity recognition algorithm based on Mask R-CNN.
Figure 6. The captured grain image and the impurity mask image: (a) fine impurities; (b) coarse impurities; (c) impurities approximating the color of rice; (d) irregularly shaped impurities.
Figure 7. Material pixel density calibration test: (a) calibration device; (b) calibration image; (c) binarized image.
Figure 8. Bench test: (a) test bench 3D model; (b) test bench physical picture; (c) material collection.
Figure 9. The grain images and the V component distribution under different light irradiations: (a) grain images captured under different light irradiations; (b) the distribution of the V component.
Figure 10. V value distribution histogram under light irradiation: (a) single-sided-strip LED; (b) double-sided-strip LED; (c) central-ring LED.
Figure 11. Impurity visualization analysis under different deflector gaps: (a) d = 7.5 mm, S = 2.73%; (b) d = 10 mm, S = 3.75%; (c) d = 12.5 mm, S = 6.92%; (d) d = 15 mm, S = 9.54%; (e) d = 17.5 mm, S = 12.10%; (f) d = 20 mm, S = 18.11%.
Figure 12. Grain distribution analysis under different deflector gaps: (a) d = 7.5 mm; (b) d = 10 mm; (c) d = 12.5 mm; (d) d = 15 mm; (e) d = 17.5 mm; (f) d = 20 mm.
Figure 13. Grain mass flow rate under different deflector gaps.
Figure 14. The variation curves of impurity occlusion rate and grain mass flow rate with the deflector gap.
Figure 15. Impurity segmentation results; masks are shown in color and confidences are also shown: (a) high impurity rate and dense rice; (b) low impurity rate and dense rice; (c) high impurity rate and sparse rice; (d) low impurity rate and sparse rice; (e) large-size impurity.
Figure 16. Impurity identification based on color space and morphology: (a) original image; (b) image equalization; (c) threshold segmentation; (d) removing interference; (e) identification result.
Figure 17. The relationship between pixel number and mass: (a) rice; (b) impurity.
35 pages, 8607 KiB  
Review
Flexible UWB and MIMO Antennas for Wireless Body Area Network: A Review
by Vikash Kumar Jhunjhunwala, Tanweer Ali, Pramod Kumar, Praveen Kumar, Pradeep Kumar, Sakshi Shrivastava and Arnav Abhijit Bhagwat
Sensors 2022, 22(23), 9549; https://doi.org/10.3390/s22239549 - 6 Dec 2022
Cited by 22 | Viewed by 5141
Abstract
In recent years, there has been a surge of interest in the field of wireless communication for designing monitoring systems that observe the activity of the human body remotely. With the use of wireless body area networks (WBAN), chronic health and physical activity may be tracked without interfering with one's routine lifestyle. This crucial real-time data transmission requires low-power, high-speed, and broad-bandwidth communication. Ultrawideband (UWB) technology has been explored for short-range, high-speed applications to cater to these demands over the last decades. The antenna is a crucial component of the WBAN system; a poorly designed antenna lowers the overall system's performance. The human body's morphology necessitates a flexible antenna. In this article, we comprehensively survey the relevant flexible materials and the qualities that make them suitable for developing flexible antennas. Further, we retrospectively investigate the design issues and the strategies employed in designing flexible UWB antennas, such as incorporating a modified ground layer, including parasitic elements, coplanar waveguides, metamaterial loading, etc. To improve isolation and channel capacity in WBAN applications, the most recent decoupling structures proven in UWB MIMO technology are also presented.
(This article belongs to the Special Issue Antennas for Wireless Sensors)
Figures:
Figure 1. The classification of wireless body-centric communication.
Figure 2. Applications of WBAN.
Figure 3. A real-time telemedicine infrastructure.
Figure 4. Wireless body area network bands.
Figure 5. Materials for flexible antennas.
Figure 6. Flexible material characteristics.
Figure 7. (a) The GAF antenna prototype; (b) |S11| of the antenna; (c) |S11| of the antenna under bending; (d) antennas under different bending scenarios; (e) |S11| curves when attached to the body; (f) radiation pattern. Reprinted (a–e) from Ref. [59].
Figure 8. (a) Fabricated UWB antenna; (b) S11 of the antenna; (c) the antenna loaded on different positions of the voxel model; (d) S11 of the loaded antenna; (e) radiation pattern. Reprinted (a–e) from Ref. [62].
Figure 9. (a) Fabricated UWB antenna; (b) S11 of the antenna on-body and off-body; (c) radiation patterns. Reprinted (a–c) from Ref. [65].
Figure 10. (a) Fabricated antenna prototype; (b) radiation patterns; (c) S11 of the antenna on-body and off-body; (d) S11 results for different bending degrees. Reprinted (a–d) from Ref. [6].
Figure 11. (a) Fabricated UWB antenna; (b) S11 of the monopole antenna; (c) gain of the monopole antenna; (d) the geometry of the AMC antenna; (e) S11 of the AMC antenna; (f) gain of the AMC antenna; (g) radiation patterns; (h) S11 of the antenna on bending. Reprinted (a–h) from Ref. [29].
Figure 12. (a) The fabricated antenna prototype; (b) |S11| curve of the antenna; (c) |S21| curves of the antenna; (d) measured radiation pattern; (e) calculated |S11| for a flexible UWB MIMO antenna under bending. Reprinted (a–e) from Ref. [19].
Figure 13. (a) The antenna layout; (b) S11 curves of the antenna; (c) isolation curve of the antenna; (d) radiation pattern. Reprinted (a–d) from Ref. [69].
Figure 14. (a) The antenna layout; (b) S11 curves of the antenna; (c) isolation curve of the antenna; (d) radiation pattern. Reprinted (a–d) from Ref. [74].
18 pages, 1498 KiB  
Article
Moving-Target Defense in Depth: Pervasive Self- and Situation-Aware VM Mobilization across Federated Clouds in Presence of Active Attacks
by Yousra Magdy, Mohamed Azab, Amal Hamada, Mohamed R. M. Rizk and Nayera Sadek
Sensors 2022, 22(23), 9548; https://doi.org/10.3390/s22239548 - 6 Dec 2022
Cited by 4 | Viewed by 3227
Abstract
Federated clouds are interconnected cooperative cloud infrastructures offering vast hosting capabilities, smooth workload migration, and enhanced reliability. However, recent devastating attacks on such clouds have shown that these features come with serious security challenges. The oblivious, heterogeneous construction, management, and policies employed in federated clouds open the door for attackers to induce conflicts that facilitate pervasive coordinated attacks. In this paper, we present a novel proactive defense that aims to increase attacker uncertainty and complicate target tracking, a critical step for successful coordinated attacks. The presented systemic approach acts as a VM management platform with an intrinsic multidimensional hierarchical attack representation model (HARM) guiding dynamic, self- and situation-aware VM live migration for moving-target defense (MtD). The proposed system achieves these goals in a resource-, energy-, and cost-efficient manner.
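At its core, the shuffle MtD evaluated here periodically live-migrates VMs so that an attacker's reconnaissance goes stale before it can be exploited; the toy scheduler below illustrates the mechanic, with uniform random placement standing in for the HARM-guided host selection the paper derives from its attack model:

```python
import random

def shuffle_round(placement, hosts, rng):
    """One MtD round: move every VM to a randomly chosen different host.

    `placement` maps VM id -> current host. A HARM-guided policy would rank
    candidate hosts by attack-path metrics instead of choosing uniformly.
    """
    return {vm: rng.choice([h for h in hosts if h != host])
            for vm, host in placement.items()}

rng = random.Random(42)
hosts = ["cloudA-h1", "cloudA-h2", "cloudB-h1", "cloudB-h2"]
placement = {"vm1": "cloudA-h1", "vm2": "cloudA-h1", "vm3": "cloudB-h2"}
for round_no in range(3):
    placement = shuffle_round(placement, hosts, rng)
    print(f"round {round_no}: {placement}")
```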
(This article belongs to the Special Issue Internet of Things, Sensing and Cloud Computing)
Figures:
Figure 1. Architecture of a federated cloud.
Figure 2. Deployment and migration flow diagram.
Figure 3. System architecture.
Figure 4. Federated cloud model example.
Figure 5. Comparison of security metrics on VMs before and after applying shuffle MtD.
Figure 6. Comparison of ROA before and after applying shuffle MtD.
Figure 7. Comparison of security metrics of VMs before and after applying shuffle and diversity MtD.
Figure 8. Comparison of security metrics of hosts before and after applying shuffle and diversity MtD.
Figure 9. Comparison of ROA of VMs before and after applying shuffle and diversity MtD.
11 pages, 2300 KiB  
Article
Investigation of EEG-Based Biometric Identification Using State-of-the-Art Neural Architectures on a Real-Time Raspberry Pi-Based System
by Mohamed Benomar, Steven Cao, Manoj Vishwanath, Khuong Vo and Hung Cao
Sensors 2022, 22(23), 9547; https://doi.org/10.3390/s22239547 - 6 Dec 2022
Cited by 11 | Viewed by 3443
Abstract
Despite the growing interest in the use of electroencephalogram (EEG) signals as a potential biometric for subject identification, and despite recent advances in the use of deep learning (DL) models to study neurological signals such as electrocardiogram (ECG), electroencephalogram (EEG), electroretinogram (ERG), and electromyogram (EMG), there has been a lack of exploration of state-of-the-art DL models for EEG-based subject identification, owing to the high variability of EEG features across sessions for an individual subject. In this paper, we explore the use of state-of-the-art DL models such as ResNet, Inception, and EEGNet to realize EEG-based biometrics on the BED dataset, which contains EEG recordings from 21 individuals. We obtain promising results, with accuracies of 63.21%, 70.18%, and 86.74% for ResNet, Inception, and EEGNet, respectively, whereas the previous best effort reported an accuracy of 83.51%. We also demonstrate the capability of these models to perform EEG biometric tasks in real time by developing a portable, low-cost, real-time Raspberry Pi-based system that integrates all the necessary steps of subject identification, from the acquisition of the EEG signals to the prediction of identity, while existing systems incorporate only parts of the whole pipeline.
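The real-time pipeline described above amounts to buffering the incoming EEG stream into fixed-length epochs, classifying each epoch, and letting the epochs vote on the subject's identity; the schematic sketch below uses a stand-in classifier in place of the trained networks, and the sampling rate and channel count are assumptions:

```python
import numpy as np
from collections import Counter

FS = 256            # assumed sampling rate, Hz
EPOCH_SAMPLES = FS  # one-second epochs
N_CHANNELS = 8      # assumed electrode count
N_SUBJECTS = 21     # the BED dataset contains 21 individuals

def stand_in_model(epoch: np.ndarray) -> int:
    """Placeholder for a trained ResNet/Inception/EEGNet classifier."""
    return int(np.abs(epoch).mean() * 100) % N_SUBJECTS

def identify(stream: np.ndarray) -> int:
    """Split a (channels, samples) stream into epochs and majority-vote."""
    votes = [stand_in_model(stream[:, i:i + EPOCH_SAMPLES])
             for i in range(0, stream.shape[1] - EPOCH_SAMPLES + 1, EPOCH_SAMPLES)]
    return Counter(votes).most_common(1)[0][0]

stream = np.random.default_rng(1).normal(size=(N_CHANNELS, FS * 10))
print("predicted subject id:", identify(stream))
```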
(This article belongs to the Special Issue Biosignal Sensing Analysis (EEG, MEG, ECG, PPG))
Figures:
Figure 1. Electrode montage.
Figure 2. EEG preprocessing steps.
Figure 3. EEG epoch: (a) raw data; (b) preprocessed epoch.
Figure 4. ResNet model architecture.
Figure 5. Inception model architecture.
Figure 6. EEGNet model architecture.
Figure 7. System hardware setup.
Figure 8. Real-time acquisition algorithm.
Figure 9. P–R curves for the DL models: (a) ResNet model; (b) Inception model; (c) EEGNet model.
16 pages, 415 KiB  
Article
Digital-Twin-Assisted Edge-Computing Resource Allocation Based on the Whale Optimization Algorithm
by Shaoming Qiu, Jiancheng Zhao, Yana Lv, Jikun Dai, Fen Chen, Yahui Wang and Ao Li
Sensors 2022, 22(23), 9546; https://doi.org/10.3390/s22239546 - 6 Dec 2022
Cited by 12 | Viewed by 3410
Abstract
With the rapid increase of smart Internet of Things (IoT) devices, edge networks generate a large number of computing tasks, which require edge-computing resource devices to complete the calculations. However, unreasonable edge-computing resource allocation suffers from high power consumption and resource waste. Therefore, when user tasks are offloaded to the edge-computing system, reasonable resource allocation is an important issue. This paper proposes a digital-twin-(DT)-assisted edge-computing resource-allocation model and establishes a joint-optimization function of power consumption, delay, and the unbalanced resource-allocation rate. We then develop a solution based on an improved whale optimization scheme. Specifically, we propose an improved whale optimization algorithm with a greedy initialization strategy to speed up convergence for the DT-assisted edge-computing resource-allocation problem, and we redesign the whale search strategy to improve the allocation results. Several simulation experiments demonstrate that the improved whale optimization algorithm reduces the resource-allocation objective function value, the power consumption, and the average resource-allocation imbalance rate by 12.6%, 15.2%, and 15.6%, respectively. Overall, the power consumption with the assistance of the DT is reduced to 89.6% of that required without DT assistance, thus improving the efficiency of edge-computing resource allocation.
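For context, the baseline whale optimization algorithm that the paper improves on updates each candidate solution by encircling the current best, exploring around a random whale, or spiraling toward the best; the sketch below is the plain WOA on a toy problem (the paper's IPWOA additionally uses greedy initialization and a redesigned search strategy, which are not reproduced here):

```python
import numpy as np

def woa_minimize(f, dim, lo, hi, n_whales=20, iters=200, seed=0):
    """Plain whale optimization algorithm on a box-bounded minimization problem."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_whales, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        a = 2 - 2 * t / iters                      # decreases linearly 2 -> 0
        for i in range(n_whales):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:                     # encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                              # explore around a random whale
                    Xr = X[rng.integers(n_whales)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                                  # spiral bubble-net maneuver
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best, f(best)

best, val = woa_minimize(lambda x: float(np.sum(x ** 2)), dim=5, lo=-10, hi=10)
print("best value:", val)
```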
Figures:
Figure 1. DT-assisted edge-computing resource-allocation model.
Figure 2. Resource equipment and user task encoding method.
Figure 3. IPWOA flow chart.
Figure 4. Optimized objective values after multiple iterations of different algorithms; each algorithm iterates 200 times.
Figure 5. The optimization target value after multiple rounds of iterations of different algorithms; each algorithm was executed for multiple rounds of 200 iterations each, and the average optimal optimization target value was generated.
Figure 6. Comparison of the power consumption of resource devices generated by different algorithms; the resource device power consumption was generated by the optimal allocation scheme of each algorithm.
Figure 7. Comparison of the power consumption impact of different algorithms for edge computing; the resource device power consumption was generated by the optimal allocation scheme of each algorithm.
Figure 8. Comparison of the impacts of a DT on the task-computing time.
Figure 9. Comparison of the impacts of a DT on the edge-computing power consumption; the resource device power consumption was generated by the optimal allocation scheme of each algorithm.
19 pages, 6283 KiB  
Article
Development of a Visual Perception System on a Dual-Arm Mobile Robot for Human-Robot Interaction
by Wei-Ting Weng, Han-Pang Huang, Yu-Lin Zhao and Chun-Yeon Lin
Sensors 2022, 22(23), 9545; https://doi.org/10.3390/s22239545 - 6 Dec 2022
Cited by 6 | Viewed by 3001
Abstract
This paper presents the development of a visual-perception system on a dual-arm mobile robot for human-robot interaction. The visual system integrates three subsystems: hand gesture recognition is utilized to trigger human-robot interaction; the engagement and intention of the participants are detected and quantified through a cognitive system; and visual servoing uses YOLO to identify the object to be tracked and hybrid model-based tracking to follow the object's geometry. The proposed visual-perception system is implemented on the developed dual-arm mobile robot, and experiments are conducted to validate its effectiveness in human-robot interaction applications.
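The servoing step can be summarized by the classic image-based control law v = −λ L⁺ (s − s*), which drives the image-feature error to zero; the numeric sketch below is a simplified stand-in for the paper's VVS-based scheme, with an illustrative interaction matrix rather than one derived from the tracked model:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Image-based visual servoing: v = -lambda * pinv(L) @ (s - s*)."""
    return -lam * np.linalg.pinv(L) @ (np.asarray(s) - np.asarray(s_star))

# Toy example: two point features (4 image coordinates), 6-DOF camera twist.
L = np.random.default_rng(0).normal(size=(4, 6))  # illustrative interaction matrix
s = np.array([0.10, 0.05, -0.08, 0.02])           # current features
s_star = np.zeros(4)                              # desired features
v = ibvs_velocity(s, s_star, L)
print("camera twist (vx, vy, vz, wx, wy, wz):", np.round(v, 4))
```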
(This article belongs to the Special Issue Advanced Sensors for Intelligent Control Systems)
Figures:
Graphical abstract.
Figure 1. The robot's cognitive system.
Figure 2. Schematic diagram of feature matching.
Figure 3. The architecture of the engagement model.
Figure 4. The architecture of the intention model.
Figure 5. Score range of sentence sentiment.
Figure 6. Schematic diagram of event trigger.
Figure 7. Display of the three features for tracking: (a) the edge features; (b) the keypoint features; (c) the depth features.
Figure 8. The architecture of the hybrid model-based tracking.
Figure 9. Algorithm of the VVS method.
Figure 10. Frame definitions for visual-servo control; the symbol (*) is used to represent the desired positions.
Figure 11. Block diagram of the visual-servo control for the arms.
Figure 12. The mobile robot Mobi.
Figure 13. Software architecture.
Figure 14. The hardware architecture of the mobile-robot system.
Figure 15. Experimental scenario of long-term care centers.
Figure 16. Snapshots from patrol to a conversation, then identifying the object to be tracked: (a,b) patrol; (c) recognition of a trigger gesture for human-robot interaction; (d) deployment of the engagement and intention models; (e,f) identification of the object to be tracked.
Figure 17. Snapshots of the dual-arm robot using the hybrid model-based tracking method to track objects with its arms: (a–d) the process of robotic grasping.
14 pages, 1798 KiB  
Article
Image Classification Using Multiple Convolutional Neural Networks on the Fashion-MNIST Dataset
by Olivia Nocentini, Jaeseok Kim, Muhammad Zain Bashir and Filippo Cavallo
Sensors 2022, 22(23), 9544; https://doi.org/10.3390/s22239544 - 6 Dec 2022
Cited by 18 | Viewed by 6443
Abstract
As the elderly population grows, the need for caregivers increases, which may become unsustainable for society. In this situation, the demand for automated help rises. One solution is service robotics, in which robots with automation show significant promise in working with people. In particular, household settings and the homes of older people will need these robots to perform daily activities. Clothing manipulation is such a daily activity and represents a challenging area for a robot, for which detection and classification are key steps. For this reason, in this paper, we study fashion image classification with four different neural network models to improve apparel image classification accuracy on the Fashion-MNIST dataset. The network model with the highest accuracy was then tested on a Fashion-Product dataset and a customized dataset. The results show that one of our models, the Multiple Convolutional Neural Network with 15 convolutional layers (MCNN15), improved on the state-of-the-art accuracy reported in the literature, obtaining a classification accuracy of 94.04% on the Fashion-MNIST dataset. Moreover, MCNN15 obtained 60% and 40% accuracy on the Fashion-Product dataset and the household dataset, respectively.
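For scale, a stacked-convolution classifier for the 28 × 28 grayscale Fashion-MNIST images can be expressed in a few lines of Keras; the depth and hyperparameters below are illustrative only and do not reproduce the exact MCNN15 configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def mcnn_like(n_blocks=3, num_classes=10):
    """Toy multiple-convolution classifier in the spirit of the MCNN family."""
    model = models.Sequential([layers.Input(shape=(28, 28, 1))])
    filters = 32
    for _ in range(n_blocks):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
        filters *= 2
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
model = mcnn_like()
model.fit(x_train[..., None] / 255.0, y_train, epochs=1, batch_size=128)
```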
Figures:
Figure 1. The architecture of the proposed multiple convolutional neural network models with different convolutional layers and optimized hyperparameters.
Figure 2. (a) The Fashion Product Images Dataset (https://www.kaggle.com/paramaggarwal/fashion-product-images-small, accessed on 26 October 2022); (b) the fashion household dataset.
Figure 3. (a) The progress of losses related to our model; (b) the progress of accuracies.
Figure 4. The confusion matrix of our MCNN15 model using the Fashion-MNIST dataset.
Figure 5. The plotted receiver operating characteristic (ROC) curve for MCNN15.
17 pages, 3695 KiB  
Article
Analysis and Correction of Measurement Error of Spherical Capacitive Sensor Caused by Assembly Error of the Inner Frame in the Aeronautical Optoelectronic Pod
by Tianxiang Ma, Shengqi Yang, Yongsen Xu, Dachuan Liu, Jinghua Hou and Yunqing Liu
Sensors 2022, 22(23), 9543; https://doi.org/10.3390/s22239543 - 6 Dec 2022
Cited by 3 | Viewed by 1928
Abstract
The ball joint is a multi-degree-of-freedom transmission pair; if it could replace the inner frame in an aviation photoelectric pod to carry the optical load, it would greatly simplify the pod's system structure and reduce the space occupied by the inner frame. However, installation errors in ball joint siting introduce nonlinear errors that are difficult to correct, and the two-degree-of-freedom angular displacement of the ball joint is difficult to detect, which limits its application in the precision control of two-degree-of-freedom systems. Studies of spherical capacitive sensors to date have neither tested sensors for use in an inner-frame stabilisation mechanism nor analysed the influence of installation error on sensor output. A two-axis angular experimental device was designed to measure the performance of a ball joint capacitive sensor in a frame stabilisation mechanism in an aeronautical optoelectronic pod, and a mathematical model to compensate for ball joint capacitive sensor installation error was created and tested. The experimental results show that the resolution of the capacitive sensor was 0.02° over the ±4° operating range, the repeatability factor was 0.86%, and the pulse response time was 39 μs. The designed capacitive sensor has a simple structure, high measurement accuracy, and strong robustness, and it can be integrated into ball joint applications in the frames of aeronautical photoelectric pods.
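The four-quadrant differential read-out behind the sensor recovers two tilt angles from the imbalance between opposite electrode pairs; the simplified small-angle model below illustrates the idea (the pairing of quadrants and the gain k are placeholder assumptions, not values from the paper):

```python
def tilt_from_quadrants(c1, c2, c3, c4, k=1.0):
    """Two-axis tilt from four quadrant capacitances (small-angle model).

    Opposite-pair differences, normalized by the total capacitance, are taken
    as proportional to the two angular displacements; k is a calibration gain.
    """
    total = c1 + c2 + c3 + c4
    alpha1 = k * ((c1 + c4) - (c2 + c3)) / total  # tilt about axis 1
    alpha2 = k * ((c1 + c2) - (c3 + c4)) / total  # tilt about axis 2
    return alpha1, alpha2

# Balanced quadrants give zero tilt; a small imbalance gives a small angle.
print(tilt_from_quadrants(10.0, 10.0, 10.0, 10.0))      # (0.0, 0.0)
print(tilt_from_quadrants(10.2, 9.8, 9.8, 10.2, k=40))  # (~0.8, 0.0)
```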
(This article belongs to the Section Electronic Sensors)
Show Figures
Figure 1: The sensor structure schematic diagram: (a) the four-quadrant differential capacitance is balanced; (b) the four-quadrant differential capacitance is unbalanced.
Figure 2: Functional block diagram.
Figure 3: Model of the capacitive sensor.
Figure 4: The 5 μm axial installation error analysis: (a) differential capacitance in the 30° direction; (b) output capacitance in the 45° direction.
Figure 5: The 50 μm axial installation error analysis: (a) differential capacitance in the 30° direction; (b) output capacitance in the 45° direction.
Figure 6: The 5 μm radial installation error analysis: (a) differential capacitance in the 30° direction; (b) output capacitance in the 45° direction.
Figure 7: The 50 μm radial installation error analysis: (a) differential capacitance in the 30° direction; (b) output capacitance in the 45° direction.
Figure 8: The 5 μm composite installation error analysis: (a) differential capacitance in the 30° direction; (b) differential capacitance in the 45° direction.
Figure 9: The 50 μm composite installation error analysis: (a) differential capacitance in the 30° direction; (b) differential capacitance in the 45° direction.
Figure 10: The (30 μm, 40 μm, and 50 μm) composite installation errors analysis: (a) differential capacitance in the 45° direction; (b) differential capacitance in the 150° direction.
Figure 11: The (30 μm, 40 μm, and 50 μm) composite installation error analysis: (a) differential capacitance in the 45° direction; (b) differential capacitance in the 150° direction.
Figure 12: The 50 μm composite installation error analysis: (a) differential capacitance in the 45° direction; (b) differential capacitance in the 150° direction.
Figure 13: The spherical electrodes design and processing: (a) the drive electrode; (b) the sense electrode.
Figure 14: Altitude datum measurement operations: (a) h1 measurement; (b) h2 measurement; (c) h3 measurement.
Figure 15: Capacitive sensor system.
Figure 16: PCB of the signal processing circuit.
Figure 17: Signal processing circuit output curves: (a) driver circuit output curves; (b) sensing circuit output curves.
Figure 18: Single-axis measurement experiments: (a) solution results of the angular displacement signal α1; (b) solution results of the angular displacement signal α2.
Figure 19: Dual-axis measurement experiments in the 45° direction: (a) solution results of the angular displacement signals α1 and α2; (b) 45° displacement trajectories fitted by α1 and α2.
Figure 20: Dual-axis measurement experiments in the 150° direction: (a) solution results of the angular displacement signals α1 and α2; (b) 150° displacement trajectories fitted by α1 and α2.
Figure 21: Dual-axis compensation experiments in the 45° direction: (a) compensation results of the angular displacement signals α1 and α2; (b) 45° displacement trajectories fitted by α1 and α2.
Figure 22: Dual-axis compensation experiments in the 150° direction: (a) compensation results of the angular displacement signals α1 and α2; (b) 150° displacement trajectories fitted by α1 and α2.
Figure 23: Theoretical and predicted output values of output capacitance for comparison: (a) hysteresis error in the 45° direction; (b) repeatability error in the 45° direction.
Figure 24: The sensor resolution test: (a) angular displacement α1 noise calculation; (b) angular displacement α2 noise calculation.
Figure 25: The temperature experiment.
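The four-quadrant differential read-out described in the abstract above lends itself to a very compact angle computation. Below is a minimal Python sketch, assuming a linearized model in which each axis angle is proportional to the normalized capacitance difference of one opposing quadrant pair; the sensitivity constant and the quadrant-to-axis pairing are illustrative placeholders, not the paper's calibrated values.

```python
# Linearized sketch: each axis angle is taken proportional to the
# normalized differential capacitance of one opposing quadrant pair.
# K_ALPHA (degrees per unit normalized capacitance) is a placeholder,
# not the paper's calibrated sensitivity.
K_ALPHA = 4.0

def quadrant_angles(c1, c2, c3, c4, k=K_ALPHA):
    """Estimate the two-axis angular displacement (alpha1, alpha2) from
    the four quadrant capacitances of a spherical capacitive sensor."""
    total = c1 + c2 + c3 + c4          # normalization against gap drift
    alpha1 = k * (c1 - c3) / total     # tilt resolved on the first axis
    alpha2 = k * (c2 - c4) / total     # tilt resolved on the second axis
    return alpha1, alpha2

# Balanced quadrants (as in Figure 1a) give zero displacement on both axes.
print(quadrant_angles(1.0, 1.0, 1.0, 1.0))   # -> (0.0, 0.0)
```

Normalizing by the total capacitance makes the estimate insensitive to common-mode gap changes, which is one reason differential four-quadrant schemes are attractive for this kind of sensor.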
18 pages, 1925 KiB  
Article
Repeatability of the Vibroarthrogram in the Temporomandibular Joints
by Adam Łysiak, Tomasz Marciniak and Dawid Bączkowicz
Sensors 2022, 22(23), 9542; https://doi.org/10.3390/s22239542 - 6 Dec 2022
Cited by 2 | Viewed by 2202
Abstract
Current research concerning the repeatability of joint sound examination in the temporomandibular joints (TMJ) is inconclusive; thus, the aim of this study was to investigate the repeatability of specific features of the vibroarthrogram (VAG) in the TMJ using accelerometers. The joint sounds of both TMJs were measured with VAG accelerometers in two groups, study and control, each consisting of 47 participants (n = 94). Two VAG recording sessions consisted of 10 jaw open/close cycles guided by a metronome. The intraclass correlation coefficient (ICC) was calculated for seven VAG signal features. Additionally, a k-nearest-neighbors (KNN) classifier was defined and compared with a state-of-the-art method (the joint vibration analysis (JVA) decision tree). The ICC indicated excellent (integral below 300 Hz), good (total integral, integral above 300 Hz, and median frequency), moderate (ratio of integral below to integral above 300 Hz) and poor (peak amplitude) reliability. The accuracy scores for the KNN classifier (up to 0.81) were higher than those for the JVA decision tree (up to 0.60). The results of this study could open up a new field of research focused on the features of the vibroarthrogram in the context of the TMJ, further improving the diagnostic process. Full article
(This article belongs to the Special Issue Biomedical Data in Human-Machine Interaction)
Show Figures
Figure 1: Exemplary VAG signal for (a) asymptomatic and (b) symptomatic temporomandibular joints.
Figure 2: Sensors and their placement on the subject's joints.
Figure 3: Box plots of raw features: (a) TI, (b) IB3, (c) IA3, (d) IBAR, (e) PA, (f) PF, (g) MF.
Figure 4: Box plots of norm1 features, same panels as Figure 3.
Figure 5: Box plots of norm2 features, same panels as Figure 3.
Figures A1–A2: Box plots of raw features obtained for the first and second measurements, respectively, same panels as Figure 3.
Figures A3–A4: Box plots of norm1 features obtained for the first and second measurements, respectively.
Figures A5–A6: Box plots of norm2 features obtained for the first and second measurements, respectively.
Figures A7–A9: Confusion matrices for raw, norm1, and norm2 features, respectively, used in the JVA decision tree classifier for the (a) first and (b) second signals.
Figure A10: Confusion matrices for raw features used in the KNN classifier for the (a) first and (b) second signals.
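Most of the VAG features compared in this study are band integrals of the signal's power spectrum. The following Python sketch computes them from a raw recording; the 300 Hz split comes from the abstract, while the Welch estimator settings are assumptions.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def vag_features(x, fs):
    """Spectral features of a VAG recording, assuming the paper's
    300 Hz band split; the Welch settings here are illustrative."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    ti = trapezoid(pxx, f)                        # total integral (TI)
    ib3 = trapezoid(pxx[f < 300], f[f < 300])     # integral below 300 Hz (IB3)
    ia3 = trapezoid(pxx[f >= 300], f[f >= 300])   # integral above 300 Hz (IA3)
    cdf = np.cumsum(pxx) / np.sum(pxx)
    mf = f[np.searchsorted(cdf, 0.5)]             # median frequency (MF)
    return {"TI": ti, "IB3": ib3, "IA3": ia3, "IBAR": ib3 / ia3, "MF": mf}
```

Features computed this way per session can then be fed to an ICC routine or a KNN classifier to reproduce the kind of repeatability analysis reported above.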
32 pages, 8026 KiB  
Review
A Comprehensive Review on Photoacoustic-Based Devices for Biomedical Applications
by Rita Clarisse Silva Barbosa and Paulo M. Mendes
Sensors 2022, 22(23), 9541; https://doi.org/10.3390/s22239541 - 6 Dec 2022
Cited by 16 | Viewed by 5807
Abstract
The photoacoustic effect is an emerging technology that has sparked significant interest in the research field, since an acoustic wave can be produced simply by the incidence of light on a material or tissue. This phenomenon has been extensively investigated, not only to perform photoacoustic imaging but also to develop highly miniaturized ultrasound probes that can provide biologically meaningful information. Therefore, this review aims to outline the materials, and their fabrication processes, that can be employed as photoacoustic targets, both biological and non-biological, and to report the features the main components require to achieve a given performance. When designing a device, it is of utmost importance to model it at an early stage for a deeper understanding and to ease the optimization process. As such, throughout this article, the different methods already implemented to model the photoacoustic effect are introduced, together with the advantages and drawbacks inherent in each approach. The remaining challenges faced when fabricating, modeling, and characterizing such systems are also discussed. Full article
(This article belongs to the Special Issue Photoacoustic Sensing, Imaging, and Communications)
Show Figures
Figure 1: Images generated with photoacoustic-based ultrasound probes. (a) Left atrium (LA) wall M-mode image of a swine's heart; (b) 2D images of aorta (left) and carotid artery (right) of swine samples (scale bar: 2 mm); (c) 2D images of an ex vivo piece of normal term human placenta (left) and 3D rendering of the reconstructed image (right); (d) B-mode intraluminal imaging of a swine carotid artery. Reproduced with permission from [1,59,60,61].
Figure 2: Scheme of the photophone setup, the first optical communication device created by Alexander Graham Bell and his assistant. Reproduced with permission from [68].
Figure 3: Schematic representation of the photoacoustic effect.
Figure 4: Absorption coefficient versus wavelength for different endogenous contrast agents. Adapted from [12,23,30,44].
Figure 5: Normalized absorption coefficient versus wavelength for some exogenous contrast agents. Adapted from [23,78,107,108].
Figure 6: Images of the optical fiber's distal end covered by crystal violet–PDMS composite. Reproduced with permission from [61].
Figure 7: Images of the optical fiber's distal end covered by gold nanoparticle–PDMS composite. Reproduced with permission from [61].
Figure 8: Image of the optical fiber's distal end covered by reduced graphene oxide combined with PDMS. Reproduced with permission from [118].
Figure 9: Organization of the carbon nanofiber and PDMS layers. Reproduced with permission from [123].
Figure 10: Comparison between the performance of CNT–PDMS composite, gold nanoparticle–PDMS composite, and chromium film. (a) Frequency spectra normalized to the DC value of the CNT–PDMS composite. (b) Frequency spectra normalized to each DC value compared to that of the laser that was employed. Reproduced with permission from [93].
Figure 11: Comparison of acoustic pressure at 3 mm away from the coatings and normalized power spectra generated for the MWCNT–PDMS integrated coating (red line), MWCNT–xylene/PDMS coating (green line), and MWCNT–gel/PDMS coating (blue line). Reproduced with permission from [63].
Figure 12: CNTs' average light absorbance (A_avg) as a function of the light incidence angle. Reproduced with permission from [132].
Figure 13: Normalized ultrasound power spectra of several materials used for photoacoustic-based ultrasound transmitters. Adapted from [61,82,110,118,120,123,125,126,133].
Figure 14: Curves showing the wavelength at which maximum optical absorption occurs for each material. Adapted from [61,134,135,136,137,138,139].
Figure 15: Combination of PDMS layers with thin metallic layers. (a) Chromium layer between two PDMS layers. (b) Titanium layer sandwiched between PDMS layers. Reproduced with permission from [142].
Figure 16: Ultrasound wave generated through the photoacoustic effect in a model developed in COMSOL Multiphysics compared against an analytical solution: (a) over time; (b) over frequency. Reproduced with permission from [154].
Figure 17: Example of geometry designed in a simulation tool.
Figure 18: Mesh built in a simulation tool.
Figure 19: Photoacoustic effect simulation results. (a) Acoustic pressure over time. (b) Acoustic pressure spectrum. (c) Radiation pattern for the predominant emission frequency.
Figure 20: Schematic of the experimental setup for an all-optical ultrasound probe. Reproduced with permission from [60].
Figure 21: Pictures of MEMS moving mirrors. Reproduced with permission from [167].
Figure 22: Schematic of optical ultrasound detectors. (a) Mach–Zehnder interferometer. (b) Fiber Bragg grating. (c) Micro-ring resonator. (d) Fabry–Pérot. Reproduced with permission from [91,173,174,175].
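For readers approaching the modeling side of this review, the usual starting point is the initial pressure rise p0 = Γ·μa·F under stress and thermal confinement. A tiny illustrative computation follows; the tissue values are placeholders, not numbers taken from the review.

```python
# Standard confined-regime relation: p0 = Gamma * mu_a * F.
# All numbers below are illustrative placeholders for soft tissue,
# not values taken from the review.
gruneisen = 0.2      # Grueneisen parameter (dimensionless)
mu_a = 50.0          # optical absorption coefficient (1/m)
fluence = 100.0      # local laser fluence (J/m^2)

p0 = gruneisen * mu_a * fluence      # initial pressure rise (Pa)
print(f"p0 = {p0:.0f} Pa")           # -> p0 = 1000 Pa
```

This algebraic estimate is what full numerical models (such as the COMSOL simulation compared in Figure 16) refine by adding geometry, pulse shape and acoustic propagation.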
23 pages, 2173 KiB  
Article
Validity of Two Consumer Multisport Activity Trackers and One Accelerometer against Polysomnography for Measuring Sleep Parameters and Vital Data in a Laboratory Setting in Sleep Patients
by Mario Budig, Riccardo Stoohs and Michael Keiner
Sensors 2022, 22(23), 9540; https://doi.org/10.3390/s22239540 - 6 Dec 2022
Cited by 10 | Viewed by 5036
Abstract
Two commercial multisport activity trackers (Garmin Forerunner 945 and Polar Ignite) and the accelerometer ActiGraph GT9X were evaluated in measuring vital data, sleep stages and sleep/wake patterns against polysomnography (PSG). Forty-nine adult patients with suspected sleep disorders (30 males/19 females) completed a one-night PSG sleep examination followed by a multiple sleep latency test (MSLT). Sleep parameters, time in bed (TIB), total sleep time (TST), wake after sleep onset (WASO), sleep onset latency (SOL), awake time (WASO + SOL), sleep stages (light, deep, REM sleep) and the number of sleep cycles were compared. Both commercial trackers showed high accuracy in measuring vital data (HR, HRV, SpO2, respiratory rate), r > 0.92. For TIB and TST, all three trackers showed medium to high correlation, r > 0.42. Garmin had significant overestimation of TST, with MAE of 84.63 min and MAPE of 25.32%. Polar also had an overestimation of TST, with MAE of 45.08 min and MAPE of 13.80%. ActiGraph GT9X results were inconspicuous. The trackers significantly underestimated awake times (WASO + SOL) with weak correlation, r = 0.11–0.57. The highest MAE was 50.35 min and the highest MAPE was 83.02% for WASO for Garmin and ActiGraph GT9X; Polar had the highest MAE of 21.17 min and the highest MAPE of 141.61% for SOL. Garmin showed significant deviations for sleep stages (p < 0.045), while Polar only showed significant deviations for sleep cycle (p = 0.000), r < 0.50. Garmin and Polar overestimated light sleep and underestimated deep sleep, Garmin significantly, with MAE up to 64.94 min and MAPE up to 116.50%. Both commercial trackers Garmin and Polar did not detect any daytime sleep at all during the MSLT test. The use of the multisport activity trackers for sleep analysis can only be recommended for general daily use and for research purposes. If precise data on sleep stages and parameters are required, their use is limited. The accuracy of the vital data measurement was adequate. Further studies are needed to evaluate their use for medical purposes, inside and outside of the sleep laboratory. The accelerometer ActiGraph GT9X showed overall suitable accuracy in detecting sleep/wake patterns. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
Show Figures
Figure 1: Bland–Altman plots of sleep parameters (n = 49). TIB = time in bed, TST = total sleep time, Awake = awake time (WASO + SOL), WASO = wake after sleep onset, SOL = sleep onset latency (all expressed in minutes), SE = sleep efficiency (in %). The x-axis represents the mean values of the device and PSG; the y-axis the differences between the PSG and the device; the dashed black lines the upper and lower limits of agreement (mean ± 1.96 SD); the solid red line the mean difference; the solid blue line the trend; the shaded green area the 95% CI (confidence interval) of the mean difference.
Figure 2: Boxplot analysis of calculated deviations in sleep parameters, Garmin, Polar and ActiGraph GT9X against the gold standard PSG (n = 49). TIB, TST, Awake, WASO and SOL as in Figure 1 (all expressed in minutes); the x represents the mean deviation.
Figure 3: Bland–Altman plots of sleep stages (n = 49): NREM, light sleep (NREM1 + NREM2), deep sleep (SWS, NREM3) and REM sleep = rapid eye movement sleep (all expressed in minutes). Axes and markings as in Figure 1.
Figure 4: Boxplot analysis of calculated deviations (a) in sleep stages, Garmin and Polar against the gold standard PSG (n = 49): Light = light sleep, Deep = slow-wave sleep (SWS), REM = rapid eye movement sleep (all expressed in minutes); and (b) in sleep onset time (start of sleep) and wake-up time (end of sleep), Garmin, Polar and ActiGraph GT9X against the gold standard PSG (n = 49); the x represents the mean deviation.
Figure 5: Night sleep hypnograms (Garmin, PSG, Polar): (a) CPAP patient (male); (b) sleep patient (female). Arousal = partial, temporary or complete wake-up reaction with sleep-disrupting effect [66]; MT = movement time; Wake = awake time; REM = rapid eye movement sleep; S1–S3 represent NREM1–NREM3 (N1 + 2 = light sleep; N3 = deep sleep [70]). The x-axis represents time in hours; the y-axis the respective sleep stage.
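The MAE, MAPE and Bland–Altman limits of agreement reported throughout this validation are straightforward to reproduce. A short sketch, assuming paired per-night device and PSG values in minutes:

```python
import numpy as np

def agreement_stats(device_min, psg_min):
    """MAE, MAPE (%) and Bland-Altman 95% limits of agreement between a
    tracker and PSG; inputs are paired per-night values in minutes."""
    device = np.asarray(device_min, float)
    psg = np.asarray(psg_min, float)
    err = device - psg
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err) / psg)
    bias = np.mean(err)                      # Bland-Altman mean difference
    half_width = 1.96 * np.std(err, ddof=1)  # limits: bias +/- 1.96 SD
    return mae, mape, (bias - half_width, bias + half_width)

# Invented TST values (minutes), for illustration only.
print(agreement_stats([420, 390, 450], [350, 360, 380]))
```

A positive bias here corresponds to the TST overestimation the study reports for both consumer trackers.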
16 pages, 4903 KiB  
Article
Learning for Data Synthesis: Joint Local Salient Projection and Adversarial Network Optimization for Vehicle Re-Identification
by Yanbing Chen, Wei Ke, Wei Zhang, Cui Wang, Hao Sheng and Zhang Xiong
Sensors 2022, 22(23), 9539; https://doi.org/10.3390/s22239539 - 6 Dec 2022
Viewed by 1897
Abstract
The problem of vehicle re-identification in surveillance scenarios has grown in popularity as a research topic. Deep learning has been successfully applied to re-identification tasks in the last few years due to its superior performance. However, deep learning approaches require a large volume of training data, and it is particularly crucial in vehicle re-identification tasks to have a sufficient number of varying image samples for each vehicle. Collecting and constructing such a large and diverse dataset from natural environments is labor-intensive. We offer a novel image sample synthesis framework to automatically generate new variants of training data by augmentation. First, we use an attention module to locate a local salient projection region in an image sample. Then, a lightweight convolutional neural network, the parameter agent network, is responsible for generating further image transformation states. Finally, an adversarial module is employed to ensure that the images in the dataset are distorted while retaining their structural identities. This adversarial module helps to generate more appropriate and difficult training samples for vehicle re-identification. Moreover, we select the most difficult sample and update the parameter agent network accordingly to improve the performance. Our method draws on the adversarial network strategy and the self-attention mechanism, which can dynamically decide the region selection and transformation degree of the synthesized images. Extensive experiments on the VeRi-776, VehicleID, and VERI-Wild datasets achieve good performance. Specifically, our method outperforms the state-of-the-art in mAP accuracy on VeRi-776 by 2.15%. Moreover, on VERI-Wild, a significant improvement of 7.15% is achieved. Full article
(This article belongs to the Section Vehicular Sensing)
Show Figures
Figure 1: Image augmentation results from an original to an augmented sample.
Figure 2: Overall framework, with three parts: Salient Projection Region Location (blue), Local Region Projection Transformation (red), and Transformation State Adversarial Module (green).
Figure 3: Image results from an original to spatial relation-aware samples.
Figure 4: Overview of the Parameter Agent Network.
Figure 5: Overview of the transformation state generation process from s1 to s2, ..., s11.
Figure 6: Distance of different images; the augmented samples xi are matched with the input image one by one.
Figure 7: Framework of the baseline.
Figure 8: Augmented results with various modules. Col 1: input images. Col 2: heatmap using relation-aware attention. Col 3: augmented results using SPRL + LRPT. Col 4: augmented results using SPRL + LRPT + TSAM.
Figure 9: Workflow of the Salient Projection Region Location and Local Region Projection Transformation.
Figure 10: Visualization of the input images and the eleven augmented images.
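The "most difficult sample" selection described in the abstract can be pictured as choosing, among the generated variants, the one farthest from the original in feature space. A hedged PyTorch sketch of that idea follows; the cosine-distance criterion and the `embed` extractor are assumptions, since the paper's exact adversarial objective is not reproduced here.

```python
import torch
import torch.nn.functional as F

def hardest_variant(embed, image, variants):
    """Return the augmented variant farthest (cosine distance) from the
    original image in embedding space. `embed` is assumed to be any
    feature extractor mapping a (N, C, H, W) batch to (N, D) features."""
    with torch.no_grad():
        anchor = F.normalize(embed(image.unsqueeze(0)), dim=1)    # (1, D)
        feats = F.normalize(embed(torch.stack(variants)), dim=1)  # (N, D)
        dists = 1.0 - (feats @ anchor.T).squeeze(1)               # (N,)
    return variants[int(dists.argmax())]
```

In the paper's framework, the selected hard sample would additionally drive an update of the parameter agent network; here it is only returned for inspection.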
14 pages, 2199 KiB  
Article
Surface Functionalization Strategies of Polystyrene for the Development of Peptide-Based Toxin Recognition
by Ahmed M. Debela, Catherine Gonzalez, Monica Pucci, Shemsia M. Hudie and Ingrid Bazin
Sensors 2022, 22(23), 9538; https://doi.org/10.3390/s22239538 - 6 Dec 2022
Cited by 2 | Viewed by 3323
Abstract
The development of a robust surface functionalization method is indispensable for controlling the efficiency, sensitivity, and stability of a detection system. Polystyrene (PS) has been used as a support material in various biomedical fields. Here, we report several strategies of polystyrene surface functionalization, using a siloxane derivative, divinyl sulfone, cyanogen bromide, and carbonyl diimidazole, for the immobilization of biological recognition elements (a peptide developed to detect ochratoxin A) for a binding assay with ochratoxin A (OTA). Our objective is to develop future detection systems that would use polystyrene cuvettes as immobilization supports for biological recognition elements; the goal of this article is to demonstrate the proof of concept of this immobilization support. The results obtained reveal the successful modification of polystyrene surfaces with the coupling agents. Furthermore, the immobilization of biological recognition elements for the OTA binding assay with horseradish peroxidase conjugated to ochratoxin A (OTA-HRP) confirms that the functionalized peptide immobilized on polystyrene retains its ability to bind its ligand. The presented polystyrene surface functionalization strategies offer alternative possibilities for immobilizing biomolecules in excellent order-forming monolayers, owing to their robust surface chemistries, and validate a proof of concept for the development of highly efficient, sensitive, and stable future biosensors for food or water pollution monitoring. Full article
(This article belongs to the Special Issue Chemical Sensors in Environmental Pollution and Green Energy)
Show Figures
Figure 1: Schematic representation of peptide functionalization of polystyrene cuvettes.
Figure 2: Water contact angle of PS substrates before and after functionalization.
Figure 3: XPS spectra of the N1s, O1s, C1s, and S2p core levels for the various substrates functionalized by the methods under study: (A) CDI–peptide-modified PS; (B) CN–peptide-modified PS; (C) DV–peptide-modified PS; (D) GP-modified PS.
Figure 4: ATR FT-IR of the various PS surfaces following activation with UV piranha, coupling agents and peptide coupling: (A) CDI-modified PS; (B) CNB-modified PS; (C) DVS-modified PS; (D) GPTS-modified PS.
Figure 5: Atomic force micrographs (scale 2 × 2 µm²) after immobilization of the various PS coupling agents and peptide coupling: (A) GPTS-modified PS; (B) PS-GP-Pept; (C) PS-CDI; (D) PS-CDI-Pept; (E) PS-DVS; (F) PS-DV-Pept; (G) PS-CNB; (H) PS-CN-Pept. The images on the right of each panel correspond to 3D views.
Figure 6: OTA calibration curves in PBS on a 96-well plate. Competitive ELISA for the detection of OTA in PBS was performed with a peptide SNLHPK concentration of 1 µg/mL and an OTA-HRP conjugate concentration of 1000 µg·L⁻¹. B and B0 represent the bound enzyme activity measured in the presence or absence of competitor, respectively. Each point is the average ± standard deviation of three independent assays, each with 4 measurements (n = 12).
Figure 7: Binding curves for serial dilutions of OTA-HRP (in 10 mM PBS, pH 7.4) in cuvettes functionalized with 2 mg/mL of the peptide SNLHPK: (A) CDI-modified PS; (B) CNBr-modified PS; (C) GPTS-modified PS; (D) DVS-modified PS. In all experiments, the blank is the solution of unconjugated HRP.
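Competitive ELISA calibration curves such as the B/B0 plot in Figure 6 are commonly fitted with a four-parameter logistic. A minimal SciPy sketch, using invented readings rather than the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, slope):
    """Four-parameter logistic commonly used for competitive ELISA
    calibration; ic50 is the midpoint concentration."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** slope)

# Invented B/B0 (%) readings versus OTA concentration (ug/L),
# for illustration only.
ota = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
bb0 = np.array([95.0, 88.0, 75.0, 45.0, 30.0, 12.0])

params, _ = curve_fit(four_pl, ota, bb0, p0=[100.0, 5.0, 3.0, 1.0])
print(f"estimated IC50 ~ {params[2]:.2f} ug/L")
```

The fitted midpoint is the usual figure of merit for comparing assay sensitivity across the different surface chemistries.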
23 pages, 813 KiB  
Article
Consensus Tracking of Nonlinear Agents Using Distributed Nonlinear Dynamic Inversion with Switching Leader-Follower Connection
by Sabyasachi Mondal and Antonios Tsourdos
Sensors 2022, 22(23), 9537; https://doi.org/10.3390/s22239537 - 6 Dec 2022
Cited by 1 | Viewed by 1903
Abstract
In this paper, a consensus tracking protocol for nonlinear agents is presented, based on the Nonlinear Dynamic Inversion (NDI) technique. Implementation of such a technique is new in the context of the consensus tracking problem. The tracking capability of NDI is exploited for a leader-follower multi-agent scenario. We provide all the mathematical details to establish its theoretical foundation, together with a convergence study showing the efficiency of the proposed controller. The performance of the proposed controller is evaluated in the presence of both (a) random switching topology among the agents and (b) random switching of leader-follower connections, which is realistic and not reported in the literature. The follower agents track various trajectories generated by a dynamic leader, demonstrating the tracking capability of the proposed controller. The results obtained from the simulation study show how efficiently this controller handles switching topology and switching leader-follower connections. Full article
(This article belongs to the Collection Sensors and Intelligent Control Systems)
Show Figures
Figure 1: Control U1 of agents.
Figure 2: Control U2 of agents.
Figure 3: Consensus tracking of state X1 of the agents.
Figure 4: Consensus tracking of state X2 of the agents.
Figure 5: Consensus error Ei in state X1 of agents.
Figure 6: Consensus error E2 in state X1 of agents.
Figure 7: Control U1 of agents.
Figure 8: Control U2 of agents.
Figure 9: Consensus tracking of state X1 of the agents.
Figure 10: Consensus tracking of state X2 of the agents.
Figure 11: Consensus error Ei in state X1 of agents.
Figure 12: Consensus error E2 in state X1 of agents.
Figure 13: Consensus error Ei in state X1 of agents.
Figure 14: Consensus error E2 in state X1 of agents.
Figure 15: Control U1 of agents.
Figure 16: Control U2 of agents.
Figure 17: Consensus tracking of state X1 of the agents.
Figure 18: Consensus tracking of state X2 of the agents.
Figure 19: Consensus error Ei in state X1 of agents.
Figure 20: Consensus error E2 in state X1 of agents.
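The core of NDI is to invert the known agent dynamics so that a desired decay of the local consensus error can be imposed directly. A deliberately simplified Python sketch for scalar agents with dynamics dx_i = f(x_i) + g(x_i)·u_i follows; the gain, the graph encoding and the first-order dynamics are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def ndi_consensus_step(x, f, g, adj, b, x_leader, k=2.0):
    """One control update for agents with dx_i = f(x_i) + g(x_i) * u_i.
    adj is the (possibly switching) adjacency matrix with zero diagonal;
    b[i] > 0 marks the agent currently connected to the leader."""
    n = len(x)
    u = np.zeros(n)
    for i in range(n):
        # Local consensus error: disagreement with neighbours plus the
        # switching leader connection.
        e = sum(adj[i][j] * (x[i] - x[j]) for j in range(n))
        e += b[i] * (x[i] - x_leader)
        # Dynamic inversion: cancel f and impose the decay -k * e.
        u[i] = (-f(x[i]) - k * e) / g(x[i])
    return u

# Example: three agents with mildly nonlinear dynamics on a line graph.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
b = np.array([1.0, 0.0, 0.0])        # agent 0 currently hears the leader
u = ndi_consensus_step(np.array([0.5, -0.2, 1.0]),
                       f=lambda s: np.sin(s), g=lambda s: 1.0 + 0.1 * s**2,
                       adj=adj, b=b, x_leader=0.0)
print(u)
```

Switching topology corresponds to changing `adj` between steps, and switching leader-follower connections to moving the nonzero entry of `b`, which is exactly the randomized regime the simulation study exercises.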
23 pages, 9022 KiB  
Article
Sea Mine Detection Framework Using YOLO, SSD and EfficientDet Deep Learning Models
by Dan Munteanu, Diana Moina, Cristina Gabriela Zamfir, Ștefan Mihai Petrea, Dragos Sebastian Cristea and Nicoleta Munteanu
Sensors 2022, 22(23), 9536; https://doi.org/10.3390/s22239536 - 6 Dec 2022
Cited by 22 | Viewed by 7935
Abstract
In the context of new geopolitical tensions due to the current armed conflicts, safety of navigation has been threatened by the large number of sea mines placed, in particular, within the sea conflict areas. Additionally, since a large number of mines have recently been reported to have drifted into the territories of Black Sea countries such as Romania, Bulgaria, Georgia and Turkey, which have intense commercial and tourism activities in their coastal areas, the safety of those economic activities is threatened by possible accidents arising from this situation. The use of deep learning in military operations is widespread, especially for combating drones and other killer robots. Therefore, the present research addresses the detection of floating and underwater sea mines using images recorded from cameras (mounted on drones, submarines, ships and boats). Due to the low number of available sea mine images, the current research used both an augmentation technique and synthetic image generation (by overlapping images of different types of mines over water backgrounds), and two datasets were built (one for floating mines and one for underwater mines). Three deep learning models, YOLOv5, SSD and EfficientDet (YOLOv5 and SSD for floating mines; YOLOv5 and EfficientDet for underwater mines), were trained and compared. Across these three model families, the resulting system revealed high accuracy in object recognition, namely the detection of floating and anchored mines. Moreover, tests carried out on portable computing equipment, such as a Raspberry Pi, illustrated the possibility of including such an application in real-time scenarios, with the current processing time of 2 s per frame improvable if devices use high-performance cameras. Full article
(This article belongs to the Special Issue ICSTCC 2022: Advances in Monitoring and Control)
Show Figures
Figure 1: The framework structure of the present research.
Figure 2: YOLO model [43].
Figure 3: SSD deep learning model [43].
Figure 4: FPN top-down perspective [51].
Figure 5: EfficientDet architecture [49].
Figure 6: Background images.
Figure 7: Mine images.
Figure 8: Synthetic images with floating mines.
Figure 9: Hand annotation of floating mine images in CVAT.
Figure 10: CVAT image annotation of naval mines for submarines.
Figure 11: YOLOv5 network training parameters.
Figure 12: Metrics during YOLOv5 training.
Figure 13: Floating mine detection, YOLOv5 model.
Figure 14: YOLOv5 network training parameters for naval mines for submarines.
Figure 15: Underwater mine detection, YOLOv5 model.
Figure 16: mAP accuracy for the SSD model.
Figure 17: SSD floating mine detection.
Figure 18: EfficientDet network models.
Figure 19: Evolution of the loss function during training.
Figure 20: EfficientDet training results.
Figure 21: Underwater mine detection with EfficientDet.
Figure 22: Ship and mine detection using Raspberry Pi and the YOLO model.
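The synthetic-image step, overlaying mine cutouts on water backgrounds, can be prototyped in a few lines with Pillow. A sketch follows, with illustrative paths and scale ranges; the returned bounding box is the raw material for a YOLO-style label.

```python
import random
from PIL import Image

def synthesize(background_path, mine_path):
    """Paste a transparent mine cutout onto a water background at a
    random position and scale; paths and ranges are illustrative."""
    bg = Image.open(background_path).convert("RGBA")
    mine = Image.open(mine_path).convert("RGBA")
    scale = random.uniform(0.1, 0.3)               # mine size vs. frame width
    w = int(bg.width * scale)
    mine = mine.resize((w, int(mine.height * w / mine.width)))
    x = random.randint(0, bg.width - mine.width)
    y = random.randint(0, bg.height - mine.height)
    bg.alpha_composite(mine, (x, y))               # in-place RGBA overlay
    box = (x, y, x + mine.width, y + mine.height)  # source for a YOLO label
    return bg.convert("RGB"), box

# Example usage with hypothetical file names:
# image, box = synthesize("water_bg.jpg", "mine_cutout.png")
```

Because the paste coordinates are known, the annotation comes for free, which is the main appeal of synthetic generation when real mine imagery is scarce.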