Sensors, Volume 21, Issue 8 (April-2 2021) – 318 articles

Cover Story (view full-size image): Metal-modified montmorillonite (MMT) particles have been fabricated for the design of novel cost-effective hybrid materials that are suitable as fluorescence-based sensing platforms. The combined effect of MMT and the metallic moiety, ascribed to the aggregation-induced emission and metal-enhanced fluorescence phenomena, respectively, leads to a remarkable fluorescence enhancement. We showed that such signal amplification improves the sensitivity of fluorescence-based detection mechanisms, such as ELISA assays, and allows the direct detection of biomolecules by exploiting their self-fluorescence. The versatility of the proposed hybrid materials was further demonstrated by exploring their plasmonic properties to develop liquid label-free detection systems.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for published papers, which are available in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 6134 KiB  
Article
Intelligent LED Certification System in Mass Production
by Galina Malykhina, Dmitry Tarkhov, Viacheslav Shkodyrev and Tatiana Lazovskaya
Sensors 2021, 21(8), 2891; https://doi.org/10.3390/s21082891 - 20 Apr 2021
Viewed by 2104
Abstract
It is impossible to effectively use light-emitting diodes (LEDs) in medicine and telecommunication systems without knowing their main characteristics, the most important of which is efficiency. Reliable measurement of LED efficiency holds particular significance for mass production automation. The method for measuring LED efficiency consists of comparing two cooling curves of the LED crystal obtained after exposure to short current pulses of positive and negative polarities. The measurement results are adversely affected by noise in the electrical measuring circuit. Widely used instrumental noise suppression filters, as well as classical digital infinite impulse response (IIR) and finite impulse response (FIR) filters and adaptive filters, fail to yield satisfactory results. Unlike adaptive filters, blind methods do not require a special reference signal, which makes them more promising for removing noise and reconstructing the waveform when measuring the efficiency of LEDs. The article suggests a method for sequential blind signal extraction based on a cascading neural network. Statistical analysis of signal and noise values has revealed that the signal and the noise have different forms of the probability density function (PDF). Therefore, it is preferable to use high-order statistical moments characterizing the shape of the PDF for signal extraction. Generalized statistical moments, namely generalized skewness and generalized kurtosis, were used as the objective function for optimizing the neural network parameters. The order of the generalized moments was chosen according to the criterion of the maximum Mahalanobis distance. The proposed method has made it possible to implement a multi-temporal comparison of the crystal cooling curves for measuring LED efficiency.
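As a rough illustration of why high-order moments can separate a deterministic signal from Gaussian noise, the sketch below compares a generalized fourth-order moment (a kurtosis-like statistic) of a Gaussian noise sample with that of a decaying, cooling-curve-like signal. The moment definition, the signal shape, and all parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def generalized_moment(x, p):
    """Normalised central moment of order p: mean(|z|**p) for standardised z.

    For p = 4 this behaves like a kurtosis; the exact generalized-moment
    definitions used in the paper may differ (this is a stand-in).
    """
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(np.abs(z) ** p))

rng = np.random.default_rng(0)
noise = rng.normal(size=100_000)       # Gaussian noise sample
t = np.linspace(0.0, 1.0, 100_000)
signal = np.exp(-5.0 * t)              # decaying "cooling curve" shape

# The two PDFs differ in shape, so their high-order moments separate them:
print(generalized_moment(noise, 4))    # ≈ 3 for a Gaussian
print(generalized_moment(signal, 4))   # noticeably larger than 3 here
```

An objective built from such moments can then be maximised (or its distance from the Gaussian value 3 maximised) to steer an extraction filter toward the non-Gaussian component.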
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1
<p>Diagram of voltage variation across an LED, showing three heating and cooling periods of the crystal during the LED test. (<b>a</b>) The positive and negative signals presented at different scales. (<b>b</b>) Different gains for the positive and negative signals provided in the measuring channels: a gain of 1/10 for the positive signal and 1/100 for the negative signal.</p>
Figure 2
<p>Single-layer neural network training.</p>
Figure 3
<p>Activation function (<b>a</b>) and objective function surface (<b>b</b>).</p>
Figure 4
<p>Dependence of the root-mean-square error of kurtosis on the sample size.</p>
Figure 5
<p>Dependence of the Mahalanobis distance on the order of the statistical moments.</p>
Figure 6
<p>Estimation of autoregressive coefficients (blue line) with confidence intervals (red dashes).</p>
Figure 7
<p>Dependence of the extracted signal on discrete time n = t/Δt, Δt = 1 µs, for extraction based on skewness.</p>
Figure 8
<p>Dependence of the extracted signal on discrete time n = t/Δt, Δt = 1 µs, for extraction based on kurtosis.</p>
Figure 9
<p>Dependence of the extracted signal on discrete time n = t/Δt, Δt = 1 µs, for extraction using the combined criterion based on skewness and kurtosis.</p>
Figure 10
<p>Dependence of the extracted signal on discrete time n = t/Δt, Δt = 1 µs, for extraction using the combined criterion based on skewness and kurtosis.</p>
Figure 11
<p>Dependence of the extracted signal on discrete time n = t/Δt, Δt = 1 µs, for extraction using the combined criterion based on generalized skewness, generalized kurtosis, and a second-order adaptation algorithm.</p>
Figure 12
<p>Noise suppression using a 9th-order Butterworth filter with a cutoff frequency of 330 kHz.</p>
Figure 13
<p>Noise suppression using a 9th-order Type I Chebyshev filter with a cutoff frequency of 330 kHz.</p>
Figure 14
<p>Noise suppression via averaging of corresponding pulses.</p>
16 pages, 5271 KiB  
Communication
Increasing the Reliability of Data Collection of Laser Line Triangulation Sensor by Proper Placement of the Sensor
by Dominik Heczko, Petr Oščádal, Tomáš Kot, Daniel Huczala, Ján Semjon and Zdenko Bobovský
Sensors 2021, 21(8), 2890; https://doi.org/10.3390/s21082890 - 20 Apr 2021
Cited by 10 | Viewed by 3860
Abstract
In this paper, we investigated the effect of the incidence angle of a laser ray on the reflected laser intensity. A dataset on this dependence is presented for materials commonly used in industry, such as transparent and non-transparent plastics and aluminum alloys with different surface roughness. The measurements were performed with a laser line triangulation sensor and a UR10e robot. The results indicate where to place the sensor relative to the scanned object, thus increasing the reliability of the sensor's data collection.
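A toy diffuse-plus-specular reflection model illustrates why the received intensity depends so strongly on the angle of incidence for shiny surfaces; the model form and coefficients below are illustrative assumptions, not the paper's measured data:

```python
import math

def received_intensity(theta_deg, k_diffuse=0.3, k_specular=0.7, shininess=20):
    """Toy diffuse + specular reflection model (not the paper's model).

    theta_deg: angle of incidence (AoI) relative to the surface normal.
    The sensor camera sits close to the laser, so the specular lobe is
    assumed strongest near normal incidence (0 degrees).
    """
    theta = math.radians(theta_deg)
    diffuse = k_diffuse * math.cos(theta)               # Lambertian term
    specular = k_specular * math.cos(theta) ** shininess  # narrow lobe
    return max(0.0, diffuse + specular)

# Intensity drops sharply away from normal incidence on a shiny surface:
for angle in (0, 10, 30, 60):
    print(angle, round(received_intensity(angle), 3))
```

Rougher (more diffuse) surfaces correspond to a larger `k_diffuse` and smaller `shininess`, which flattens the curve, consistent with the idea that sensor placement matters most for smooth, specular samples.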
(This article belongs to the Section Sensors and Robotics)
Show Figures

Figure 1
<p>Reflection phenomenon. For an ideal specular reflection, the angle of incidence (AoI) θ<sub>i</sub> is the same as the angle of reflection θ<sub>r</sub> relative to the surface normal n; (<b>a</b>) the general distribution of the reflection; (<b>b</b>) the distribution of the reflection by [<a href="#B23-sensors-21-02890" class="html-bibr">23</a>].</p>
Figure 2
<p>LLT sensor schematic. The angle α is the angle between the laser beam and the camera axis (triangulation angle); β is the vertical field of view of the CCD camera; δ is the angle of the cone of the projected laser. α, β, and δ are constants given by the sensor manufacturer. θ is the AoI of the plane of laser rays (beams) relative to the surface normal (n).</p>
Figure 3
<p>Schematic of the experimental measurement. D<sub>REF</sub> is the reference distance of the sensor (Z-axis); θ is the AoI of the plane of laser rays (beams) relative to the surface normal (n). Point B represents the laser line (perpendicular to the image) around which the robot rotates the LLT sensor.</p>
Figure 4
<p>An example of raw intensity data for the full profile (3200 points) provided by the sensor control unit. The X-axis represents the captured points; the Y-axis represents intensity from 0 to 10 mW.</p>
Figure 5
<p>Experimental workplace.</p>
Figure 6
<p>Communication diagram of the workplace.</p>
Figure 7
<p>Plastic holder for scanning samples of thickness: (<b>a</b>) 3; (<b>b</b>) 5; (<b>c</b>) 12 mm.</p>
Figure 8
<p>Scanned samples: (<b>a</b>) Transparent colored plastics (acrylic sheets); (<b>b</b>) transparent pure plastic (acrylic sheet); (<b>c</b>) non-transparent colored plastics (PVC plastics); (<b>d</b>) aluminum alloy with different roughness (Ra0.8–12.5).</p>
Figure 9
<p>Raw intensity data in a part of the scanned profile. Differences of measured intensity at one pose: (<b>a</b>) Scan no. 4; (<b>b</b>) scan no. 14; (<b>c</b>) scan no. 16.</p>
Figure 10
<p>The measured intensity of one profile on a shiny smooth surface. Points in the laser line are no longer detected on the left and right sides because of low laser intensity.</p>
Figure 11
<p>Schematic of the laser ray geometry for computing the AoI of the i-th laser ray.</p>
Figure 12
<p>Processed intensity of aluminum alloy of different roughness: (<b>a</b>) Ra0.8; (<b>b</b>) Ra1.6; (<b>c</b>) Ra3.2; (<b>d</b>) Ra6.3; (<b>e</b>) Ra12.5; (<b>f</b>) median values in each measured position for all aluminum samples.</p>
Figure 13
<p>Raw intensity data for orange plexiglass. The other transparent plastics have similar intensity.</p>
Figure 14
<p>Raw intensity data for red plexiglass at AoI −21° and −15°. Points in the laser line are no longer detected on the left and right sides because of the low laser intensity; noise can also be observed, which leads to a high occurrence of outliers.</p>
Figure 15
<p>Median values of intensity in measured positions of transparent plastic samples.</p>
Figure 16
<p>The processed intensity of non-transparent plastics of different colors: (<b>a</b>) Black; (<b>b</b>) gray; (<b>c</b>) median values in each measured position for non-transparent plastic samples.</p>
Figure 17
<p>Dependence of the laser intensity on the angle of incidence for all samples; the specular reflection area is marked by a red rectangle with a dashed line.</p>
18 pages, 19369 KiB  
Article
Can Markerless Pose Estimation Algorithms Estimate 3D Mass Centre Positions and Velocities during Linear Sprinting Activities?
by Laurie Needham, Murray Evans, Darren P. Cosker and Steffi L. Colyer
Sensors 2021, 21(8), 2889; https://doi.org/10.3390/s21082889 - 20 Apr 2021
Cited by 19 | Viewed by 4733
Abstract
The ability to accurately and non-invasively measure 3D mass centre positions and their derivatives can provide rich insight into the physical demands of sports training and competition. This study examines a method for non-invasively measuring mass centre velocities using markerless human pose estimation and Kalman smoothing. Marker (Qualisys) and markerless (OpenPose) motion capture data were captured synchronously for sprinting and skeleton push starts. Mass centre positions and velocities derived from raw markerless pose estimation data contained large errors for both sprinting and skeleton pushing (mean ± SD = 0.127 ± 0.943 and −0.197 ± 1.549 m·s−1, respectively). Signal processing methods such as Kalman smoothing substantially reduced the mean error (±SD) in horizontal mass centre velocities (0.041 ± 0.257 m·s−1) during sprinting, but the precision remained poor. Applying pose estimation to activities which exhibit unusual body poses (e.g., skeleton pushing) appears to elicit more erroneous results due to poor performance of the pose estimation algorithm. Researchers and practitioners should apply these methods with caution to activities beyond sprinting, as pose estimation algorithms may not generalise well to the activity of interest. Retraining the model using activity-specific data to produce more specialised networks is therefore recommended.
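The idea of recovering mass centre velocities from noisy markerless positions can be sketched with a constant-velocity Kalman filter (the study additionally applies smoothing, i.e. a backward pass over the filtered states; the noise variances, sampling rate, and synthetic data below are illustrative assumptions):

```python
import numpy as np

def kalman_velocity(positions, dt, meas_var=1e-4, accel_var=100.0):
    """Constant-velocity Kalman filter estimating 1D velocity from noisy
    positions. A simplified stand-in for the paper's pipeline; the noise
    variances here are illustrative, not tuned values from the study."""
    F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
    H = np.array([[1.0, 0.0]])                          # observe position only
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],   # process noise
                              [dt**3 / 2, dt**2]])
    R = np.array([[meas_var]])                          # measurement noise
    x = np.array([positions[0], 0.0])                   # [position, velocity]
    P = np.eye(2)
    vels = []
    for z in positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the position measurement
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        vels.append(x[1])
    return np.array(vels)

# Synthetic "sprint": constant 8 m/s plus 1 cm position noise at 200 Hz
rng = np.random.default_rng(1)
t = np.arange(0.0, 2.0, 0.005)
pos = 8.0 * t + rng.normal(0.0, 0.01, t.size)
v = kalman_velocity(pos, dt=0.005)
print(v[-1])   # converges toward 8 m/s, within the filter's noise
```

Naive differentiation of the same noisy positions would amplify the noise by roughly 1/dt; the filter's state model is what makes the velocity estimate usable, which mirrors the paper's finding that raw pose-estimation derivatives are unusable without such processing.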
(This article belongs to the Special Issue Sensors in Sports Biomechanics)
Show Figures

Figure 1
<p>Example individual trial demonstrating sagittal plane CoM positions for criterion (cyan), unfiltered (green), low-pass filtered (red) and Kalman smoothed (blue) data during sprinting. Shaded areas depict the left foot (red shading) and right foot (green shading) stance phases. Footfall events were computed from marker-based foot kinematics [<a href="#B31-sensors-21-02889" class="html-bibr">31</a>]. OpenPose joint centre reconstructions (top) at touch-down and toe-off events are overlaid for context.</p>
Figure 2
<p>Upper: example individual trial demonstrating vertical CoM velocities for criterion (cyan), unfiltered (green), low-pass filtered (red) and Kalman smoothed (blue) data as a function of horizontal CoM position during sprinting. Lower: example individual trial demonstrating horizontal CoM velocities for the same conditions as a function of horizontal CoM position during sprinting. Shaded areas depict the left foot (red shading) and right foot (green shading) stance phases. Footfall events were computed from marker-based foot kinematics [<a href="#B31-sensors-21-02889" class="html-bibr">31</a>]. OpenPose joint centre reconstructions (top) at touch-down and toe-off events are overlaid for context.</p>
Figure 3
<p>Bland–Altman and linear regression plots demonstrating the mean differences between OpenPose unfiltered (<b>top-left</b>), low-pass filtered (<b>top-right</b>) and Kalman smoothed (<b>lower-left</b>) CoM horizontal velocity against marker-based CoM horizontal velocity during sprinting.</p>
Figure 4
<p>Example individual trial demonstrating sagittal plane CoM positions for criterion (cyan), unfiltered (green), low-pass filtered (red) and Kalman smoothed (blue) data during pushing. Shaded areas depict the left foot (red shading) and right foot (green shading) stance phases. Footfall events were computed from marker-based foot kinematics [<a href="#B31-sensors-21-02889" class="html-bibr">31</a>]. OpenPose joint centre reconstructions (top) at touch-down and toe-off events are overlaid for context.</p>
Figure 5
<p>Upper: example individual trial demonstrating vertical CoM velocities for criterion (cyan), unfiltered (green), low-pass filtered (red) and Kalman smoothed (blue) data as a function of horizontal CoM position during sprinting. Lower: example individual trial demonstrating horizontal CoM velocities for the same conditions as a function of horizontal CoM position during pushing. Shaded areas depict the left foot (red shading) and right foot (green shading) stance phases. Footfall events were computed from marker-based foot kinematics [<a href="#B31-sensors-21-02889" class="html-bibr">31</a>]. OpenPose joint centre reconstructions (top) at touch-down and toe-off events are overlaid for context.</p>
Figure A1
<p>OpenPose Body_25 keypoint model. Adapted from the OpenPose GitHub repository [<a href="https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/output.md#pose-output-format" target="_blank">https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/output.md#pose-output-format</a>, accessed on 15 July 2020].</p>
Figure A2
<p>Example CoM model accuracy check for an individual participant’s flight phase using marker-based data. The unfiltered vertical CoM trajectory (left, green) is shown with a 2nd-order polynomial fit to the data (left, blue). The double-differentiated result (right, cyan) provides an indication of the accuracy of the CoM model.</p>
Figure A3
<p>OpenPose keypoint detection examples depicting pose estimation during sprinting. Blue circles depict the 2D OpenPose keypoint locations, with the circle fill colour representing the left (green) and right (red) sides of the body. Cubes depict the 3D reconstructed joint centres for the left (green) and right (red) sides of the body projected onto the 2D image plane. This example demonstrates overall good-quality 2D detections and 3D reconstruction. Errors in the detection of the right foot can be observed; however, these will largely be corrected during the 3D fusion process, which uses detections from other fields of view.</p>
Figure A4
<p>OpenPose keypoint detection examples depicting pose estimation during pushing. Blue circles depict the 2D OpenPose keypoint locations, with the circle fill colour representing the left (green) and right (red) sides of the body. Cubes depict the 3D reconstructed joint centres for the left (green) and right (red) sides of the body projected onto the 2D image plane. This example demonstrates that limb switching has occurred for the legs, but this has largely been corrected during the 3D fusion process. However, further issues can be seen with the left arm, where OpenPose has detected joint centres with limited success in multiple fields of view.</p>
Figure A5
<p>Bland–Altman and linear regression plots demonstrating the differences between OpenPose unfiltered (top-left), low-pass filtered (top-right) and Kalman smoothed (lower-left) CoM horizontal position against marker-based CoM horizontal position during sprinting.</p>
Figure A6
<p>Bland–Altman and linear regression plots demonstrating the differences between OpenPose unfiltered (top-left), low-pass filtered (top-right) and Kalman smoothed (lower-left) CoM vertical position against marker-based CoM vertical position during sprinting.</p>
Figure A7
<p>Bland–Altman and linear regression plots demonstrating the differences between OpenPose unfiltered (top-left), low-pass filtered (top-right) and Kalman smoothed (lower-left) CoM horizontal position against marker-based CoM horizontal position during pushing.</p>
Figure A8
<p>Bland–Altman and linear regression plots demonstrating the differences between OpenPose unfiltered (top-left), low-pass filtered (top-right) and Kalman smoothed (lower-left) CoM vertical position against marker-based CoM vertical position during pushing.</p>
Figure A9
<p>Bland–Altman and linear regression plots demonstrating the differences between OpenPose unfiltered (top-left), low-pass filtered (top-right) and Kalman smoothed (lower-left) CoM vertical velocity against marker-based CoM vertical velocity during sprinting.</p>
Figure A10
<p>Bland–Altman and linear regression plots demonstrating the differences between OpenPose unfiltered (top-left), low-pass filtered (top-right) and Kalman smoothed (lower-left) CoM horizontal velocity against marker-based CoM horizontal velocity during pushing.</p>
Figure A11
<p>Bland–Altman and linear regression plots demonstrating the differences between OpenPose unfiltered (top-left), low-pass filtered (top-right) and Kalman smoothed (lower-left) CoM vertical velocity against marker-based CoM vertical velocity during pushing.</p>
20 pages, 4997 KiB  
Article
Short- and Long-Term Effects of a Scapular-Focused Exercise Protocol for Patients with Shoulder Dysfunctions—A Prospective Cohort
by Cristina dos Santos, Mark A. Jones and Ricardo Matias
Sensors 2021, 21(8), 2888; https://doi.org/10.3390/s21082888 - 20 Apr 2021
Cited by 7 | Viewed by 4976
Abstract
Current clinical practice lacks consistent evidence in the management of scapular dyskinesis. This study aims to determine the short- and long-term effects of a scapular-focused exercise protocol facilitated by real-time electromyographic biofeedback (EMGBF) on pain and function in individuals with rotator cuff related pain syndrome (RCS) and anterior shoulder instability (ASI). One hundred and eighty-three patients were divided into two groups (n = 117 RCS and n = 66 ASI) and guided through a structured exercise protocol focusing on scapular dynamic control. Values of pain and function (shoulder pain and disability index (SPADI) questionnaire, complemented by the numeric pain rating scale (NPRS) and disabilities of the arm, shoulder, and hand (DASH) questionnaire) were assessed at the initial, 4-week, and 2-year follow-up and compared within and between groups. There were significant differences in pain and function improvement between the initial and 4-week assessments. There were no differences in the values of DASH 1st part and SPADI between the 4-week and 2-year follow-up. There were no differences between groups at the baseline and long-term, except for DASH 1st part and SPADI (p < 0.05). Only 29 patients (15.8%) had a recurrence episode at follow-up. These results provide valuable information on the positive results of the protocol in the short- and long-term.
Show Figures

Figure 1
<p>Summary of a session of the scapular-focused exercise protocol.</p>
Figure 2
<p>Scapular-focused exercise protocol flow diagram.</p>
Figure A1
<p>Assessment and treatment form.</p>
Figure A2
<p>Reassessment and next treatment form.</p>
Figure A3
<p>Scapular-focused treatment protocol.</p>
18 pages, 14763 KiB  
Article
Analytical Evaluation of Signal-to-Noise Ratios for Avalanche- and Single-Photon Avalanche Diodes
by Andre Buchner, Stefan Hadrath, Roman Burkard, Florian M. Kolb, Jennifer Ruskowski, Manuel Ligges and Anton Grabmaier
Sensors 2021, 21(8), 2887; https://doi.org/10.3390/s21082887 - 20 Apr 2021
Cited by 16 | Viewed by 6664
Abstract
Performance of systems for optical detection depends on the choice of the right detector for the right application. Designers of optical systems for ranging applications can choose from a variety of highly sensitive photodetectors, of which the two most prominent ones are linear mode avalanche photodiodes (LM-APDs or APDs) and Geiger-mode APDs or single-photon avalanche diodes (SPADs). Both achieve high responsivity and fast optical response, while maintaining low noise characteristics, which is crucial in low-light applications such as fluorescence lifetime measurements or high intensity measurements, for example, Light Detection and Ranging (LiDAR), in outdoor scenarios. The signal-to-noise ratio (SNR) of detectors is used as an analytical, scenario-dependent tool to simplify detector choice for optical system designers depending on technologically achievable photodiode parameters. In this article, analytical methods are used to obtain a universal SNR comparison of APDs and SPADs for the first time. Different signal and ambient light power levels are evaluated. The low noise characteristic of a typical SPAD leads to high SNR in scenarios with overall low signal power, but high background illumination can saturate the detector. LM-APDs achieve higher SNR in systems with higher signal and noise power but compromise signals with low power because of the noise characteristic of the diode and its readout electronics. Besides pure differentiation of signal levels without time information, ranging performance in LiDAR with time-dependent signals is discussed for a reference distance of 100 m. This evaluation should support LiDAR system designers in choosing a matching photodiode and allows for further discussion regarding future technological development and multi-pixel detector designs in a common framework.
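The kind of scenario-dependent SNR comparison described above can be sketched with two textbook expressions: a shot-noise-limited SNR for the LM-APD and a Poisson-statistics SNR for the SPAD. Both formulas are simplified (no dead-time/pile-up saturation, lumped amplifier noise) and the parameter values are assumptions, not the values from the paper's Tables 1–3:

```python
import math

Q_E = 1.602e-19  # electron charge [C]

def snr_apd(p_sig, p_bg, M=100, F=5.0, R=0.5, B=100e6,
            i_dark=1e-9, i_amp=1e-12):
    """Simplified shot-noise-limited SNR of a linear-mode APD.

    p_sig/p_bg: signal/background optical power [W]; M: gain; F: excess
    noise factor; R: responsivity [A/W]; B: bandwidth [Hz]; i_dark: primary
    dark current [A]; i_amp: amplifier noise current [A]. All illustrative.
    """
    i_signal = M * R * p_sig
    shot_var = 2 * Q_E * M**2 * F * (R * (p_sig + p_bg) + i_dark) * B
    return i_signal / math.sqrt(shot_var + i_amp**2)

def snr_spad(n_sig, n_bg, n_dcr, N=1000):
    """Poisson-statistics SNR of a SPAD over N accumulated laser shots.

    n_sig/n_bg/n_dcr: mean signal/background/dark counts per shot.
    Ignores dead time, so it overestimates SNR under strong illumination.
    """
    return N * n_sig / math.sqrt(N * (n_sig + n_bg + n_dcr))

print(snr_apd(1e-6, 1e-7))         # strong-signal scenario favours the APD
print(snr_spad(0.1, 0.01, 1e-5))   # photon-starved scenario favours the SPAD
```

Sweeping `p_sig`/`p_bg` (or the count rates) over a grid reproduces the qualitative picture in the abstract: the SPAD wins at low signal power, while the APD holds up better as both signal and background grow.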
(This article belongs to the Special Issue Single Photon Counting Image Sensor)
Show Figures

Figure 1
<p>APD with biasing voltage and transimpedance amplifier.</p>
Figure 2
<p>SPAD with a combined active/passive quenching circuit from [<a href="#B16-sensors-21-02887" class="html-bibr">16</a>].</p>
Figure 3
<p>SNR for a single pixel SPAD and APD detector for a distance of 100 m and N = 1000 accumulations. (<b>a</b>) SNR values for the APD [<a href="#B21-sensors-21-02887" class="html-bibr">21</a>] (<a href="#sensors-21-02887-t001" class="html-table">Table 1</a>). (<b>b</b>) Achievable SNR for a SPAD detector in single photon mode with parameters according to <a href="#sensors-21-02887-t002" class="html-table">Table 2</a> and N<sub>SPAD</sub> = 1. SNR = 3 drawn as red contour for η<sub>PDE</sub> = 0.2 (solid line), η<sub>PDE</sub> = 0.05 (dashed line) and η<sub>PDE</sub> = 0.005 (dot-dashed line).</p>
Figure 4
<p>Behavior of the SPAD detector for higher multi-event levels k and the limiting case of the linear detector. N<sub>Ph</sub> is set to 30, following from dead time and system range. The SNR = 3 contour is drawn for η<sub>PDE</sub> = 0.2, multi-event levels k, and the linear detector (dashed line). The line for k = 1 is equivalent to <a href="#sensors-21-02887-f003" class="html-fig">Figure 3</a>b.</p>
Figure 5
<p>Comparison case for a single pixel APD detector of diameter 500 µm and a SPAD detector with 100 pixels and respective diameter of 50 µm.</p>
Figure 6
<p>SNR = 3 contours of the multi pixel SPAD detector with η<sub>PDE</sub> = 0.2 and k = 1 for different pixel numbers N<sub>SPAD</sub> = [1, 10, 100, 1000, 10,000], plotted over total event rates R<sub>B</sub> and R<sub>L</sub>. Other values from <a href="#sensors-21-02887-t002" class="html-table">Table 2</a>. Red cross: multi pixel scenario used in <a href="#sensors-21-02887-f007" class="html-fig">Figure 7</a>.</p>
Figure 7
<p>SNR from (30) for constant event rates r<sub>L</sub> = 0.1 GHz and r<sub>B</sub> = 0.4 GHz. SPAD and scenario parameters from <a href="#sensors-21-02887-t002" class="html-table">Table 2</a> and <a href="#sensors-21-02887-t003" class="html-table">Table 3</a>, respectively. Red contour for SNR = 3. Red cross: multi pixel SPAD with N<sub>SPAD</sub> = 100 and r<sub>DCR</sub> = 1 kHz.</p>
Figure 8
<p>Contour lines for receiver performance when comparing a single APD to an equally sized SPAD detector (<b>a</b>) in different operation modes of the SPAD (single/multi-event detection) and (<b>b</b>) additionally for 10,000 SPAD pixels N<sub>SPAD</sub>. Contour lines according to (14) and (30), with values from <a href="#sensors-21-02887-t001" class="html-table">Table 1</a>, <a href="#sensors-21-02887-t002" class="html-table">Table 2</a> and <a href="#sensors-21-02887-t003" class="html-table">Table 3</a>, at SNR = 3 are shown in each case.</p>
22 pages, 4143 KiB  
Article
Comparison of Spaceborne and UAV-Borne Remote Sensing Spectral Data for Estimating Monsoon Crop Vegetation Parameters
by Jayan Wijesingha, Supriya Dayananda, Michael Wachendorf and Thomas Astor
Sensors 2021, 21(8), 2886; https://doi.org/10.3390/s21082886 - 20 Apr 2021
Cited by 10 | Viewed by 3301
Abstract
Various remote sensing data have been successfully applied to monitor crop vegetation parameters for different crop types. Those successful applications mostly focused on one sensor system or a single crop type. This study compares how two different sensor data (spaceborne multispectral vs unmanned [...] Read more.
Various remote sensing data have been successfully applied to monitor crop vegetation parameters for different crop types. Those successful applications mostly focused on one sensor system or a single crop type. This study compares how data from two different sensors (spaceborne multispectral vs. unmanned aerial vehicle-borne hyperspectral) can be used to estimate crop vegetation parameters from three monsoon crops in tropical regions: finger millet, maize, and lablab. The study was conducted in two experimental field layouts (irrigated and rainfed) in Bengaluru, India, over the primary agricultural season in 2018. Each experiment contained n = 4 replicates of three crops with three different nitrogen fertiliser treatments. Two regression algorithms were employed to estimate three crop vegetation parameters: leaf area index, leaf chlorophyll concentration, and canopy water content. Overall, no clear pattern emerged of whether multispectral or hyperspectral data is superior for crop vegetation parameter estimation: hyperspectral data showed better estimation accuracy for finger millet vegetation parameters, while multispectral data indicated better results for maize and lablab vegetation parameter estimation. This study’s outcome reveals the potential of the two remote sensing platforms and their spectral data for monitoring monsoon crops and provides insight for future studies in selecting the optimal remote sensing spectral data for monsoon crop parameter estimation. Full article
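The vegetation indices this study compares (e.g., NDVI and the water index WI shown in its Figure 3) reduce to simple band arithmetic. A minimal sketch follows; the band-ratio forms and reflectance values are illustrative assumptions, not taken from the paper:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalised difference vegetation index from NIR and red reflectance (0-1)."""
    return (nir - red) / (nir + red)

def water_index(r900: float, r970: float) -> float:
    """A simple water index (WI) as a band ratio; the 900/970 nm bands are illustrative."""
    return r900 / r970

# Illustrative reflectances for a dense green canopy
print(round(ndvi(0.45, 0.05), 3))  # 0.8
```

High NDVI values like this one indicate dense green vegetation; the regression models in the study map such index values to parameters like leaf area index.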
Show Figures

Figure 1
<p>(<b>a</b>) Bengaluru, India; (<b>b</b>) Overview of the two experiment sites overlaid with Google satellite layer; (<b>c</b>) irrigated experiment layout, and (<b>d</b>) rainfed experiment layout with true colour composite Cubert hyperspectral image (Red = 642 nm, Green = 550 nm, Blue = 494 nm).</p>
Full article ">Figure 2
<p>Average spectral reflectance data for millet, lablab, and maize from Cubert (black) and WorldView3 (grey) data for irrigated (solid line) and rainfed (dashed line) experiments.</p>
Full article ">Figure 3
<p>Distribution of crop-wise vegetation indices (VI) for finger millet, lablab, and maize from Cubert (CUB) and WorldView3 (WV3) data from irrigated (grey) and rainfed (black) experiments. (NDVI: normalised difference vegetation index, DATT4: The 4th VI introduced by Datt (1998), MTVI: modified triangular vegetation index, REIP: red-edge inflexion point, and WI: water index).</p>
Full article ">Figure 4
<p>Observed vs predicted values of the best performing models for (<b>a</b>) leaf area index (LAI), (<b>b</b>) leaf chlorophyll content (LCC), and (<b>c</b>) canopy water content (CWC). The remote sensing data type (CUB or WV3) and modelling method (LR or RFR) for the best models are indicated as “RS data type + modelling method” (e.g., CUB + LR). The blue line is the fitted regression line between predicted and observed values, and the black line is the 1:1 line.</p>
Full article ">Figure A1
<p>Correlation between vegetation indexes from two remote sensing data Cubert (black) and WorldView3 (grey) and crop vegetation parameters leaf area index (LAI), leaf chlorophyll content (LCC), and crop water content (CWC) for finger millet, maize, and lablab.</p>
Full article ">Figure A2
<p>Distribution of actual impurity reduction value-based important wavebands for two remote sensing datasets (Cubert–black and WorldView3–grey) for leaf area index (LAI) estimation (<b>a</b>,<b>d</b>,<b>g</b>), leaf chlorophyll content (LCC) estimation (<b>b</b>,<b>e</b>,<b>h</b>), canopy water content (CWC) estimation (<b>c</b>,<b>f</b>,<b>i</b>) for finger millet (<b>a</b>,<b>b</b>,<b>c</b>), lablab (<b>d</b>,<b>e</b>,<b>f</b>), and maize (<b>g</b>,<b>h</b>,<b>i</b>) crops.</p>
Full article ">Figure A3
<p>Distribution of the normalised residuals values against (<b>a</b>) water treatments and (<b>b</b>) fertiliser treatments from the best models for leaf area index (LAI) estimation, leaf chlorophyll content (LCC) estimation, and canopy water content (CWC) estimation for finger millet, lablab, and maize. The dashed line at y = 0 represents zero normalised residual value. (ns or NS: not significant, *: <span class="html-italic">p</span> &lt; 0.05, **: <span class="html-italic">p</span> &lt; 0.01, ***: <span class="html-italic">p</span> &lt; 0.001).</p>
Full article ">
19 pages, 7240 KiB  
Communication
A Smart Sensing System of Water Quality and Intake Monitoring for Livestock and Wild Animals
by Wei Tang, Amin Biglari, Ryan Ebarb, Tee Pickett, Samuel Smallidge and Marcy Ward
Sensors 2021, 21(8), 2885; https://doi.org/10.3390/s21082885 - 20 Apr 2021
Cited by 10 | Viewed by 8939
Abstract
This paper presents a water intake monitoring system for animal agriculture that tracks individual animal watering behavior, water quality, and water consumption. The system is deployed in an outdoor environment to reach remote areas. The proposed system integrates motion detectors, cameras, water level [...] Read more.
This paper presents a water intake monitoring system for animal agriculture that tracks individual animal watering behavior, water quality, and water consumption. The system is deployed in an outdoor environment to reach remote areas. The proposed system integrates motion detectors, cameras, water level sensors, flow meters, Radio-Frequency Identification (RFID) systems, and water temperature sensors. The data collection and control are performed using Arduino microcontrollers with custom-designed circuit boards. The data associated with each drinking event are water consumption, water temperature, drinking duration, animal identification, and pictures. The data and pictures are automatically stored on Secure Digital (SD) cards. The prototypes are deployed in a remote grazing site located in Tucumcari, New Mexico, USA. The system can be used to perform water consumption and watering behavior studies of both domestic animals and wild animals. The current system automatically records the drinking behavior of 29 cows over a two-week period at the remote ranch. Full article
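The per-event record described above (RFID tag, flow-meter count, duration, temperature) can be sketched as a simple data structure; the calibration constant and field names below are hypothetical, not from the paper:

```python
from dataclasses import dataclass

# Hypothetical calibration: flow-meter pulses per litre (sensor-specific)
PULSES_PER_LITRE = 450

@dataclass
class DrinkingEvent:
    tag_id: str          # RFID ear-tag identifier
    pulses: int          # raw flow-meter pulse count for the visit
    duration_s: float    # visit duration from motion-detector timestamps
    water_temp_c: float  # trough water temperature

    @property
    def litres(self) -> float:
        """Convert raw pulses to litres using the meter calibration."""
        return self.pulses / PULSES_PER_LITRE

event = DrinkingEvent("982_000123", pulses=9000, duration_s=95.0, water_temp_c=18.5)
print(f"{event.tag_id} drank {event.litres:.1f} L")  # 982_000123 drank 20.0 L
```

On the deployed hardware such records would be appended to the SD card per drinking event, keyed by the RFID read.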
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1
<p>The proposed smart water intake monitoring system with multiple sensors and data collection.</p>
Full article ">Figure 2
<p>Hardware block diagram of the smart water intake monitoring system.</p>
Full article ">Figure 3
<p>The flow chart of the control program for sensing, processing, and data storage.</p>
Full article ">Figure 4
<p>Drinking station deployed in the testing site.</p>
Full article ">Figure 5
<p>Drinking station with a cow drinking from the trough.</p>
Full article ">Figure 6
<p>System setup at a remote testing site with drinking stations and a central water source.</p>
Full article ">Figure 7
<p>Setup at system test site with a central electrical box and four drinking stations.</p>
Full article ">Figure 8
<p>The side view of the drinking station showing the Radio-Frequency Identification (RFID) reader and the motion detector triggered camera.</p>
Full article ">Figure 9
<p>The flow meter and temperature sensor deployed in the trough.</p>
Full article ">Figure 10
<p>The junction electrical box implemented in each drinking station.</p>
Full article ">Figure 11
<p>The central electrical box implemented for four drinking stations. (<b>a</b>) Four Arduino boards, one for each drinking station. (<b>b</b>) An Arduino board with breakout boards. (<b>c</b>) Water information displayed on an LCD screen.</p>
Full article ">Figure 12
<p>Statistical box plot of the water level sensor readings.</p>
Full article ">Figure 13
<p>Recorded water intake data by each individual cow.</p>
Full article ">Figure 14
<p>Recorded visits of the drinking stations by each individual cow.</p>
Full article ">Figure 15
<p>Daily average watering behavior of the drinking station in terms of the number of visits and water intake (liters).</p>
Full article ">Figure 16
<p>Images taken by the camera triggered by the motion detector: (<b>a</b>) Animal ID readable. (<b>b</b>) Animal ID unreadable due to light. (<b>c</b>) Animal ID blocked. (<b>d</b>) False trigger of the camera with no drinking event. (<b>e</b>) Evening drinking event with animal ID readable. (<b>f</b>) Evening event with animal ID unreadable.</p>
Full article ">Figure 17
<p>Recorded visit duration by individual animals.</p>
Full article ">Figure 18
<p>Recorded water temperature data for different days and times.</p>
Full article ">
17 pages, 42546 KiB  
Communication
Miniaturised Low-Cost Gamma Scanning Platform for Contamination Identification, Localisation and Characterisation: A New Instrument in the Decommissioning Toolkit
by Yannick Verbelen, Peter G. Martin, Kamran Ahmad, Suresh Kaluvan and Thomas B. Scott
Sensors 2021, 21(8), 2884; https://doi.org/10.3390/s21082884 - 20 Apr 2021
Cited by 6 | Viewed by 3192
Abstract
Formerly clandestine, abandoned and legacy nuclear facilities, whether associated with civil or military applications, represent a significant decommissioning challenge owing to the lack of knowledge surrounding the existence, location and types of radioactive material(s) that may be present. Consequently, mobile and highly deployable [...] Read more.
Formerly clandestine, abandoned and legacy nuclear facilities, whether associated with civil or military applications, represent a significant decommissioning challenge owing to the lack of knowledge surrounding the existence, location and types of radioactive material(s) that may be present. Consequently, mobile and highly deployable systems that are able to identify, spatially locate and compositionally assay contamination ahead of remedial actions are of vital importance. Deployment imposes constraints on dimensions resulting from small-diameter access ports or pipes. Herein, we describe a prototype low-cost, miniaturised and rapidly deployable ‘cell characterisation’ gamma-ray scanning system to allow for the examination of enclosed (internal) or outdoor (external) spaces for radioactive ‘hot-spots’. The readout from the miniaturised and lead-collimated gamma-ray spectrometer, which is progressively rastered through a stepped snake motion, is combined with distance measurements derived from a single-point laser range-finder to obtain an array of measurements in order to yield a 3-dimensional point-cloud, based on a polar coordinate system—scaled for radiation intensity. As a smaller and more cost-effective platform than those presently available, it is able to produce a millimetre-accurate 3D volumetric rendering of a space—whether internal or external, onto which fully spectroscopic radiation intensity data can be overlain to pinpoint the exact positions at which (even low abundance) gamma-emitting materials exist. Full article
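Each raster step yields a pan/tilt angle pair plus a laser range, which maps to one point of the polar-coordinate point cloud. A minimal sketch of that conversion follows; the angle convention is an assumption, not taken from the paper:

```python
import math

def polar_to_cartesian(pan_deg: float, tilt_deg: float, range_m: float):
    """Convert a pan/tilt angle pair plus laser range to an (x, y, z) point.

    Convention (an assumption, not the paper's): pan rotates about the vertical
    axis, tilt is measured up from the horizontal plane.
    """
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return x, y, z

# One raster sample: 0 deg pan, 0 deg tilt, 2.5 m range -> point straight ahead
print(polar_to_cartesian(0.0, 0.0, 2.5))  # (2.5, 0.0, 0.0)
```

Repeating this over the snake-motion raster, with each point coloured by the spectrometer count rate, produces the intensity-scaled point cloud described in the abstract.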
(This article belongs to the Collection Multi-Sensor Information Fusion)
Show Figures

Figure 1
<p>CC-RIAS prototype with components identified.</p>
Full article ">Figure 2
<p>The CC-RIAS device deployed; (<b>a</b>) in the field using a tripod to survey for fine-scale fallout particle material, and (<b>b</b>) on the end of a PaR manipulation system as used for remote-handling applications at nuclear facilities. The scanning area is highlighted in (<b>a</b>), with the <math display="inline"> <semantics> <msup> <mrow/> <mn>137</mn> </msup> </semantics> </math>Cs radioactive sources highlighted in (<b>b</b>).</p>
Full article ">Figure 3
<p>Block diagram of the CC-RIAS prototype.</p>
Full article ">Figure 4
<p>Schematic representation of the detector, showing its CZT detection crystal (cube in the far back with dimensions <span class="html-italic">e</span> × <span class="html-italic">f</span> × <span class="html-italic">g</span>), collimator with aperture dimensions <span class="html-italic">a</span> × <span class="html-italic">c</span> and thickness <span class="html-italic">b</span>, focal length <span class="html-italic">d</span>, and pixel projection on a surface at a distance <span class="html-italic">h</span>. The green cross represents the reference planes, with the unattenuated pixel area marked in red, and extended (but partially attenuated by the collimator) pixel area is marked in blue.</p>
Full article ">Figure 5
<p>Calculation of the travel distance <span class="html-italic">z</span> through solid collimator material for incident angles larger than a direct projection through the collimator aperture allows. Colors are matched with <a href="#sensors-21-02884-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 6
<p>3D radiation intensity (colour-map) point-cloud of an internal space exhibiting vertical faces representative of walls. A region with elevated activity at the centre of the scan is identified—representing an accumulation of radioactive material. Even without visible image overlay, a high density point cloud enables observers to identify object contours with ease. In this scan, the pixel overlap is approx. 700%, causing a spatial averaging effect equivalent to a Gaussian blur. In this scan, the frame projection is ca. 0.8 m tall and 0.6 m wide at a distance of ca. 2.5 m.</p>
Full article ">Figure 7
<p>Comparison of gamma spectra captured by the CC-RIAS at different deployment locations within the REE ore processing facilities of PChP. Photopeaks at different energies correspond to decay products of NORM (U and Th series), in varying concentrations as isotopes are selectively removed in the REE purification process. Count rates were normalised to facilitate comparison of spectral recordings at different locations. All graphs are on a linear scale.</p>
Full article ">Figure 8
<p>(<b>a</b>) and (<b>b</b>) overlain radiation intensity scans (CPS) obtained from two perspectives by the CC-RIAS system (position of scanning device indicated by white dots), (<b>c</b>) 3-dimensional photogrammetry reconstruction of an object of interest, and (<b>d</b>) the 3D photo-realistic rendering overlain with the point-cloud data captured by the CC-RIAS platform.</p>
Full article ">Figure 9
<p>Comparison of gamma spectra captured by the CC-RIAS for different deployment locations in the CEZ, showing consistency in contamination across the exclusion zone. The photopeak at 662 keV identifies <math display="inline"> <semantics> <msup> <mrow/> <mn>137</mn> </msup> </semantics> </math>Cs as it decays into <math display="inline"> <semantics> <msup> <mrow/> <mn>137</mn> </msup> </semantics> </math>Ba from its metastable isomer <math display="inline"> <semantics> <msup> <mrow/> <mrow> <mn>137</mn> <mi mathvariant="normal">m</mi> </mrow> </msup> </semantics> </math>Ba. CC-RIAS confirms <math display="inline"> <semantics> <msup> <mrow/> <mn>137</mn> </msup> </semantics> </math>Cs is the only gamma-emitting fallout isotope of significance remaining in the CEZ as of 2019. All measurements made at a distance of 1–10 cm from the surface, with normalised peak heights to facilitate comparison.</p>
Full article ">Figure 10
<p>Mobile deployment of CC-RIAS in outdoor environments on the PChP nuclear site in Kamianska, Ukraine. CC-RIAS was deployed to scan waste water tanks to determine radiological contamination.</p>
Full article ">
25 pages, 8725 KiB  
Article
A Smart Home Energy Management System Using Two-Stage Non-Intrusive Appliance Load Monitoring over Fog-Cloud Analytics Based on Tridium’s Niagara Framework for Residential Demand-Side Management
by Yung-Yao Chen, Ming-Hung Chen, Che-Ming Chang, Fu-Sheng Chang and Yu-Hsiu Lin
Sensors 2021, 21(8), 2883; https://doi.org/10.3390/s21082883 - 20 Apr 2021
Cited by 23 | Viewed by 5548
Abstract
Electricity is a vital resource for various human activities, supporting customers’ lifestyles in today’s modern technologically driven society. Effective demand-side management (DSM) can alleviate ever-increasing electricity demands that arise from customers in downstream sectors of a smart grid. Compared with the traditional means [...] Read more.
Electricity is a vital resource for various human activities, supporting customers’ lifestyles in today’s modern technologically driven society. Effective demand-side management (DSM) can alleviate ever-increasing electricity demands that arise from customers in downstream sectors of a smart grid. Compared with traditional energy management systems, non-intrusive appliance load monitoring (NIALM) monitors relevant electrical appliances in a non-intrusive manner. Fog (edge) computing addresses the need to capture, process and analyze data generated and gathered by Internet of Things (IoT) end devices, and is an advanced IoT paradigm for applications in which resources, such as the computing capability of a central data center acting as the cloud, are placed at the edge of the network. The literature has largely left unexplored NIALM developed over fog-cloud computing and conducted as part of a home energy management system (HEMS). In this study, a Smart HEMS prototype based on Tridium’s Niagara Framework® has been established over fog (edge)-cloud computing, where NIALM as an IoT application in energy management has also been investigated in the framework. The SHEMS prototype established over fog-cloud computing in this study utilizes an artificial neural network-based NIALM approach to non-intrusively monitor relevant electrical appliances without an intrusive deployment of plug-load power meters (smart plugs), where a two-stage NIALM approach is completed. The core entity of the SHEMS prototype is based on a compact, cognitive, embedded IoT controller that connects IoT end devices, such as sensors and meters, and serves as a gateway in a smart house/smart building for residential DSM. As demonstrated and reported in this study, the established SHEMS prototype using the investigated two-stage NIALM approach is feasible and usable. Full article
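The transient-passing event detection shown in the article's Figure 6 amounts to flagging steps in the aggregate power signal. A deliberately simplified sketch follows; the threshold value and step logic are assumptions, not the paper's algorithm:

```python
def detect_events(power_w, threshold_w=30.0):
    """Flag appliance on/off events as steady-state power steps above a threshold.

    Compares consecutive aggregate-power samples and reports the index and sign
    of each step; threshold_w is an assumed value. A practical detector would
    additionally wait for the transient to settle ("transient-passing").
    """
    events = []
    for i in range(1, len(power_w)):
        delta = power_w[i] - power_w[i - 1]
        if abs(delta) >= threshold_w:
            events.append((i, "on" if delta > 0 else "off"))
    return events

total_power = [120, 121, 119, 520, 521, 520, 122, 121]  # kettle-like step in watts
print(detect_events(total_power))  # [(3, 'on'), (6, 'off')]
```

In the two-stage approach, the windows around such detected events would then be passed to the ANN for appliance recognition.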
(This article belongs to the Special Issue Advanced Sensing for Intelligent Transport Systems and Smart Society)
Show Figures

Figure 1
<p>Appliance load monitoring approaches: (<b>a</b>) Intrusive appliance load monitoring (IALM); (<b>b</b>) non-intrusive appliance load monitoring (NIALM). In contrast to IALM, NIALM conducted in a practical field of interest such as a smart house can draw several inferences such as appliance-level energy consumption from total (circuit-level) energy consumption acquired by only one minimal set of plug-panel current and voltage sensors (or alternatively retrieved from a single source such as a power utility-owned smart meter [<a href="#B21-sensors-21-02883" class="html-bibr">21</a>,<a href="#B22-sensors-21-02883" class="html-bibr">22</a>]) instead of deployed and installed plug-load power meters (smart plugs) for relevant individual electrical appliances.</p>
Full article ">Figure 2
<p>Developed NIALM approach over fog-cloud computing for smart home energy management in this study. Edge SEMC refers to the smart energy management controller considering edge computing.</p>
Full article ">Figure 3
<p>Developed smart home energy management system (SHEMS) prototype considering new two-stage NIALM over fog-cloud computing in this study, where the prototype is based on Tridium’s Niagara Framework<sup>®</sup>. The framework runtime is targeted for Java 8 SE compact3 profile compliant virtual machines (VMs). MATLAB<sup>®</sup> is used as the cloud platform here.</p>
Full article ">Figure 4
<p>Configuration of the Niagara Framework<sup>®</sup> that considers a supervisor PC/laptop and a single-networked, embedded JACE<sup>®</sup> controller for SHEMS in this study.</p>
Full article ">Figure 5
<p>Investigation of the presented NIALM approach: (<b>a</b>) Pipeline of the two-stage NIALM, in simple terms, not requiring a one-time intrusive period for the preliminary NIALM process [<a href="#B6-sensors-21-02883" class="html-bibr">6</a>]; (<b>b</b>) the two-stage NIALM, based on the Niagara Framework<sup>®</sup>, over fog-cloud computing.</p>
Full article ">Figure 6
<p>Detecting an appliance event in acquired total energy consumption through transient-passing event detection.</p>
Full article ">Figure 7
<p>Representative artificial neural networks (ANNs): (<b>a</b>) An NN is fully connected; (<b>b</b>) An NN considers dropout—a hyperparameter whose value is specified and used to control the learning process of an NN—that can prevent overfitting [<a href="#B46-sensors-21-02883" class="html-bibr">46</a>] (another way used to prevent overfitting is early stopping).</p>
Figure 7 Cont.">
Full article ">Figure 8
<p>Experimental setup of the developed SHEMS prototype utilizing the presented two-stage NIALM approach over fog-cloud computing for the purpose of residential DSM. An electrical network topology with four power line branches in parallel is also shown.</p>
Figure 8 Cont.">
Full article ">Figure 9
<p>Training trajectory of the developed feed-forward, multi-layer ANN in this experiment, where the training process terminates at the 141-th epoch.</p>
Full article ">Figure 10
<p>Remote deployment of the well-trained and -tested feed-forward ANN in cloud computing: (<b>a</b>) An executable Jar (Java ARchive) file was upgraded; (<b>b</b>) the upgraded executable Jar file is executed on the networked, embedded eSEMC as Edge AI.</p>
Figure 10 Cont.">
Full article ">Figure 11
<p>A user interface (UI) developed in Niagara Workbench for the established SHEMS prototype using the presented two-stage NIALM over fog-cloud computing in this study.</p>
Full article ">Figure 12
<p>Appliance event messages received, in a created LINE group, by a LINE-Notify mobile phone over the Internet for load recognition via on-line load monitoring in the presented two-stage NIALM approach for load management/residential DSM in this study.</p>
Full article ">
19 pages, 1381 KiB  
Article
Online Adaptive Prediction of Human Motion Intention Based on sEMG
by Zhen Ding, Chifu Yang, Zhipeng Wang, Xunfeng Yin and Feng Jiang
Sensors 2021, 21(8), 2882; https://doi.org/10.3390/s21082882 - 20 Apr 2021
Cited by 17 | Viewed by 3457
Abstract
Accurate and reliable motion intention perception and prediction are keys to the exoskeleton control system. In this paper, a motion intention prediction algorithm based on sEMG signal is proposed to predict joint angle and heel strike time in advance. To ensure the accuracy [...] Read more.
Accurate and reliable motion intention perception and prediction are keys to the exoskeleton control system. In this paper, a motion intention prediction algorithm based on the sEMG signal is proposed to predict joint angle and heel strike time in advance. To ensure the accuracy and reliability of the prediction algorithm, the proposed method designs the sEMG feature extraction network and the online adaptation network. The feature extraction utilizes the convolution autoencoder network combined with muscle synergy characteristics to get the high-compression sEMG feature to aid motion prediction. The adaptation network ensures the proposed prediction method can still maintain a certain prediction accuracy even if the sEMG signal distribution changes, by adjusting some parameters of the feature extraction network and the prediction network online. Ten subjects were recruited to collect surface EMG data from nine muscles on the treadmill. The proposed prediction algorithm can predict the knee angle 101.25 ms in advance with 2.36 degrees accuracy. The proposed prediction algorithm also can predict the occurrence time of initial contact 236±9 ms in advance. Meanwhile, the proposed feature extraction method can achieve 90.71±3.42% accuracy of sEMG reconstruction and can guarantee 73.70±5.01% accuracy even when the distribution of sEMG is changed without any adjustment. The online adaptation network enhances the accuracy of sEMG reconstruction of CAE to 87.65±3.83% and decreases the angle prediction error from 4.03 to 2.36 degrees. The proposed method achieves effective motion prediction in advance and alleviates the influence caused by the non-stationarity of sEMG. Full article
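Predicting a joint angle "in advance" means pairing the features at time t with the angle measured one horizon later. A minimal sketch of building such training pairs follows; the 6.75 ms step is inferred from the paper's 6.75/33.75/67.5/101.25 ms horizons, and the helper name is hypothetical:

```python
def make_prediction_pairs(angles, horizon_steps):
    """Pair each sample index with the joint angle `horizon_steps` samples later.

    With a 6.75 ms sample step (an assumption inferred from the reported
    horizons), horizon_steps = 15 corresponds to predicting 101.25 ms ahead.
    """
    return [(i, angles[i + horizon_steps])
            for i in range(len(angles) - horizon_steps)]

angles = list(range(100))        # stand-in joint-angle series
pairs = make_prediction_pairs(angles, 15)
print(pairs[0])                  # (0, 15): sample 0 is labelled with the angle 15 steps ahead
```

In the full pipeline, the index at each pair would be replaced by the compressed sEMG feature vector produced by the convolutional autoencoder.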
(This article belongs to the Special Issue Wearable Sensors and Systems for Rehabilitation)
Show Figures

Figure 1
<p>A sketch of the sensors setup and the exoskeleton system.</p>
Full article ">Figure 2
<p>The lower limb joint angle and special event for a gait.</p>
Full article ">Figure 3
<p>The structure of motion prediction algorithm with adaptation network.</p>
Full article ">Figure 4
<p>The structure of the feature extraction network. (<b>a</b>) Convolutional Auto Encoder; (<b>b</b>) The Structure of Encoder; (<b>c</b>) The rearrangement of sEMG channels.</p>
Full article ">Figure 5
<p>Similarity of synergy weights in different periods. Times 1∼5 represent consecutive time periods; each period is 6 min of data.</p>
Full article ">Figure 6
<p>The architecture of motion prediction network.</p>
Full article ">Figure 7
<p>The architecture of adaptation network.</p>
Full article ">Figure 8
<p>The raw and the reconstruction sEMG signals of the nine muscles. On the horizontal axis is time and on the vertical axis is myoelectric amplitude. (<b>a</b>) Rectus Femoris; (<b>b</b>) Vastus Medial; (<b>c</b>) Vastus Lateralis; (<b>d</b>) Tibialis Anterior; (<b>e</b>) Soleus and Semitendinosus; (<b>f</b>) Biceps Femoris; (<b>g</b>) Medial Gastrocnemius; (<b>h</b>) Lateral Gastrocnemius.</p>
Full article ">Figure 9
<p>Changes of loss with the proposed and two comparative convolutional autoencoder architectures. NMF rearrangement means the muscle channels of the sEMG image are rearranged according to the muscle synergy trick. No rearrangement means the muscle channels of the sEMG image are not rearranged. Random rearrangement means the muscle channels of the sEMG image are rearranged in random order.</p>
Full article ">Figure 10
<p>The results of motion prediction. (<b>a</b>) The results of predicted time to heel strike. The green curve is the prediction result, and the black curve is the actual time to heel strike. (<b>b</b>) The mean of maximum angle error for four groups of joint trajectory prediction.</p>
Full article ">Figure 11
<p>The mean and standard deviation results of four groups of joint trajectory prediction. The green curve is the prediction result of the joint trajectory. The black curve is the actual joint trajectory. All the results are drawn over one gait stride. (<b>a</b>) Prediction of joint angle 6.75 ms ahead of time; (<b>b</b>) Prediction of joint angle 33.75 ms ahead of time; (<b>c</b>) Prediction of joint angle 67.5 ms ahead of time; (<b>d</b>) Prediction of joint angle 101.25 ms ahead of time.</p>
Full article ">Figure 12
<p>The result adaptation of feature extraction and motion prediction network with and without exoskeleton. (<b>a</b>) The adaptation results of CAE; (<b>b</b>) The adaptation results of motion prediction; (<b>c</b>) The adaptation results of CAE with exoskeleton; (<b>d</b>) The adaptation results of motion prediction with exoskeleton. ‘Inter-subject test &amp; TL’ means the feature extraction and prediction network are trained using data of group A and tested using the data from group B. Meanwhile, the partial parameters of feature extraction and prediction network are online tuned by the adaptation network.</p>
Full article ">Figure 13
<p>The 101.25 ms advanced motion prediction and adaptation results for two comparative experiments on the online adaptation network. (<b>a</b>) Adjusting the parameters of the FC3 layer; (<b>b</b>) Adjusting the even-indexed parameters of the FC3 layer; (<b>c</b>) Adjusting the parameters of the FC4 layer; (<b>d</b>) The adaptation results of angle prediction. Different parameters chosen to be adjusted are in the motion prediction network. The ‘-a’, ‘-b’, and ‘-c’ in sub-figure (<b>d</b>) indicate that the adaptation network adjusts all the parameters of the FC3 layer, the even-indexed parameters of the FC3 layer, and the parameters of the FC4 layer, respectively.</p>
Full article ">
31 pages, 11468 KiB  
Article
Application of MEMS Sensors for Evaluation of the Dynamics for Cargo Securing on Road Vehicles
by Jozef Gnap, Juraj Jagelčák, Peter Marienka, Marcel Frančák and Mariusz Kostrzewski
Sensors 2021, 21(8), 2881; https://doi.org/10.3390/s21082881 - 20 Apr 2021
Cited by 28 | Viewed by 4056
Abstract
Safety is one of the key aspects of the successful transport of cargo. In the case of road transport, the dynamics of a vehicle during normal events such as braking, steering, and evasive maneuver are variable in different places in the vehicle. Several [...] Read more.
Safety is one of the key aspects of the successful transport of cargo. In the case of road transport, the dynamics of a vehicle during normal events such as braking, steering, and evasive maneuvers vary at different places in the vehicle. Several manufacturers provide different dataloggers with acceleration sensors, but the results are not comparable due to different sensor parameters, measurement ranges, sampling frequencies, data filtration, and evaluation of different periods of acceleration. The position of the sensor in the loading area is also important. The accelerations are not the same at all points in the vehicle. The article deals with the measurement of these dynamic events with MEMS sensors at selected points of a vehicle loaded with cargo, and with changes in dynamics after certain events that could occur during regular road transport, in order to analyze the possibilities for monitoring accelerations and the related forces acting on the cargo during transport. The article uses evaluation times of 80, 300, and 1000 ms for accelerations. With the measured values, it is possible to determine the places with a higher risk of cargo damage and not only to adjust the packaging and securing of the cargo, but also to modify the transport routes. Concerning the purposes of securing the cargo in relation to EN 12195-1 and the minimum values of forces for securing the cargo, we focused primarily on the places where the acceleration of 0.5 g was exceeded when analyzing the monitored route. There were 32 of these points in total, all of which were measured by a sensor located at the rear of the semi-trailer. In 31 cases, the limit of 0.5 g was exceeded for an 80-ms evaluation time, and in one case, the value of 0.51 g was reached in the transverse direction for a 300-ms evaluation time. Full article
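The 80/300/1000 ms evaluation times amount to averaging the raw acceleration over sliding windows of that length before comparing against the 0.5 g limit. A minimal sketch follows; the sampling rate, window handling, and function names are illustrative assumptions, not the article's implementation:

```python
def window_mean(acc_g, fs_hz, window_ms):
    """Mean acceleration (in g) over every sliding window of `window_ms` length."""
    n = max(1, round(fs_hz * window_ms / 1000.0))
    return [sum(acc_g[i:i + n]) / n for i in range(len(acc_g) - n + 1)]

def exceeds_limit(acc_g, fs_hz, window_ms, limit_g=0.5):
    """True if any window mean exceeds the limit (0.5 g in the article's analysis)."""
    return any(m > limit_g for m in window_mean(acc_g, fs_hz, window_ms))

# 100 Hz signal: a 100 ms spike at 0.8 g inside otherwise quiet data
signal = [0.1] * 20 + [0.8] * 10 + [0.1] * 20

print(exceeds_limit(signal, 100, 80))    # True: the spike is sustained over a full 80 ms window
print(exceeds_limit(signal, 100, 1000))  # False: no complete 1000 ms window fits this short record
```

This mirrors why short evaluation times flag more exceedances: a brief peak that dominates an 80 ms window is averaged away over 300 or 1000 ms.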
Show Figures
Figure 1
<p>A theoretical course of braking [<a href="#B74-sensors-21-02881" class="html-bibr">74</a>].</p>
Figure 2
<p>Illustration of applicable acceleration coefficients for road transport [<a href="#B75-sensors-21-02881" class="html-bibr">75</a>,<a href="#B76-sensors-21-02881" class="html-bibr">76</a>].</p>
Figure 3
<p>Effect of evaluation time on the course of braking obtained from sensor B during a semi-trailer’s braking.</p>
Figure 4
<p>Sensor positions in case of Series 1 braking tests.</p>
Figure 5
<p>A monitored transport route (map by OpenStreetMap).</p>
Figure 6
<p>Point dependency of individual microelectromechanical system (MEMS) sensors against sensor A with the evaluation times applied.</p>
Figure 7
<p>Series 1-Point dependency of individual MEMS sensors against sensor A.</p>
Figure 8
<p>Series 2-Point dependency of individual MEMS sensors against sensor A.</p>
Figure 9
<p>Series 3-Point dependency of individual MEMS sensors against sensor A.</p>
Figure 10
<p>Series 4-Point dependency of individual MEMS sensors against sensor A.</p>
Figure 11
<p>Series 5-Point dependency of individual MEMS sensors against sensor A.</p>
Figure 12
<p>Series 6-Point dependency of individual MEMS sensors against sensor A.</p>
Figure 13
<p>Series 7-Point dependency of individual MEMS sensors against sensor A.</p>
Figure 14
<p>Series 8-Point dependency of individual MEMS sensors against sensor A.</p>
Figure 15
<p>A comparison of mean fully developed deceleration (MFDD) in the case of a loaded and an empty vehicle.</p>
Figure 16
<p>Exceptional events during transport (map by OpenStreetMap).</p>
Figure 17
<p>Numbers of events per particular acceleration intervals.</p>
22 pages, 6902 KiB  
Article
A Smartcard-Based User-Controlled Single Sign-On for Privacy Preservation in 5G-IoT Telemedicine Systems
by Tzu-Wei Lin, Chien-Lung Hsu, Tuan-Vinh Le, Chung-Fu Lu and Bo-Yu Huang
Sensors 2021, 21(8), 2880; https://doi.org/10.3390/s21082880 - 20 Apr 2021
Cited by 18 | Viewed by 3894
Abstract
Healthcare is now an important part of daily life because of rising awareness of health management. Medical professionals can assess users' health conditions if they can access their information immediately. Telemedicine systems, which provide long-distance medical communication and services, are multi-functional remote medical services that can help bedridden patients in long-distance communication environments. As telemedicine systems work over public networks, preserving the privacy of sensitive transmitted information is important. One means of proving a user's identity is a user-controlled single sign-on (UCSSO) authentication scheme, which can establish a secure communication channel using authenticated session keys between the users and servers of telemedicine systems, without threats of eavesdropping, impersonation, etc., and allows patients to access multiple telemedicine services with a single pair of identity and password. In this paper, we propose a smartcard-based user-controlled single sign-on (SC-UCSSO) scheme for telemedicine systems that not only retains the above merits but also achieves privacy preservation and enhances security and performance compared to previous schemes, as proved with BAN logic and the automated validation of internet security protocols and applications (AVISPA). Full article
(This article belongs to the Collection Security, Trust and Privacy in New Computing Environments)
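The heart of any such scheme is that user and server, sharing a smartcard-held secret, independently derive the same authenticated session key from fresh nonces. A minimal HMAC-based sketch, whose function names and key-derivation choices are illustrative assumptions rather than the protocol proved in the paper:

```python
import hashlib
import hmac
import secrets

def register(password: str, server_secret: bytes) -> bytes:
    """Derive the verifier written to the smartcard at registration
    (illustrative; not the paper's exact construction)."""
    return hmac.new(server_secret, password.encode(), hashlib.sha256).digest()

def session_key(verifier: bytes, nonce_user: bytes, nonce_server: bytes) -> bytes:
    """Both ends derive the same session key from the shared verifier
    and the two exchanged nonces; fresh nonces yield a fresh key per
    session, which resists replay of old transcripts."""
    return hmac.new(verifier, nonce_user + nonce_server, hashlib.sha256).digest()

# One hypothetical session: fresh nonces, identical keys on both sides.
verifier = register("patient-password", b"server-master-secret")
n_u, n_s = secrets.token_bytes(16), secrets.token_bytes(16)
assert session_key(verifier, n_u, n_s) == session_key(verifier, n_u, n_s)
```

A real UCSSO protocol additionally authenticates the nonce exchange itself; this fragment only shows the shared-derivation step.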
Show Figures
Figure 1
<p>A general telemedicine system with asynchronous and synchronous telemedicine.</p>
Figure 2
<p>System structure of the proposed scheme.</p>
Figure 3
<p>Registration phase of the proposed scheme.</p>
Figure 4
<p>The authenticated key exchange phase of the proposed scheme.</p>
Figure 5
<p>Offline password change phase of the proposed scheme.</p>
Figure 6
<p>HLPSL specification of user.</p>
Figure 7
<p>HLPSL specification of server.</p>
Figure 8
<p>HLPSL specification of session role, environment role, and goals.</p>
Figure 9
<p>Results of AVISPA.</p>
Figure 10
<p>Computational complexity of server with varying number of users.</p>
Figure 11
<p>Computational complexity of user with varying number of servers.</p>
Figure 12
<p>Multi-function smart token.</p>
Figure 13
<p>Smartcard.</p>
Figure 14
<p>Interface of registration.</p>
Figure 15
<p>Interface of login.</p>
Figure 16
<p>Interface of choosing services.</p>
Figure 17
<p>Interface of account checking.</p>
22 pages, 2925 KiB  
Article
Heating Homes with Servers: Workload Scheduling for Heat Reuse in Distributed Data Centers
by Marcel Antal, Andrei-Alexandru Cristea, Victor-Alexandru Pădurean, Tudor Cioara, Ionut Anghel, Claudia Antal (Pop), Ioan Salomie and Nicolas Saintherant
Sensors 2021, 21(8), 2879; https://doi.org/10.3390/s21082879 - 20 Apr 2021
Cited by 5 | Viewed by 3614
Abstract
Data centers consume large amounts of energy to execute their computational workload and generate heat that is mostly wasted. In this paper, we address this problem by considering heat reuse in the case of a distributed data center that features IT equipment (i.e., servers) installed in residential homes to be used as a primary source of heat. We propose a workload scheduling solution for distributed data centers based on a constraint satisfaction model that optimally allocates workload on servers to reach and maintain the desired home temperature setpoint by reusing residual heat. We define two models to correlate the heat demand with the amount of workload to be executed by the servers: a mathematical model derived from thermodynamic laws and calibrated with monitored data, and a machine learning model able to predict the amount of workload a server must execute to reach a desired ambient temperature setpoint. The proposed solution was validated using monitored data from an operational distributed data center. The mathematical model of server heat and power demand achieves a correlation accuracy of 11.98%, while among the machine learning models the best correlation accuracy of 4.74% is obtained with a Gradient Boosting Regressor. Our solution also distributes the workload so that the temperature setpoint is met in a reasonable time, while the server power demand accurately follows the heat demand. Full article
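The correlation between heat demand and workload can be illustrated with a toy steady-state balance in which server heat input a·P offsets envelope losses b·(T_set − T_out). The coefficients, field names, and the greedy allocation below are simplifying assumptions standing in for the paper's calibrated thermodynamic model and constraint-satisfaction scheduler:

```python
def power_for_setpoint(t_set, t_out, a=0.02, b=0.01):
    """Steady-state server power P such that heat input balances losses:
    a * P = b * (t_set - t_out). a and b are illustrative constants."""
    return max(0.0, b * (t_set - t_out) / a)

def schedule(homes, capacity):
    """Greedily allocate total compute power (W) across homes, serving
    the homes farthest below their setpoint first (a stand-in for the
    paper's constraint-satisfaction model)."""
    alloc = {}
    remaining = capacity
    for h in sorted(homes, key=lambda h: h["t_room"] - h["t_set"]):
        p = min(power_for_setpoint(h["t_set"], h["t_out"]), remaining)
        alloc[h["id"]] = p
        remaining -= p
    return alloc
```

When total demand exceeds capacity, the coldest homes are satisfied first and the remainder is throttled, which is one plausible prioritization; the paper's optimizer solves this allocation jointly rather than greedily.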
Show Figures
Figure 1
<p>Distributed DC and computing power-based heating.</p>
Figure 2
<p>Thermal aware workload scheduling optimization subproblems.</p>
Figure 3
<p>Measurements used in the machine learning processes.</p>
Figure 4
<p>Test case distributed DC.</p>
Figure 5
<p>Data pre-processing pipeline.</p>
Figure 6
<p>Example of relevant data samples obtained using the pre-processing pipeline.</p>
Figure 7
<p>Power prediction for temperature change.</p>
Figure 8
<p>Initial conditions: the 4 QRad server heaters’ total power demand and initial temperatures.</p>
Figure 9
<p>GBR heat demand estimation model power prediction.</p>
Figure 10
<p>CPU active cores due to task scheduling on the <b>left</b> and CPU power demand estimated and predicted on the <b>right</b>.</p>
Figure 11
<p>QRad Simulation Results: temperature evolution (<b>left</b>) and power demand evolution (<b>right</b>).</p>
11 pages, 1611 KiB  
Communication
Thermionic Electron Beam Current and Accelerating Voltage Controller for Gas Ion Sources
by Jarosław Sikora, Bartosz Kania and Janusz Mroczka
Sensors 2021, 21(8), 2878; https://doi.org/10.3390/s21082878 - 20 Apr 2021
Cited by 4 | Viewed by 2645
Abstract
Thermionic emission sources are key components of the electron impact gas ion sources used in measuring instruments such as mass spectrometers, ionization gauges, and apparatus for ionization cross-section measurements. The repeatability of the measurements taken with such instruments depends on the stability of the ion current, which is a function, among other things, of the electron beam current and the electron accelerating voltage. In this paper, a laboratory controller of the thermionic electron beam current and accelerating voltage is presented, based on the implementation of a digital algorithm. The average value of the percentage standard deviation of the emission current is 0.021%, and the maximum change in the electron accelerating voltage versus the emission current is smaller than 0.011% over the full operating range of the emission current. Its application as a trap-current- or emission-current-regulated ion source power supply could be useful in many measuring instruments, such as microelectromechanical system (MEMS) mass spectrometers serving as universal gas sensors, where a stable emission current and electron energy are needed. Full article
(This article belongs to the Section Sensing and Imaging)
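The 0.021% stability figure above is a coefficient of variation of repeated emission-current readings. A minimal sketch of that metric, assuming a population (not sample) standard deviation and a helper name of my own:

```python
import math

def percent_std(samples):
    """Percentage standard deviation (100 * sigma / mean) of a series
    of emission-current readings."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return 100.0 * math.sqrt(var) / mean
```

For example, readings of 9 and 11 mA have a mean of 10 mA and a standard deviation of 1 mA, i.e., a percentage standard deviation of 10%.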
Show Figures
Figure 1
<p>A simplified diagram of the control system electrical circuit. <span class="html-italic">V<sub>c</sub></span> is the cathode voltage, <span class="html-italic">I<sub>e</sub></span> is the emission current, <span class="html-italic">V<sub>a</sub></span> is the anode circuit supply voltage, <span class="html-italic">V</span> is the electron accelerating voltage. <span class="html-italic">R</span><sub>1</sub> = 12 kOhm; <span class="html-italic">R</span><sub>2</sub> = 2.4 kOhm; <span class="html-italic">R</span><sub>3</sub> = 10 kOhm; <span class="html-italic">R</span><sub>4</sub> = 120 kOhm; <span class="html-italic">R</span><sub>5</sub> = <span class="html-italic">R</span><sub>6</sub> = 570 Ohm. The operational amplifier A<sub>1</sub> and differential amplifier A<sub>3</sub> are supplied from the voltage source of +/−12 V/5 A, and the operational amplifier A<sub>2</sub> is supplied from the voltage source of 125 V/100 mA. All supplied voltage sources are referenced to the ground.</p>
Figure 2
<p>A block diagram of the designed control system.</p>
Figure 3
<p>The general algorithm of the presented control system.</p>
Figure 4
<p>Results of the feedforward control of the electron accelerating voltage <span class="html-italic">V</span>.</p>
Figure 5
<p>Plot of the emission current <span class="html-italic">I<sub>e</sub></span> versus the electron accelerating voltage <span class="html-italic">V</span>.</p>
Figure 6
<p>Standard deviation of the emission current versus its intensity; the accelerating voltage <span class="html-italic">V</span> = 100 V.</p>
Figure 7
<p>Percentage standard deviation of the emission current versus its intensity.</p>
40 pages, 30857 KiB  
Review
Chemical Gas Sensors: Recent Developments, Challenges, and the Potential of Machine Learning—A Review
by Usman Yaqoob and Mohammad I. Younis
Sensors 2021, 21(8), 2877; https://doi.org/10.3390/s21082877 - 20 Apr 2021
Cited by 123 | Viewed by 16103
Abstract
Nowadays, there is increasing interest in fast, accurate, and highly sensitive smart gas sensors with excellent selectivity, boosted by the high demand for environmental safety and healthcare applications. Significant research has been conducted to develop sensors based on novel, highly sensitive, and selective materials. Computational and experimental studies have been explored in order to identify the key factors in providing the maximum number of active sites for gas molecule adsorption, including bandgap tuning through nanostructures, metal/metal oxide catalytic reactions, and nanojunction formation. However, great challenges remain, specifically in terms of selectivity, which raises the need to combine interdisciplinary fields to build smarter, high-performance gas/chemical sensing devices. This review discusses the current major gas sensing performance-enhancing methods, their advantages, and their limitations, especially in terms of selectivity and long-term stability. The discussion then establishes a case for the use of smart machine learning techniques, which offer effective data processing approaches, for the development of highly selective smart gas sensors. We highlight the effectiveness of static, dynamic, and frequency domain feature extraction techniques. Additionally, cross-validation methods are also covered; in particular, the use of k-fold cross-validation to accurately train a model according to the available datasets is discussed. We summarize different chemiresistive and FET gas sensors, highlight their shortcomings, and then propose the potential of machine learning as a possible and feasible option. The review concludes that machine learning can be very promising for building the future generation of smart, sensitive, and selective sensors. Full article
(This article belongs to the Special Issue Electronic Tongues, Electronic Noses, and Electronic Eyes)
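The k-fold cross-validation discussed in the review, which splits a limited sensor dataset into k rotating train/test folds, can be sketched in a framework-agnostic way; `fit`/`predict` below are placeholders for any classifier, and the helper names are my own:

```python
def k_fold_indices(n_samples, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

def cross_val_accuracy(fit, predict, X, y, k=5):
    """Mean classification accuracy over k folds for any fit/predict pair."""
    scores = []
    for train, test in k_fold_indices(len(X), k):
        model = fit([X[i] for i in train], [y[i] for i in train])
        preds = [predict(model, X[i]) for i in test]
        scores.append(sum(p == y[i] for p, i in zip(preds, test)) / len(test))
    return sum(scores) / len(scores)
```

With small gas-sensor datasets, the choice of k trades bias against variance: k equal to the sample count gives leave-one-out validation, while small k withholds more data from training in each fold.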
Show Figures
Figure 1
<p>Different kinds of sensing materials with micro/nanostructures, their advantages (in black color), and limitations (in red color), indicating the need of hybridization/composition, doping, and p–n junction formation. In particular, the figure suggests that the decoration/doping of metal/hetroatom over the base materials will enhance the catalytic reaction for specific gases and form the charge accumulation depletion region to improve the sensing performances. Reproduced from multiple sources with permission from References [<a href="#B18-sensors-21-02877" class="html-bibr">18</a>,<a href="#B41-sensors-21-02877" class="html-bibr">41</a>,<a href="#B42-sensors-21-02877" class="html-bibr">42</a>,<a href="#B46-sensors-21-02877" class="html-bibr">46</a>,<a href="#B80-sensors-21-02877" class="html-bibr">80</a>,<a href="#B81-sensors-21-02877" class="html-bibr">81</a>,<a href="#B82-sensors-21-02877" class="html-bibr">82</a>,<a href="#B83-sensors-21-02877" class="html-bibr">83</a>,<a href="#B84-sensors-21-02877" class="html-bibr">84</a>,<a href="#B85-sensors-21-02877" class="html-bibr">85</a>,<a href="#B86-sensors-21-02877" class="html-bibr">86</a>,<a href="#B87-sensors-21-02877" class="html-bibr">87</a>,<a href="#B88-sensors-21-02877" class="html-bibr">88</a>,<a href="#B89-sensors-21-02877" class="html-bibr">89</a>,<a href="#B90-sensors-21-02877" class="html-bibr">90</a>,<a href="#B91-sensors-21-02877" class="html-bibr">91</a>]. 
Copyright 2016 Elsevier [<a href="#B18-sensors-21-02877" class="html-bibr">18</a>], copyright 2019 Elsevier [<a href="#B41-sensors-21-02877" class="html-bibr">41</a>], copyright 2016 Elsevier [<a href="#B42-sensors-21-02877" class="html-bibr">42</a>], copyright 2020 ACS [<a href="#B46-sensors-21-02877" class="html-bibr">46</a>], copyright 2020 [<a href="#B80-sensors-21-02877" class="html-bibr">80</a>], copyright 2019 Elsevier [<a href="#B92-sensors-21-02877" class="html-bibr">92</a>], copyright 2006 ACS [<a href="#B82-sensors-21-02877" class="html-bibr">82</a>], copyright 2007 ACS [<a href="#B83-sensors-21-02877" class="html-bibr">83</a>], copyright 2013 Elsevier [<a href="#B84-sensors-21-02877" class="html-bibr">84</a>], copyright 2014 Elsevier [<a href="#B85-sensors-21-02877" class="html-bibr">85</a>], copyright 2016 Elsevier [<a href="#B86-sensors-21-02877" class="html-bibr">86</a>], copyright 2018 ACS [<a href="#B87-sensors-21-02877" class="html-bibr">87</a>], copyright 2015 Elsevier [<a href="#B88-sensors-21-02877" class="html-bibr">88</a>], copyright 2014 Elsevier [<a href="#B89-sensors-21-02877" class="html-bibr">89</a>], copyright 2013 Nature [<a href="#B90-sensors-21-02877" class="html-bibr">90</a>], and copyright 2012 ACS [<a href="#B91-sensors-21-02877" class="html-bibr">91</a>].</p>
Figure 2
<p>The computational investigation using DFT calculations: (<b>A</b>) DFT calculation for phosphorene: (a-1) shows adsorption of NO<sub>2</sub> molecule on phosphorene surface with oxygen atoms pointing downward and the corresponding generated electron and hole clouds; (a-2) left axis indicates the histogram graph (red bars) with adsorption energies of different gases at fixed 20 ppb concentration; (a-3) displays the change in phosphorene band structure after adsorption of different gases including NO<sub>2</sub>, H<sub>2</sub>, H<sub>2</sub>S, and CO. A clear change on the energy level of conduction band can be observed with the NO<sub>2</sub> adsorption on phosphorene surface, reproduced with permission from [<a href="#B94-sensors-21-02877" class="html-bibr">94</a>], copyright 2015 Nature. (<b>B</b>) DFT calculations of boron (B)-, aluminum (Al)-, and gallium-(Ga) doped graphene for NO<sub>2</sub> detection: (b-1) illustrates a schematic for the NO<sub>2</sub> molecule adsorption on all three kinds of graphene surfaces—clear and strong adsorption of NO<sub>2</sub> molecule on Al-doped graphene can be seen (middle image); (b-(2–4)) show DOS for all kinds of graphene when exposed to NO<sub>2</sub>—Al-doped graphene displayed maximum change at Fermi energy level suggesting its higher sensing ability toward NO<sub>2</sub> molecule (b-3). Reproduced with permission from [<a href="#B95-sensors-21-02877" class="html-bibr">95</a>], copyright 2016 Elsevier.</p>
Figure 3
<p>Computational analysis of MoS<sub>2</sub> and Au-doped MoS<sub>2</sub> for gas sensing using DFT calculations: (<b>A</b>) The charge transfer between pure MoS<sub>2</sub> sheet and different target molecules; maximum charge transfer of 0.10 e can be seen for NO<sub>2</sub>, which is quite low, indicating the significance of heteroatom doping (reproduced with permission from [<a href="#B99-sensors-21-02877" class="html-bibr">99</a>], copyright 2013 Springer Nature). (<b>B</b>) The DFT results for Au-doped MoS<sub>2</sub> sheet: (b-1) a schematic illustration of C<sub>2</sub>H<sub>4</sub> and C<sub>2</sub>H<sub>6</sub> molecule on Au-doped MoS<sub>2</sub> with the corresponding bond length distance; (b-3,4) the DOS graphs for both the molecules before and after adsorption. A relatively larger change at Fermi energy level was found for C<sub>2</sub>H<sub>4</sub>, indicating better sensitivity of Au-doped MoS<sub>2</sub> towards C<sub>2</sub>H<sub>4</sub>. Reproduced with permission from [<a href="#B80-sensors-21-02877" class="html-bibr">80</a>], copyright 2020 Front. Mater.</p>
Figure 4
<p>Computational study of metal oxide gas sensors: (<b>A</b>) NO<sub>2</sub>-sensing mechanism on WO<sub>3</sub> surface and DOS graph before and after exposure of NO<sub>2</sub>: (a-1) The dissociation of NO<sub>2</sub> molecule to NO when interacting with WO<sub>3</sub> surface while leaving behind oxygen adsorbent. (a-2) The formation of NO<sub>2</sub> from generated NO after reacting with one-half oxygen in the environment. This chain cycle significantly improves the oxygen adsorbent population consequently enhances the sensor response. (a-3) The DOS graph. A clear shift can be observed after NO exposure in the magnified image (reproduced with permission from [<a href="#B100-sensors-21-02877" class="html-bibr">100</a>], copyright 2014 Elsevier). (<b>B</b>) (b-1) A schematic image of Al-doped ZnO structure; (b-2) the change in Fermi energy level before and after Al doping in ZnO structure (reproduced with permission from [<a href="#B101-sensors-21-02877" class="html-bibr">101</a>], copyright 2013 RSC).</p>
Figure 5
<p>Experimental results of graphene-based gas sensors: (<b>A</b>) Ag NPs decorated on sulfonated reduced graphene oxide (Ag-S-rGO) and their sensing performance for NO<sub>2</sub> and NH<sub>3</sub> detection: (a-1) SEM image of Ag-S-rGO. Uniformly distributed Ag NPs can be seen on S-rGO surface. (a-2) Sensor response graph towards different concentrations of NO<sub>2</sub> gas indicating good sensitivity, fast response/recovery, and linear change in sensor response at different concentrations. (a-3) Response graph to different gases/compounds including 100 ppm NO<sub>2</sub>, 1 mL (13 ppt) of NH<sub>3</sub>, 1 mL (46 ppt) of methanol, 1 mL (32 ppt) of ethanol, 1 mL (18 ppt) of toluene, 40% RH, 60% RH, and 80% RH. Reproduced with permission from [<a href="#B103-sensors-21-02877" class="html-bibr">103</a>], copyright 2014 ACS. (<b>B</b>) The results for Au @ graphene to detect H<sub>2</sub> gas: (b-1) SEM image of Au@graphene; inset shows the schematic image of the device. (b-2) The response of pristine graphene and Au@graphene to 500 ppm H<sub>2</sub> gas when 1 V and 60 V DC was applied. Sensor shows the maximum response when 60 V was applied, indicating the importance of biasing voltage. (b-3) A cross-sensitivity graph, suggesting very poor selectivity of the fabricated sensor. Reproduced with permission from [<a href="#B104-sensors-21-02877" class="html-bibr">104</a>], copyright 2019 RSC.</p>
Figure 6
<p>Representation of the sensing performances of 3D graphene: (<b>A</b>) Sensing results of B- and N-doped RGOH: (a-1) Schematic diagram of the fabricated sensor device with SEM images of highly porous B- and N-doped graphene. (a-2,3) The response of B-RGOH and N-RGOH at different concentration levels of NO<sub>2</sub> gas. Results show that N-RGOH revealed a good response at lower NO<sub>2</sub> concentration. (a-4) Responses of the B- and N-RGOH to 800 ppb NO<sub>2</sub>, 80% RH, 1000 ppm CO<sub>2</sub>, 100 ppm NH<sub>3</sub>, saturated methanol, ethanol, and acetone vapors. Reproduced with permission from [<a href="#B39-sensors-21-02877" class="html-bibr">39</a>], copyright 2019 ACS. (<b>B</b>) The sensing performances of Pt decorated over highly porous 3D graphene: (b-1) An optical and schematic image of fabricated sensor device. (b-2) SEM image of highly porous 3D graphene; inset shows the TEM image of Pt NP. (b-3) The symmetric response of Pt-3D graphene to various H<sub>2</sub> concentration levels at 200 °C. Reproduced with permission from [<a href="#B81-sensors-21-02877" class="html-bibr">81</a>], copyright 2019 Elsevier.</p>
Figure 7
<p>Demonstration of the sensing results for physically and chemically synthesized MoS<sub>2</sub> sheets: (<b>A</b>) Sensing properties of Pt NPs decorated on physically grown MoS<sub>2</sub> for NH<sub>3</sub> and H<sub>2</sub>S detection: (a-1) SEM image of monolayer MoS<sub>2</sub> decorated with Pt NPs. (a-2) The formation of Schottky barrier between Pt–MoS<sub>2</sub> and barrier height. (a-3,4) Sensor response towards the NH<sub>3</sub> and H<sub>2</sub>S. Reproduced with permission from [<a href="#B107-sensors-21-02877" class="html-bibr">107</a>], copyright 2018 Royal society. (<b>B</b>) The sensing performance of Pt NPs decorated on the chemically synthesized MoS<sub>2</sub> sheets towards humidity: (b-1) SEM image of Pt NPs decorated on MoS<sub>2</sub>. The inset reveals a higher magnified image. (b-2) Sensor response towards different levels of relative humidity. The sensor showed a good response with full recovery, revealing that a chemically synthesized MoS<sub>2</sub> is a promising candidate. (b-3) Stability test was checked after 1.5 months, and degradation in sensor response was observed. Reproduced with permission from [<a href="#B92-sensors-21-02877" class="html-bibr">92</a>], copyright 2018 IOP.</p>
Figure 8
<p>Metal oxide-based gas sensors: (<b>A</b>) Pd NPs decorated on the single SnO<sub>2</sub> NW for O<sub>2</sub> and H<sub>2</sub> detection: (a-1) The formation of the depletion region on the nanowire (NW) surface with the Pt NP deposition. As shown in <a href="#sensors-21-02877-f008" class="html-fig">Figure 8</a> (a-1), the depletion region facilitated in increasing the oxygen adsorbent on the sensing material and consequently improved the sensor performance. (a-2) The Pt NP deposition rate. (b-3) IDS was measured during Au deposition. The onset of the large current increased beyond ≈4 × 10<sup>3</sup> s can be attributed to the percolated pathways generated through excessive Au particles. (a-4) The sensing performances of pure SnO<sub>2</sub> and Pd–SnO<sub>2</sub> NW to O<sub>2</sub> and H<sub>2</sub>. Reproduced with permission from [<a href="#B109-sensors-21-02877" class="html-bibr">109</a>], copyright 2005 ACS. (<b>B</b>) Sensing performance of CuO@SnO<sub>2</sub> core–shell structure for formaldehyde detection. (b-1) Schematic diagram of the fabrication process. (b-2) SEM image of core–shell NW structure. (b-3) Sensor response with different shell thickness; SnO<sub>2</sub> with 24 nm thick shell revealed maximum response. (b-4) Selectivity graph. The inset revealed the TEM image of the core–shell NW structure. Reproduced with permission from [<a href="#B111-sensors-21-02877" class="html-bibr">111</a>], copyright 2019 Elsevier.</p>
Figure 9
<p>Metal oxide-based gas sensors: (<b>A</b>) Pd/Au NPs decorated on the SnO<sub>2</sub> nanosheets for temperature-dependent acetone and HCHO detection: (a-1) Synthesis process of Pd/Au NPs decorated on the SnO<sub>2</sub> nanosheets. (a-2) The SEM image for the Pd/Au-decorated SnO<sub>2</sub> nanosheets. (a-3) The temperature-dependent response of the as-fabricated sensors: (I) sensing performance of SnO<sub>2</sub>, Pd@SnO<sub>2</sub>, Au@SnO<sub>2</sub>, and Pd/Au@SnO<sub>2</sub> to 50 ppm acetone, with Pd/Au@SnO<sub>2</sub> demonstrating maximum response at 250 °C; (II) Pd/Au@SnO<sub>2</sub> sensor response towards various compounds (50 ppm) at different OT, showing maximum response for HCHO @ 110 °C and acetone @ 250 °C. (a-4 (I, II)) The sensor response @ 250 °C and 110 °C for a wide range of acetone and HCHO concentration levels, respectively. (a-4 (III)) The responses toward 1 ppm acetone compared to 1 ppm of the other common interfered biomarker gases. Reproduced with permission from [<a href="#B57-sensors-21-02877" class="html-bibr">57</a>], copyright 2019 Elsevier. (<b>B</b>) Ag@Pt core–shell NSs on the ZnO NWs for HCHO detection: (b-1) Fabricated chemiresistive sensor device. (b-2) SEM image of Ag/Pt core–shell on ZnO NWs. The inset reveals the magnified image of Ag@Pt core–shell NSs. (b-3) Sensor response by varying the Pt and Ag content ratio. The Pt60 and Ag40 showed a maximum response at 280 °C and were selected for further measurements. (b-4) Optimized sensor response toward different concentration levels of HCHO. (b-5) Sensor response to other compounds. Reproduced with permission from [<a href="#B58-sensors-21-02877" class="html-bibr">58</a>], copyright 2020 ACS.</p>
Figure 10
<p>Overall machine learning process from sensors response, data processing, and model training to prediction accuracy.</p>
Figure 11
<p>Illustration of hold-out and <span class="html-italic">k</span>-fold cross-validation techniques.</p>
Figure 12
<p>Early fire detection in smart homes using machine learning: (a-1) overall system overview, (a-2) whole hardware and experimental setup, (a-3) sensor measurements taken under usual conditions, (a-4) sensor measurements taken under extreme conditions, and (a-5) prediction accuracy histogram before and after replacing the missing values with the mean values computed from the training data. Reproduced with permission from [<a href="#B73-sensors-21-02877" class="html-bibr">73</a>], copyright 2020 IEEE.</p>
Figure 13
<p>Smart sensing using machine learning: (<b>A</b>) Sensing of several compounds using single graphene-based chemiresistive sensor: (a-1) Typical response of chemiresistive sensor, regions I, baseline; II response time; and III, recovery. It indicates the potential features that can be extracted including the maximum change in resistance (ΔR), area of the response (red shaded area), and area of the recovery (green shaded area). (a-2) Normalized response for all compounds, ready for feature extraction. (a-3) PCA scores for the 11 compounds revealed small overlapping. (a-4) Different classifier prediction accuracy histogram, reproduced with permission from [<a href="#B123-sensors-21-02877" class="html-bibr">123</a>], copyright 2016 ACS. (<b>B</b>) e-nose system for smart detection of different compounds under dry and humid air: (b-1) Optical image for all the sensors. (b-2,3) Sensor response under dry and humid air, respectively. No. 9 and no. 71 sensors were not affected by humidity. (b-4,5) Corresponding PCA results of all sensor data in air and humid environment; PCA also confirmed the little humidity effect on no. 9 and no. 71 sensors. Reproduced with permission from [<a href="#B130-sensors-21-02877" class="html-bibr">130</a>], copyright 2017 MDPI.</p>
Figure 14
<p>(<b>A</b>) e-nose system consisting of analog and digital sensors for detection of different compounds: (a-1) Raw data of analog sensors to acetone under humid conditions. (a-2) Raw data of digital sensors after exposure to acetone under humid conditions. (a-3) PCA score plot showing the effect of humidity. The measurements are labeled by condition and target compound. (a-4) LDA scores plot for the discrimination of compounds and concentrations of the measurements performed under dry and humid conditions. Reproduced with permission from [<a href="#B132-sensors-21-02877" class="html-bibr">132</a>], copyright 2019 ACS. (<b>B</b>) Smart sensing of different compounds using thermal fingerprints of a single sensor based on Pt NPs@SnO<sub>2</sub> NWs. (b-1) SEM image of Pt NPs decorated on SnO<sub>2</sub> NWs. (b-2) Higher magnified TEM image of single SnO<sub>2</sub> NW decorated with Pt NPs. (b-3) Dynamic response during the exposure of different ethanol concentrations @ 250 °C. (b-4) Sensor response to ethanol at different operating temperatures. (b-5) Three-dimensional plot of the principal components revealing excellent discrimination among different compounds. Reproduced with permission from [<a href="#B133-sensors-21-02877" class="html-bibr">133</a>], copyright 2019 Elsevier.</p>
Figure 15
<p>(<b>A</b>) Food classification using an array of 20 chemiresistive sensors and machine learning: (a-1) Schematic diagram of functionalized carbon nanotube (CNT) chemiresistive device. (a-2) Sensing response for S4, S5, S6, and S20 toward five kinds of cheeses. The response is represented as a change in conductance normalized to the conductance at the start of the exposure (ΔG/G0). Each exposure started at t = 60 s and ends at t = 180 s (marked by dashed vertical lines used as features for KNN training). Each response was an average of 40 separate sensing experiments. The shaded area represents the standard deviation of the response. (a-3) PCA of extracted features from the five-cheese dataset showing the first two principal components. (a-4) Top 16 overall most important features in the five-cheese, wherein (t-1) demonstrates the prediction accuracy toward different kinds of cheese using KNN and f-RF classifiers. Reproduced with permission from [<a href="#B135-sensors-21-02877" class="html-bibr">135</a>], copyright 2019 ACS.</p>
Full article ">Figure 16
<p>(<b>A</b>,<b>B</b>) summarize the detection results for different VOCs using chemiresistive devices and machine learning. (a-1) Schematic diagram of graphene nanoribbon sensors. (a-2) Real-time resistance change response toward different amines and alcohols. (a-3) 3D LDA graph representation with 100% accuracy. Reproduced with permission from [<a href="#B129-sensors-21-02877" class="html-bibr">129</a>], copyright 2020 ACS. (b-1) Schematic diagram of fabricated hollow SnO<sub>2</sub> sphere-based chemical sensor; inset presents the SEM image of synthesized hollow spheres. (b-2) Real-time resistance change response to different chemicals at various concentration levels. (b-3) Average accuracy bar graph for each model. (b-4) Representation of concentration prediction for each chemical. Reproduced with permission from [<a href="#B126-sensors-21-02877" class="html-bibr">126</a>], copyright 2020 Elsevier.</p>
Full article ">Figure 17
<p>Smart gas sensing using FET-based devices and machine learning. (<b>A</b>) Modified single Si NW-based FET device for VOC detection: (a-1) Schematic diagram of the fabricated FET device. (a-2) Typical IDS response curve of the FET device labeled with the extracted features (V<sub>th</sub>, I<sub>on</sub>, hole mobility, and SS) for data processing. (a-3) Variations in (a) V<sub>th</sub>, (b) μ<sub>h</sub>, (c) SS, and (d) I<sub>on</sub> of sensor S4 upon exposure to various VOCs at a concentration of pa/po = 0.08. Reproduced with permission from [<a href="#B137-sensors-21-02877" class="html-bibr">137</a>], copyright 2014 ACS. (<b>B</b>) Metal catalyst decorated on a CNT-based FET device for the detection of purine compounds: (b-1) Optical and schematic images of the fabricated device. The inset reveals the SEM image of CNTs decorated with NPs. (b-2) Pt NP-decorated NTFET response with and without caffeine (1 mM) solutions. Selected features (11 in total) were calculated from the NTFET curves before and after exposure to caffeine solutions. (b-3) LDA plots for purine compound discrimination. Reproduced with permission from [<a href="#B139-sensors-21-02877" class="html-bibr">139</a>], copyright 2019 ACS.</p>
Full article ">Figure 18
<p>Pristine graphene and ALD-RuO<sub>2</sub>-based GFET devices for the detection of different compounds under dry and humid environments. (a-1) Schematic illustration of the extracted features for data processing: movement of the charge-neutral point (Np), variation in the hole branch, variation in the electron branch, and variation in the height of the charge Np. (a-2) 3D gas-sensing patterns of the binary gas mixtures projected onto a representative 2D plot. The gas-sensing patterns are grouped by light blue colored regions, and the corresponding background relative humidity (RH) levels are labeled. (a-3 (<b>I</b>)) Training accuracy and the training loss history from results using the pristine graphene; (<b>II</b>) confusion matrix of the pristine GFET. (a-4 (<b>I</b>)) Training accuracy and the training loss from results using the ALD-RuO<sub>2</sub> GFET; (<b>II</b>) confusion matrix of the ALD-RuO<sub>2</sub> GFET with 100% true values. (a-5) Normalized feature importance for the eight tested features. Reproduced with permission from [<a href="#B140-sensors-21-02877" class="html-bibr">140</a>], copyright 2020 Springer Nature.</p>
Full article ">Figure 19
<p>Smart breath analyzers using chemiresistive devices and machine learning. (<b>A</b>) Lung cancer detection using functionalized Au NPs from exhaled breath: (a-1) Optical image of the sensor array; SEM image of Au NPs. (a-2) Breath sample collection and sensing setup. (a-3) Functionalized Au NP sensor response towards healthy subjects (filled symbols) and lung cancer patients (empty symbols). The significant change in response can be seen from the graph. (a-4) PCA of the dataset of real and simulated breath. Reproduced with permission from [<a href="#B146-sensors-21-02877" class="html-bibr">146</a>], copyright 2020 Nature. (<b>B</b>) Functionalized Au NPs on a flexible substrate were used to detect ovarian carcinoma from exhaled breath. (b-1) Schematic diagram of the fabricated sensor device with an illustration of bending upon exposure to a VOC. (b-2) Normalized strain response produced in different ligands under various VOCs. (b-3) Extracted feature graph for data processing. (b-4) Separation of the OC-positive and control groups (OC = ovarian cancer) using LDA. Reproduced with permission from [<a href="#B148-sensors-21-02877" class="html-bibr">148</a>], copyright 2015 ACS.</p>
Full article ">Figure 20
<p>Smart breath analyzers using FET devices and machine learning: (<b>A</b>) Surface-modified single Si NW FET device for cancer detection: (a-1) Schematic diagram of the fabricated sensor. (a-2) Variation in the IDS curve upon exposure to increasing concentrations of VOCs. (a-3) CV1 values resulting from DFA analysis of the breath samples: gastric cancer vs. control, early stages vs. advanced stages, smokers vs. nonsmokers, H. pylori-positive vs. H. pylori-negative. Reproduced with permission from [<a href="#B141-sensors-21-02875" class="html-bibr">141</a>], copyright 2015 ACS. (<b>B</b>) An ionic liquid-functionalized CNT-based FET sensing array was used for the detection of different VOCs: (b-1) Schematic diagram of the fabricated array. (b-2,b-3) PCA results towards different VOCs. (b-4) Stability of the ionic liquid over several days. Reproduced with permission from [<a href="#B149-sensors-21-02875" class="html-bibr">149</a>], copyright 2018 ACS.</p>
Full article ">
19 pages, 7381 KiB  
Article
Development of Piezoelectric Energy Harvester System through Optimizing Multiple Structural Parameters
by Hailu Yang, Ya Wei, Weidong Zhang, Yibo Ai, Zhoujing Ye and Linbing Wang
Sensors 2021, 21(8), 2876; https://doi.org/10.3390/s21082876 - 20 Apr 2021
Cited by 20 | Viewed by 5269
Abstract
Road power generation technology is of great significance for constructing smart roads. With a high electromechanical conversion rate and high bearing capacity, the stack piezoelectric transducer is one of the most widely used structures in road energy harvesting to convert mechanical energy into electrical energy. [...] Read more.
Road power generation technology is of great significance for constructing smart roads. With a high electromechanical conversion rate and high bearing capacity, the stack piezoelectric transducer is one of the most widely used structures in road energy harvesting to convert mechanical energy into electrical energy. To further improve the energy generation efficiency of this type of piezoelectric energy harvester (PEH), this study theoretically and experimentally investigated the influences of connection mode, number of stack layers, ratio of height to cross-sectional area, and number of units on the power generation performance. Two types of PEHs were designed and verified using a laboratory accelerated pavement testing system. The findings of this study can guide the structural optimization of PEHs to meet different purposes of sensing or energy harvesting. Full article
(This article belongs to the Special Issue Piezoelectric Energy Harvesting Sensors and Their Applications)
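How the connection mode trades voltage against charge can be seen from the textbook relations for a stack of n identical piezoelectric layers under the same axial force. This is an idealized sketch, not the paper's derivation: each layer is assumed to contribute a charge Q_0 at an open-circuit voltage V_0.

```latex
% Series connection: layer voltages add, charge equals that of one layer
V_{\mathrm{ser}} = n V_0, \qquad Q_{\mathrm{ser}} = Q_0
% Parallel connection: layer charges add at the single-layer voltage
V_{\mathrm{par}} = V_0, \qquad Q_{\mathrm{par}} = n Q_0
% In the ideal case, the available energy per load cycle is the same:
E = \tfrac{1}{2} Q_{\mathrm{ser}} V_{\mathrm{ser}}
  = \tfrac{1}{2} Q_{\mathrm{par}} V_{\mathrm{par}}
  = \tfrac{1}{2} n Q_0 V_0
```

Under this idealization the two modes store the same energy per cycle, so the practical choice hinges on whether a high-voltage/low-charge or low-voltage/high-charge output is easier to condition and store.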
Show Figures

Figure 1
<p>PEH road. (<b>a</b>) Schematic of PEH road; (<b>b</b>) schematic of internal force of the PEH.</p>
Full article ">Figure 2
<p>The PEH designed for road energy harvesting.</p>
Full article ">Figure 3
<p>Connection mode of multilayer stacked piezoelectric ceramics. (<b>a</b>) Series connection mode; (<b>b</b>) parallel connection mode.</p>
Full article ">Figure 4
<p>The test system.</p>
Full article ">Figure 5
<p>Structure of the PEH. (<b>a</b>) Diagram of the PEH. (<b>b</b>) Arrangement of piezoelectric units in the PEH.</p>
Full article ">Figure 6
<p>Diagram of the laboratory pavement loading test.</p>
Full article ">Figure 7
<p>Piezoelectric energy collection circuit. (<b>a</b>) Schematic of the circuit. (<b>b</b>) The PCB of the circuit.</p>
Full article ">Figure 8
<p>The specimens of Test A.</p>
Full article ">Figure 9
<p>Comparison of electrical properties of different connection modes: (<b>a</b>) open peak–peak voltages of A-1 and A-2; (<b>b</b>) charge variations of A-1 and A-2; (<b>c</b>) generated electrical energy of A-1 and A-2.</p>
Full article ">Figure 9 Cont.
<p>Comparison of electrical properties of different connection modes: (<b>a</b>) open peak–peak voltages of A-1 and A-2; (<b>b</b>) charge variations of A-1 and A-2; (<b>c</b>) generated electrical energy of A-1 and A-2.</p>
Full article ">Figure 10
<p>The specimens of Test B.</p>
Full article ">Figure 11
<p>Comparison of electrical properties of stack piezoelectric units with different numbers of layers: (<b>a</b>) the open peak–peak voltage of B-1, B-2 and B-3; (<b>b</b>) charge variation of B-1, B-2 and B-3; (<b>c</b>) generated electric energy of B-1, B-2 and B-3.</p>
Full article ">Figure 12
<p>The specimens of test C.</p>
Full article ">Figure 13
<p>Comparison of electrical properties of stack piezoelectric units with the same volume and different height-to-cross-section ratios: (<b>a</b>) the open peak–peak voltage of C-1, C-2 and C-3; (<b>b</b>) charge variation of C-1, C-2 and C-3; (<b>c</b>) generated electrical energy of C-1, C-2 and C-3.</p>
Full article ">Figure 14
<p>The number of piezoelectric units from 8 to 15.</p>
Full article ">Figure 15
<p>The electric energy signal of the PEH with different numbers of units.</p>
Full article ">Figure 16
<p>The laboratory pavement loading test of the PEH road. (<b>a</b>) The accelerated pavement testing device. (<b>b</b>) The laboratory test road. (<b>c</b>) Monitor system of the voltage. (<b>d</b>) Voltage waveform of the energy storage capacitor.</p>
Full article ">Figure 17
<p>The voltage of the supercapacitor.</p>
Full article ">
30 pages, 5533 KiB  
Review
Chest-Worn Inertial Sensors: A Survey of Applications and Methods
by Mohammad Hasan Rahmani, Rafael Berkvens and Maarten Weyn
Sensors 2021, 21(8), 2875; https://doi.org/10.3390/s21082875 - 19 Apr 2021
Cited by 37 | Viewed by 6331
Abstract
Inertial Measurement Units (IMUs) are frequently implemented in wearable devices. Thanks to advances in signal processing and machine learning, applications of IMUs are not limited to those explicitly addressing body movements such as Activity Recognition (AR). On the other hand, wearing IMUs on [...] Read more.
Inertial Measurement Units (IMUs) are frequently implemented in wearable devices. Thanks to advances in signal processing and machine learning, applications of IMUs are not limited to those explicitly addressing body movements such as Activity Recognition (AR). On the other hand, wearing IMUs on the chest offers a few advantages over other body positions. AR and posture analysis, cardiopulmonary parameter estimation, voice and swallowing activity detection and other measurements can be approached through chest-worn inertial sensors. This survey introduces the applications enabled by chest-worn IMUs and summarizes the existing methods, current challenges and future directions associated with them. In this regard, this paper references a total of 57 relevant studies from the last 10 years and categorizes them into seven application areas. We discuss the inertial sensors used as well as their placement on the body and their associated validation methods based on the application categories. Our investigations show meaningful correlations among the studies within the same application categories. Then, we investigate the data processing architectures of the studies from the hardware point of view, indicating a lack of effort on handling the main processing through on-body units. Finally, we propose that combining the discussed applications in a single platform, finding robust ways of artifact cancellation, and planning optimized sensing/processing architectures be taken more seriously in future research. Full article
(This article belongs to the Special Issue Applications and Innovations on Sensor-Enabled Wearable Devices)
Show Figures

Figure 1
<p>Area of interest of this survey.</p>
Full article ">Figure 2
<p>Use of ACM on the sternum to capture cardiopulmonary activity and sounds as well as body motion and position [<a href="#B36-sensors-21-02875" class="html-bibr">36</a>].</p>
Full article ">Figure 3
<p>Distribution of the IMUs on chest per application area based on the referenced studies. The percentages are calculated to represent the ratio of the referenced studies in an application area that rely on a specific body site in proportion to the total referenced studies of that application area.</p>
Full article ">Figure 4
<p>Examples of IMU attachments on the body taken from the referenced studies. (<b>a</b>): IMU attached to skin for SCG [<a href="#B54-sensors-21-02875" class="html-bibr">54</a>]. (<b>b</b>): Use of stretching strap to attach the IMU over clothes for localization [<a href="#B68-sensors-21-02875" class="html-bibr">68</a>]. (<b>c</b>): Elastic strap used to attach smartphone over clothes for ER [<a href="#B51-sensors-21-02875" class="html-bibr">51</a>]. (<b>d</b>): Use of a soft nylon necklace over and underneath clothes for EE estimation [<a href="#B34-sensors-21-02875" class="html-bibr">34</a>]. (<b>e</b>): Attachment of IMU over the skin using adhesive tape for voice analysis [<a href="#B42-sensors-21-02875" class="html-bibr">42</a>].</p>
Full article ">Figure 5
<p>Examples of IMU coordinates alignment on body taken from the referenced studies. (<b>a</b>,<b>b</b>): IMU acceleration coordinates with respect to body axes for SCG, respectively, from [<a href="#B19-sensors-21-02875" class="html-bibr">19</a>,<a href="#B29-sensors-21-02875" class="html-bibr">29</a>]. (<b>c</b>): Calibration of the IMU pose with initial heading of the subject within the world map frame for PDR [<a href="#B68-sensors-21-02875" class="html-bibr">68</a>].</p>
Full article ">
19 pages, 658 KiB  
Article
Energy Allocation for LoRaWAN Nodes with Multi-Source Energy Harvesting
by Philip-Dylan Gleonec, Jeremy Ardouin, Matthieu Gautier and Olivier Berder
Sensors 2021, 21(8), 2874; https://doi.org/10.3390/s21082874 - 19 Apr 2021
Cited by 7 | Viewed by 3085
Abstract
Many connected devices are expected to be deployed during the next few years. Energy harvesting appears to be a good solution to power these devices but is not a reliable power source due to the time-varying nature of most energy sources. It is [...] Read more.
Many connected devices are expected to be deployed during the next few years. Energy harvesting appears to be a good solution to power these devices but is not a reliable power source due to the time-varying nature of most energy sources. It is possible to harvest energy from multiple energy sources to tackle this problem, thus increasing the amount and the consistency of harvested energy. Additionally, a power management system can be implemented to compute how much energy can be consumed and to allocate this energy to multiple tasks, thus adapting the device quality of service to its energy capabilities. The goal is to maximize the amount of measured and transmitted data while avoiding power failures as much as possible. For this purpose, an industrial sensor node platform was extended with a multi-source energy-harvesting circuit and programmed with a novel energy-allocation system for multi-task devices. In this paper, a multi-source energy-harvesting LoRaWAN node is proposed, together with an optimal energy allocation when the node runs different sensing tasks. The presented hardware platform was built with off-the-shelf components, and the proposed power management system was implemented on this platform. An experimental validation on a real LoRaWAN network shows that a gain of 51% in transmitted messages and 62% in executed sensing tasks can be achieved with the multi-source energy-harvesting and power-management system, compared to a single-source system. Full article
(This article belongs to the Special Issue Energy Harvesting Communication and Computing Systems)
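The allocation step described in the abstract can be illustrated with a minimal sketch: given an energy budget estimated from the harvesters and a set of periodic tasks, grant each task as many executions as the budget allows. The task names, costs, priorities, and the greedy priority order below are hypothetical placeholders, not the paper's actual algorithm.

```python
def allocate_energy(budget_mj, tasks):
    """Grant task executions from an energy budget (in millijoules).

    tasks: list of (name, cost_mj, priority, max_runs) tuples;
    higher-priority tasks are served first (all values are illustrative).
    Returns a dict mapping task name -> number of granted executions.
    """
    granted = {}
    for name, cost_mj, _prio, max_runs in sorted(tasks, key=lambda t: -t[2]):
        runs = min(max_runs, int(budget_mj // cost_mj))
        granted[name] = runs
        budget_mj -= runs * cost_mj  # leftover budget goes to lower-priority tasks
    return granted

# Example: a LoRaWAN uplink costs more than a local sensor reading
tasks = [("lora_tx", 30.0, 2, 2), ("sense", 10.0, 1, 10)]
print(allocate_energy(100.0, tasks))  # {'lora_tx': 2, 'sense': 4}
```

Capping executions per task (`max_runs`) is what lets leftover energy trickle down to lower-priority tasks instead of being monopolized by the most important one.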
Show Figures

Figure 1
<p>Full block diagram of the proposed energy harvesting IoT node.</p>
Full article ">Figure 2
<p>Architecture of the proposed multi-source energy-harvesting board.</p>
Full article ">Figure 3
<p>Single-task IoT node platform.</p>
Full article ">Figure 4
<p>Experimental measurement of single- and multi-source energy-harvesting systems.</p>
Full article ">Figure 5
<p>Number of task executions for single- and multi-source energy-harvesting systems.</p>
Full article ">
18 pages, 3225 KiB  
Article
HRV Features as Viable Physiological Markers for Stress Detection Using Wearable Devices
by Kayisan M. Dalmeida and Giovanni L. Masala
Sensors 2021, 21(8), 2873; https://doi.org/10.3390/s21082873 - 19 Apr 2021
Cited by 79 | Viewed by 8582
Abstract
Stress has been identified as one of the major causes of automobile crashes which then lead to high rates of fatalities and injuries each year. Stress can be measured via physiological measurements and in this study the focus will be based on the [...] Read more.
Stress has been identified as one of the major causes of automobile crashes, which lead to high rates of fatalities and injuries each year. Stress can be measured via physiological measurements, and in this study the focus is on the features that can be extracted by common wearable devices. Hence, the study mainly focuses on heart rate variability (HRV). This study is aimed at investigating the role of HRV-derived features as stress markers. This is achieved by developing a good predictive model that can accurately classify stress levels from ECG-derived HRV features, obtained from automobile drivers, by testing different machine learning methodologies such as K-Nearest Neighbor (KNN), Support Vector Machines (SVM), Multilayer Perceptron (MLP), Random Forest (RF) and Gradient Boosting (GB). Moreover, the models with the highest predictive power are used as a reference for the development of a machine learning model that classifies stress from HRV features derived from heart rate measurements obtained from wearable devices. We demonstrate that HRV features constitute good markers for stress detection, as the best machine learning model developed achieved a Recall of 80%. Furthermore, this study indicates that HRV metrics such as the Average of normal-to-normal (NN) intervals (AVNN), the Standard deviation of NN intervals (SDNN) and the Root mean square of successive NN interval differences (RMSSD) were important features for stress detection. The proposed method can also be used in all applications in which it is important to monitor stress levels in a non-invasive manner, e.g., in physical rehabilitation, anxiety relief or mental wellbeing. Full article
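The three metrics named in the abstract (AVNN, SDNN, RMSSD) have standard time-domain definitions that are easy to compute from a list of NN intervals. The sketch below is a generic illustration, not the study's code; it uses the population standard deviation for SDNN.

```python
import math

def hrv_features(nn_ms):
    """AVNN, SDNN and RMSSD from a list of NN intervals (milliseconds)."""
    n = len(nn_ms)
    avnn = sum(nn_ms) / n                                      # mean NN interval
    sdnn = math.sqrt(sum((x - avnn) ** 2 for x in nn_ms) / n)  # population SD
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]          # successive diffs
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return avnn, sdnn, rmssd

avnn, sdnn, rmssd = hrv_features([800, 810, 790, 805])
print(round(avnn, 2), round(sdnn, 3), round(rmssd, 2))  # 801.25 7.395 15.55
```

Since these features need only beat-to-beat interval timestamps, they can be derived from the heart rate stream of a consumer wearable without raw ECG access, which is the scenario the study targets.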
Show Figures

Figure 1
<p>Illustration of the experimental procedure followed for stress detection on data obtained from Apple Watch users.</p>
Full article ">Figure 2
<p>Flow chart illustrating the Feature Selection Process implemented in this study.</p>
Full article ">Figure 3
<p>Heat map plot of Pearson’s Correlation Feature Selection performed on original-dataset.</p>
Full article ">Figure 4
<p>Feature Importance of features from original-dataset using Extra Trees Classifier.</p>
Full article ">Figure 5
<p>ROC curve plot of each classification model trained on original-dataset. The AUROC scores were achieved by the models during stress prediction of the test dataset from the original-dataset.</p>
Full article ">Figure 6
<p>Model performance comparison of machine learning algorithms trained on original-dataset.</p>
Full article ">Figure 7
<p>ROC curve plot of each classification model tested on modified-dataset. The AUROC scores were achieved by the models during stress prediction of the test dataset from the modified-dataset.</p>
Full article ">Figure 8
<p>User Interface of the Stress Detection Web Application developed using <span class="html-italic">Streamlit</span>.</p>
Full article ">Figure 9
<p>Mean prediction probability obtained from the stress detection app with input data from volunteers who were subjected to different stress conditions (after-work stress and relaxation).</p>
Full article ">
17 pages, 33091 KiB  
Article
Impact of Scene Content on High Resolution Video Quality
by Miroslav Uhrina, Anna Holesova, Juraj Bienik and Lukas Sevcik
Sensors 2021, 21(8), 2872; https://doi.org/10.3390/s21082872 - 19 Apr 2021
Cited by 3 | Viewed by 3600
Abstract
This paper deals with the impact of content on the perceived video quality evaluated using the subjective Absolute Category Rating (ACR) method. The assessment was conducted on eight types of video sequences with diverse content obtained from the SJTU dataset. The sequences were [...] Read more.
This paper deals with the impact of content on the perceived video quality evaluated using the subjective Absolute Category Rating (ACR) method. The assessment was conducted on eight types of video sequences with diverse content obtained from the SJTU dataset. The sequences were encoded at 5 different constant bitrates in two widely used video compression standards, H.264/AVC and H.265/HEVC, at Full HD and Ultra HD resolutions, which means 160 annotated video sequences were created. The length of the Group of Pictures (GOP) was set to half the framerate value, as is typical for video intended for transmission over a noisy communication channel. The evaluation was performed in two laboratories: one situated at the University of Zilina, and the second at the VSB—Technical University in Ostrava. The results acquired in both laboratories showed a high correlation. Notwithstanding the fact that the sequences with low Spatial Information (SI) and Temporal Information (TI) values reached a better Mean Opinion Score (MOS) than the sequences with higher SI and TI values, these two parameters are not sufficient for scene description, and this domain should be the subject of further research. The evaluation results led us to the conclusion that it is unnecessary to use the H.265/HEVC codec for compression of Full HD sequences and that the compression efficiency of the H.265 codec at Ultra HD resolution reaches the compression efficiency of both codecs at Full HD resolution. This paper also includes recommendations for minimum bitrate thresholds at which the video sequences at both resolutions retain good and fair subjectively perceived quality. Full article
(This article belongs to the Special Issue Smart Sensor Technologies for IoT)
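The SI and TI scene descriptors used in the study have standard definitions in ITU-T Rec. P.910: SI is the maximum over time of the spatial standard deviation of the Sobel-filtered luminance frame, and TI is the maximum over time of the standard deviation of the difference between consecutive frames. A minimal NumPy sketch (the frame format and helper names are illustrative):

```python
import numpy as np

def sobel_magnitude(frame):
    """3x3 Sobel gradient magnitude of a 2D luminance frame (borders cropped)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = frame.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):          # accumulate the 3x3 correlation explicitly
        for j in range(3):
            patch = frame[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def si_ti(frames):
    """SI and TI of a clip given as a list of 2D grayscale frames."""
    si = max(float(np.std(sobel_magnitude(f))) for f in frames)
    ti = max(float(np.std(frames[t] - frames[t - 1]))
             for t in range(1, len(frames)))
    return si, ti
```

Sequences that score low on both descriptors (static, low-detail scenes) are the ones that reached the better MOS values in the study, which is why SI and TI are the usual first attempt at quantifying scene content.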
Show Figures

Figure 1
<p>Screenshots of the test sequences used. Reprinted with permission from [<a href="#B60-sensors-21-02872" class="html-bibr">60</a>], Copyright 2021, Uhrina.</p>
Full article ">Figure 2
<p>Spatial Information (SI) and Temporal Information (TI) diagram of the test sequences used. Reprinted with permission from [<a href="#B60-sensors-21-02872" class="html-bibr">60</a>], Copyright 2021, Uhrina.</p>
Full article ">Figure 3
<p>Process of preparing the test sequences: chroma subsampling, bit depth, and resolution changing.</p>
Full article ">Figure 4
<p>Complete process of coding and assessing the video quality.</p>
Full article ">Figure 5
<p>Comparison of Mean Opinion Score (MOS) values obtained from different laboratories. Each spot represents MOS values for corresponding codec, resolution, and test sequence.</p>
Full article ">Figure 6
<p>Comparison of MOS values obtained from different laboratories. Each spot represents averaged MOS values from particular test sequences for corresponding codec and resolution.</p>
Full article ">Figure 7
<p>Bitrate impact on the perceived video quality (defined by the MOS score with associated Confidence Interval (CI)) depending on codec and resolution for both laboratories independently. Each curve represents MOS values for each type of used test sequence.</p>
Full article ">Figure 8
<p>Bitrate impact on the perceived video quality (defined by the MOS score with associated CI) depending on the codec and resolution for both laboratories jointly. Each curve represents averaged MOS values from both laboratories for each type of used test sequence.</p>
Full article ">Figure 9
<p>Bitrate impact on the perceived video quality (defined by the MOS score with associated CI) depending on used test sequence. Each curve represents averaged MOS values from both laboratories for corresponding codec and resolution.</p>
Full article ">Figure 10
<p>Bitrate impact on the perceived video quality (defined by the MOS score with associated CI). Each curve represents averaged MOS values from both laboratories for corresponding codec and resolution—average MOS score.</p>
Full article ">
32 pages, 5780 KiB  
Article
A Novel Runtime Algorithm for the Real-Time Analysis and Detection of Unexpected Changes in a Real-Size SHM Network with Quasi-Distributed FBG Sensors
by Felipe Isamu H. Sakiyama, Frank Lehmann and Harald Garrecht
Sensors 2021, 21(8), 2871; https://doi.org/10.3390/s21082871 - 19 Apr 2021
Cited by 8 | Viewed by 3287
Abstract
The ability to track the structural condition of existing structures is one of the main concerns of bridge owners and operators. In the context of bridge maintenance programs, visual inspection predominates nowadays as the primary source of information. Yet, visual inspections alone are [...] Read more.
The ability to track the structural condition of existing structures is one of the main concerns of bridge owners and operators. In the context of bridge maintenance programs, visual inspection predominates nowadays as the primary source of information. Yet, visual inspections alone are insufficient to satisfy the current needs for safety assessment. From this perspective, extensive research on structural health monitoring has been developed in recent decades. However, the transfer rate from laboratory experiments to real-case applications is still unsatisfactory. This paper addresses the main limitations that slow the deployment and the acceptance of real-size structural health monitoring (SHM) systems and presents a novel real-time analysis algorithm based on random variable correlation for condition monitoring. The proposed algorithm was designed to automatically detect unexpected events, such as local structural failure, within a multitude of random dynamic loads. The results are part of a project on SHM, where a high sensor-count monitoring system based on long-gauge fiber Bragg grating (LGFBG) sensors was installed on a prestressed concrete bridge in Neckarsulm, Germany. The authors also present the data management system developed to handle the large amount of data and demonstrate the results from one of the implemented post-processing methods, principal component analysis (PCA). The results showed that the deployed SHM system successfully translates the massive raw data into meaningful information. The proposed real-time analysis algorithm delivers a reliable notification system that allows bridge managers to track unexpected events as a basis for decision-making. Full article
(This article belongs to the Special Issue Structural Health Monitoring for Smart Structures)
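The three-filter cascade of the real-time analysis algorithm can be condensed into a short sketch: an alarm is raised only if the correlation between neighbouring sensors drops below 0.9, the window contains a real load event (peak-to-peak strain above 60 µm/m), and the strain-mode shift exceeds both 25 µm/m and 15 times the mean temperature change. Only the thresholds come from the paper; the function signatures and window handling below are illustrative.

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length strain windows."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def three_step_check(rho, u, d_mode, d_temp_mean,
                     rho_thr=0.9, u_thr=60.0, mode_thr=25.0, temp_factor=15.0):
    """Alarm only when all three filters fire:
    1) low correlation with the neighbouring sensor (rho < 0.9),
    2) a genuine load event in the window (peak-to-peak strain u > 60 um/m),
    3) a strain-mode shift above 25 um/m and above 15x the temperature change.
    """
    return (rho < rho_thr and u > u_thr
            and d_mode > mode_thr
            and d_mode > temp_factor * abs(d_temp_mean))
```

Cascading the filters this way is what suppresses false alarms: a low correlation alone (step 1) is common under light traffic, so it must be corroborated by a genuine load event (step 2) whose strain-mode shift cannot be explained by temperature (step 3).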
Show Figures

Figure 1
<p>Longitudinal view of the bridge (dimensions in centimeters).</p>
Full article ">Figure 2
<p>Bridge’s cross-section (dimensions in centimeters).</p>
Full article ">Figure 3
<p>Schema representing the positioning of the sensors (bottom view of the superstructure). The two intermediate supports are indicated by dashed circles, and the superstructure’s depth change by two dashed lines. Sensors S01–S27 form the longitudinal LGFBG sensor line underneath the driving lane in the northern direction. Sensors S28–S54 form the sensor line underneath the driving lane in the southern direction. Sensors S55–S79 form the transversal sensor lines. AC01 and AC02 indicate the location of the acceleration sensors, and T01–T04 indicate the temperature sensors.</p>
Full article ">Figure 4
<p>Overview of the bridge and the monitoring system: (<b>a</b>) Overview of the bridge; (<b>b</b>) view of installed sensors; (<b>c</b>) sensor distribution; (<b>d</b>) an LGFBG sensor with 2.05 m gauge.</p>
Full article ">Figure 5
<p>The basic workflow of a Catman’s DAQ job. The left side shows the system pre-defined steps. The right side shows the user-defined tasks performed via scripting.</p>
Full article ">Figure 6
<p>Data storing workflow for the parallel recorders.</p>
Full article ">Figure 7
<p>SQL schema and tables.</p>
Full article ">Figure 8
<p>Statistical parameters: (<b>a</b>) Box plot of the correlation coefficient between sensors S01 and S02 measured from 16 July to 6 November 2020; (<b>b</b>) correlation between the strain mode and the temperature from sensor S01 measured from 14 July to 6 November 2020; (<b>c</b>) strain signal from S03 for a one-minute time window, showing the strain mode, and the peak-to-peak amplitude.</p>
Full article ">Figure 9
<p>Real-time analysis subprocess flowchart (corresponds to the data transfer cycle in <a href="#sensors-21-02871-f005" class="html-fig">Figure 5</a>).</p>
Full article ">Figure 10
<p>Correlation coefficient calculation subroutine flowchart (refer to <a href="#sensors-21-02871-f009" class="html-fig">Figure 9</a>).</p>
Full article ">Figure 11
<p>Three-step check subroutine for alarm triggering flowchart (refer to <a href="#sensors-21-02871-f009" class="html-fig">Figure 9</a>).</p>
Full article ">Figure 12
<p>Three-step real-time analysis demonstration—sensors S02, S03, and S04: (<b>a</b>) Correlation coefficients and points below the threshold after the first filter. The zoom window shows the amplified box plot for the correlation coefficients between sensors S03 and S04; (<b>b</b>) example of the correlation coefficient between sensors S02 and S03 during a one-minute time window with 12,000 samples; (<b>c</b>) correlation coefficients below the threshold after the second filter; (<b>d</b>) probability plot for the peak-to-peak amplitude <span class="html-italic">u</span> for sensor S04; (<b>e</b>) correlation coefficients below the threshold after the third filter; (<b>f</b>) correlation between the statistical strain mode <span class="html-italic">Mo</span> and the temperature for sensor S03. Analyzed data period: from 14 July to 6 November 2020.</p>
Full article ">Figure 13
<p>Strain signals for sensors S02 and S03 during an alarm triggering event. The zoom window at about 10:32:40 shows the strains caused by the regular passing of vehicles.</p>
Full article ">Figure 14
<p>Box plot for the correlation coefficients from sensors S01–S27—1st filter: <span class="html-italic">ρ</span> &lt; 0.9.</p>
Full article ">Figure 15
<p>A closer look at the correlation coefficient CF_S14_S15 extracted from <a href="#sensors-21-02871-f014" class="html-fig">Figure 14</a>—density distribution and box plot.</p>
Full article ">Figure 16
<p>Box plot for the correlation coefficients from sensors S01–S27—2nd filter: <span class="html-italic">u</span> &gt; 60 µm/m, and 3rd filter: Δ<span class="html-italic">Mo</span> &gt; 25 µm/m and Δ<span class="html-italic">Mo</span> &gt; 15 × <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mover accent="true"> <mi>T</mi> <mo>¯</mo> </mover> </mrow> </semantics></math>.</p>
Full article ">Figure 17
<p>Strain lines for sensors S01 to S27 during the crossing of a heavy vehicle. The data has a length of 1000 observations (5 s). The approximate positions of the abutments and the intermediate columns are represented with triangles, and the bridge’s main deck is depicted as a thick horizontal line. The strain lines for sensors S03 and S14 are highlighted. The other strain lines are shown in grey.</p>
Full article ">Figure 18
<p>Scree plot of the principal components that explain 95% of the total variance.</p>
Full article ">Figure 19
<p>Line-plots of the first four principal components loadings. The coefficients estimated from the SHM measured data are plotted with circle markers and full-line. In contrast, the coefficients calculated from a calibrated FE model are plotted with x-markers and dashed lines. The NRMSE is given for each plot: (<b>a</b>) First principal component; (<b>b</b>) second principal component; (<b>c</b>) third principal component; (<b>d</b>) fourth principal component.</p>
Figure 20
<p>Bi-plot of the first and the second principal components. The vectors represent the contribution of each variable to the PCs, and the red dots are the observations’ scores with respect to the PC axes.</p>
Figure 21
<p>Correlation coefficients from sensors S01–S27 during the crossing of the heavy vehicle defined in <a href="#sensors-21-02871-f017" class="html-fig">Figure 17</a>. The values obtained from the SHM data were calculated by the novel real-time analysis algorithm, while the estimated values from the FE model were extracted from the elements of the covariance matrix <math display="inline"><semantics> <mrow> <mi>C</mi> <mrow> <mo>(</mo> <mrow> <msub> <mi>τ</mi> <mrow> <mn>1000</mn> </mrow> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math> of the data matrix <math display="inline"><semantics> <mrow> <mi>S</mi> <mrow> <mo>(</mo> <mrow> <msub> <mi>τ</mi> <mrow> <mn>1000</mn> </mrow> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
33 pages, 5831 KiB  
Article
Theoretical and Experimental Investigation of the Effect of Pump Laser Frequency Fluctuations on Signal-to-Noise Ratio of Brillouin Dynamic Grating Measurement with Coherent FMCW Reflectometry
by Tatsuya Kikuchi, Ryohei Satoh, Iori Kurita and Kazumasa Takada
Sensors 2021, 21(8), 2870; https://doi.org/10.3390/s21082870 - 19 Apr 2021
Viewed by 2091
Abstract
Signal-dependent speckle-like noise has been a serious limiting factor in Brillouin-grating-based frequency-modulated continuous-wave (FMCW) reflectometry, and clarifying the noise generation mechanism is indispensable for improving the signal-to-noise ratio (S/N) of Brillouin dynamic grating measurements. In this paper, we show theoretically and experimentally that the noise is generated by frequency fluctuations of the pump light from a laser diode (LD). We increased the S/N from 36 to 190 merely by driving the LD with a current source with reduced technical noise. On the basis of this experimental result, we derived a theoretical formula for the S/N as a function of distance, containing the second- and fourth-order moments of the frequency fluctuations, by assuming that the pump light frequency was modulated by the technical noise. We calculated the S/N along a 1.35 m long optical fiber numerically using the measured power spectral density of the frequency fluctuations, and the resulting distributions agreed with the measured values in the 10 to 190 range. Since a higher-performance pump light source is required to maintain the S/N as the fiber length increases, the formula can be used to derive the light-source specifications, including the spectral width and the rms value of the frequency fluctuations, needed to achieve a high S/N when testing a fiber of a given length. Full article
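As a rough illustration of one step in this procedure (not code from the paper), the sketch below numerically integrates an assumed power spectral density of pump-frequency fluctuations to obtain the rms frequency fluctuation that enters such light-source specifications. The Lorentzian spectral shape, corner frequency, and all numbers are hypothetical stand-ins for a measured spectrum.

```python
import numpy as np

# Assumed one-sided PSD of pump-frequency fluctuations S_nu(f) in Hz^2/Hz.
# The Lorentzian shape and its 100 kHz corner are illustrative, not measured.
f = np.logspace(2, 7, 2000)            # analysis band: 100 Hz .. 10 MHz
S_nu = 1e4 / (1.0 + (f / 1e5) ** 2)

# rms frequency fluctuation: sqrt of the PSD integrated over the band,
# evaluated with the trapezoidal rule on the log-spaced grid.
delta_nu_rms = np.sqrt(np.sum(np.diff(f) * 0.5 * (S_nu[1:] + S_nu[:-1])))

print(f"rms frequency fluctuation ≈ {delta_nu_rms / 1e3:.1f} kHz")
```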
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1
<p>Schematic of Brillouin grating-based coherent FMCW reflectometry setup [<a href="#B15-sensors-21-02870" class="html-bibr">15</a>], which consisted of the conventional coherent FMCW reflectometry setup for detecting the reflection from a device under test (DUT) and an optical fiber loop for generating a Brillouin dynamic grating in the DUT. DFB LD: Distributed feedback laser diode, CL1 and CL2: Optical circulators, CP1, CP2 and CP3: 3-dB optical fiber couplers, EDFA: Erbium-doped fiber amplifier, PC1 and PC2: Polarization controllers, PBS1 and PBS2: Polarization beam splitters, SSBM: Single-sideband modulator (T.SBXH1.5-20PD-ADC, Sumitomo Osaka Cement), FG: 2-channel function generator, PM1, PM2 and PM3: LiNbO<sub>3</sub> phase modulators (LN65S-FC, Thorlabs), LO: Local oscillator, TIA: Transimpedance amplifier, <span class="html-italic">ω</span><sub>p</sub>: Pump light frequency, Ω: Up-conversion frequency of the pump light which is equal to the down-conversion frequency of the LO light, <span class="html-italic">f</span><sub>0</sub> = 150 kHz, <span class="html-italic">f</span><sub>1</sub> = 190 kHz. Picosecond pulses were launched into the fiber loop to locate the position at <span class="html-italic">z</span><sub>c</sub> where the optical path lengths of the counter-propagating pump light waves were equal.</p>
Figure 2
<p>Overwritten reflectograms from a 1.35 m long optical fiber that were obtained by 30 frequency sweeps of a tunable laser when the pump LD was driven with (<b>a</b>) current source A and (<b>b</b>) current source B. The up-conversion frequency was 10.861 GHz.</p>
Figure 3
<p>Schematic of the propagation (highlighted in green) of counter-propagating pump light waves from the LD in the fiber loop. <span class="html-italic">t</span><sub>5</sub> and <span class="html-italic">t</span><sub>6</sub> are the propagation times of the pump light waves from the LD to optical circulators CL1 and CL2, respectively. <span class="html-italic">z</span> is a coordinate of the distance along the fiber under test highlighted in blue and the origin of the distance is positioned at the point where the produced reflection has the same optical path length as the LO light at the balanced mixer. The input and output ends of the fiber were assumed to be located at <span class="html-italic">z</span><sub>i</sub> and <span class="html-italic">z</span><sub>e</sub>, respectively. <span class="html-italic">τ</span> is the round-trip time from the origin to any position at <span class="html-italic">z</span> as defined by <span class="html-italic">τ</span> = 2<span class="html-italic">nz</span>/<span class="html-italic">c</span>, where <span class="html-italic">n</span> is the refractive index of the fiber and <span class="html-italic">c</span> is the velocity of light in a vacuum. Similarly, <span class="html-italic">τ</span><sub>i</sub> and <span class="html-italic">τ</span><sub>e</sub> are defined as <span class="html-italic">τ</span><sub>i</sub> = 2<span class="html-italic">nz</span><sub>i</sub>/<span class="html-italic">c</span> and <span class="html-italic">τ</span><sub>e</sub> = 2<span class="html-italic">nz</span><sub>e</sub>/<span class="html-italic">c</span>. <span class="html-italic">A</span><sub>p1</sub> and <span class="html-italic">A</span><sub>p2</sub> are the complex amplitudes of the electric fields of the pump light waves propagating in the clockwise and counterclockwise directions, respectively. The center of pumping is defined by the position where the optical path lengths of the two pump light waves are equal in the optical fiber loop. 
CL1 and CL2: Optical circulators, CP3: Fiber coupler, LD: Laser diode.</p>
Figure 4
<p>Schematic of an unbalanced Mach-Zehnder interferometer (MZI) used to measure frequency fluctuations of the light that was incident from the input port of the MZI. AOM1 and AOM2: Acousto-optic frequency shifters, FG: Function generator, PC3: Polarization controller, TIA: Transimpedance amplifier. <span class="html-italic">f</span><sub>2</sub> and <span class="html-italic">f</span><sub>3</sub> are the up-conversion frequencies of AOM1 and AOM2, respectively. PC3 was adjusted for the amplitude of the beat signal to reach its maximum value. The inset shows a typical phase change waveform, which was retrieved from the acquired beat signal waveform.</p>
Figure 5
<p>Power spectral density (shown in red) of the frequency fluctuations of the LD output when it was driven with (<b>a</b>) current source A and (<b>b</b>) current source B. The power spectral density obtained from a diode-pumped solid-state (DPSS) laser is also plotted in blue in each figure.</p>
Figure 6
<p>Distributions of S/N along a 1.35 m long optical fiber that were measured and calculated at (<b>a</b>) <span class="html-italic">u</span><sub>c</sub> = −0.042, (<b>b</b>) <span class="html-italic">u</span><sub>c</sub> = 0.45 and (<b>c</b>) <span class="html-italic">u</span><sub>c</sub> = 0.93, where the DFB LD was driven with current source A whose power spectral density is shown in <a href="#sensors-21-02870-f005" class="html-fig">Figure 5</a>a. The scale of the horizontal axis is normalized in such a way that the output end of the fiber under test was unity. The rapid changes of S/N observed around <span class="html-italic">u</span> = 0.31, 0.53 and 0.78 were caused by fusion splicing and mating of the pieces of the optical fiber.</p>
Figure 7
<p>Distributions of S/N along the 1.35 m long optical fiber, which were measured and calculated at (<b>a</b>) <span class="html-italic">u</span><sub>c</sub> = −0.042, (<b>b</b>) <span class="html-italic">u</span><sub>c</sub> = 0.45 and (<b>c</b>) <span class="html-italic">u</span><sub>c</sub> = 0.93, where the DFB LD was driven with current source B whose power spectral density is shown in <a href="#sensors-21-02870-f005" class="html-fig">Figure 5</a>b. The scale of the horizontal axis is normalized in such a way that the output end of the fiber under test was unity. The rapid changes of S/N observed around <span class="html-italic">u</span> = 0.31, 0.53 and 0.78 were caused by fusion splicing and mating of the pieces of the optical fiber.</p>
Figure 8
<p>Difference between the calculated and measured S/N values in percent as a function of <span class="html-italic">u</span>. (<b>a</b>,<b>b</b>) were obtained from the data shown in <a href="#sensors-21-02870-f006" class="html-fig">Figure 6</a>b and <a href="#sensors-21-02870-f007" class="html-fig">Figure 7</a>b, respectively, where the center of pumping was located at <span class="html-italic">u</span><sub>c</sub> = 0.45. The range where the deviation was within 20% was highlighted in yellow in each figure. The rapid changes observed around <span class="html-italic">u</span> = 0.31, 0.53 and 0.78 were caused by fusion splicing and mating of the pieces of the single-mode fiber.</p>
Figure 9
<p>Numerical calculation of the constituent terms <span class="html-italic">A</span> to <span class="html-italic">H</span> of the correction term <span class="html-italic">σ</span><sup>(4)</sup> together with <span class="html-italic">σ</span><sup>(2)</sup> when current source A was used, where the center of pumping was located at <span class="html-italic">u</span><sub>c</sub> = 0.45. We calculated <span class="html-italic">σ</span><sup>(4)</sup> to improve the discrepancy between the measured and calculated data observed in <a href="#sensors-21-02870-f006" class="html-fig">Figure 6</a>b.</p>
Figure 10
<p>Comparison of the measured and calculated S/N along a 5 m long polarization-maintaining (PM) fiber, where the up-conversion frequency was 10.881 GHz and the DFB LD was driven with current source B whose power spectral density is shown in <a href="#sensors-21-02870-f005" class="html-fig">Figure 5</a>b. Both ends of the PM fiber were spliced to 250 µm buffered single-mode fibers with APC connectors which were mated to those of the fiber pigtails of optical circulators CL1 and CL2. <span class="html-italic">z</span><sub>c</sub> (= 3.8 m) is the distance to the center of pumping. The fast and slow axes of the PM fiber were excited by the respective probe light and counter-propagating pump lights by using in-line polarization controllers (PCs). CL1 and CL2 are optical circulators. <span class="html-italic">ω</span><sub>p</sub> and <span class="html-italic">ω</span><sub>p</sub> + Ω are the angular frequencies of the counter-propagating pump light waves. APC: Angled physical contact.</p>
Figure 11
<p>(<b>a</b>) Ratio of S/N to (S/N)<sub>∞</sub> as functions of α and <span class="html-italic">u</span>. (<b>b</b>) Ratio of S/N to (S/N)<sub>∞</sub> as a function of <span class="html-italic">u</span> when α was changed from 4 to 14 in steps of 2. α = <span class="html-italic">βτ</span><sub>e</sub>/<span class="html-italic">δχ<sub>e</sub></span>, where <span class="html-italic">β</span> is the sweep rate of the angular frequency of the tunable laser output, <span class="html-italic">τ</span><sub>e</sub> is the round-trip time from the origin at <span class="html-italic">z</span> = 0 to the output end of the fiber under test at <span class="html-italic">z</span> = <span class="html-italic">z</span><sub>e</sub> and thus <span class="html-italic">u</span> = <span class="html-italic">z</span>/<span class="html-italic">z</span><sub>e</sub>. <span class="html-italic">δχ<sub>e</sub></span> is the spectral half width at 1/<span class="html-italic">e</span> maximum of the Gaussian spectrum <span class="html-italic">G</span>(<span class="html-italic">χ</span>).</p>
Figure 12
<p>Domain highlighted in yellow where (<span class="html-italic">f</span><sub>h</sub>, <span class="html-italic">δν</span><sub>rms</sub>) satisfies the three conditions described by the relational expressions (21), (22) and (23) to achieve (<span class="html-italic">S</span>/<span class="html-italic">N</span>)<sub>tgt</sub> = 200 at <span class="html-italic">τ</span><sub>e</sub> = 48.4 ns, <span class="html-italic">γ</span> = 62.5 GHz. The segment highlighted in blue is the domain extended by increasing the sweep rate to <span class="html-italic">γ</span> = 625 GHz/s. The red and blue solid lines show the boundary curves defined by <span class="html-italic">δν</span><sub>rms</sub> = 0.0293/<span class="html-italic">f</span><sub>h</sub> and <span class="html-italic">δν</span><sub>rms</sub> = 0.293/<span class="html-italic">f</span><sub>h</sub> in a log-log scale to satisfy the third expression (23) at 0.5 nm/s (= 62.5 GHz/s) and at 5 nm/s (= 625 GHz/s), respectively.</p>
Figure 13
<p>(<b>a</b>) Overwrite of 30 reflectograms from a 10 m long polarization-maintaining (PM) fiber. <span class="html-italic">z</span><sub>c</sub> (= 6.51 m) is the distance to the center of pumping. (<b>b</b>) Mean reflectogram (shown in black), which was derived from the 30 reflectograms shown in (<b>a</b>). &lt;<span class="html-italic">Z</span>&gt; is a plot of the Gaussian function defined by Equation (17) with Δ<sub>rms</sub> = 3.74 × 10<sup>6</sup> rad/s, <span class="html-italic">τ</span><sub>e</sub> = 115 ns and <span class="html-italic">u</span><sub>c</sub> = 0.545. &lt;<span class="html-italic">Z</span><sub>c</sub>&gt; is a plot of Equation (28), which is the product of Gaussian and exponential functions where <span class="html-italic">δν</span><sub>L</sub> = 2 MHz.</p>
Figure 14
<p>Comparison of measured and calculated S/N along (<b>a</b>) 10 m long and (<b>b</b>) 40 m long polarization-maintaining (PM) fibers. In each figure, the S/N distribution shown in black was obtained from distributions acquired by sweeping the tunable laser 30 times. The ratio &lt;<span class="html-italic">Z</span><sub>c</sub>&gt;/√<span class="html-italic">σ</span><sub>c</sub><sup>(2)</sup> as a function of <span class="html-italic">u</span> was calculated and plotted with a red line. The correction term <span class="html-italic">σ</span><sub>c</sub><sup>(4)</sup> is expressed by the sum of the eight terms <span class="html-italic">A</span> to <span class="html-italic">H</span>, which were represented by Equations (A67) to (A74) in <a href="#app5-sensors-21-02870" class="html-app">Appendix E</a>, respectively. The ratio &lt;<span class="html-italic">Z</span><sub>c</sub>&gt;/√(<span class="html-italic">σ</span><sub>c</sub><sup>(2)</sup> + <span class="html-italic">σ</span><sub>c</sub><sup>(4)</sup>) as a function of <span class="html-italic">u</span> was calculated and plotted with a blue line. The employed parameters were (<b>a</b>) Δ<sub>rms</sub> = 3.74 × 10<sup>6</sup> rad/s, <span class="html-italic">τ</span><sub>e</sub> = 115 ns, <span class="html-italic">u</span><sub>c</sub> = 0.545. (<b>b</b>) Δ<sub>rms</sub> = 5.65 × 10<sup>6</sup> rad/s, <span class="html-italic">τ</span><sub>e</sub> = 402 ns, <span class="html-italic">u</span><sub>c</sub> = 0.515.</p>
Figure A1
<p>Domains of integration highlighted in yellow and light blue where the conditions of <span class="html-italic">r</span>(<span class="html-italic">τ</span> − <span class="html-italic">χ</span>/<span class="html-italic">β</span>) = <span class="html-italic">r</span><sub>0</sub> and <span class="html-italic">r</span>(<span class="html-italic">τ</span> + <span class="html-italic">χ</span>/<span class="html-italic">β</span>) = <span class="html-italic">r</span><sub>0</sub> are satisfied, respectively. <span class="html-italic">ξ</span><sub>1</sub> = <span class="html-italic">β</span>(<span class="html-italic">τ</span> − <span class="html-italic">τ</span><sub>i</sub>) and <span class="html-italic">ξ</span><sub>2</sub> = <span class="html-italic">β</span>(<span class="html-italic">τ</span><sub>e</sub> − <span class="html-italic">τ</span>). Their overlapping domain is shaded in green. <span class="html-italic">β</span> is the sweep rate of the angular frequency of the employed tunable laser output, <span class="html-italic">τ</span><sub>i</sub> and <span class="html-italic">τ</span><sub>e</sub> are round trip times from the origin at <span class="html-italic">τ</span> = 0 to the input and output ends of the fiber under test, respectively.</p>
Figure A2
<p>Domains highlighted in yellow for double integrals in (<b>a</b>) <span class="html-italic">G</span> for <span class="html-italic">u</span><sub>i</sub> &lt; <span class="html-italic">u</span> &lt; (<span class="html-italic">u</span><sub>i</sub> + 1)/2, (<b>b</b>) <span class="html-italic">G</span> for (<span class="html-italic">u</span><sub>i</sub> + 1)/2 &lt; <span class="html-italic">u</span> &lt; 1, and (<b>c</b>) <span class="html-italic">H</span> for <span class="html-italic">u</span><sub>i</sub> &lt; <span class="html-italic">u</span> &lt; 1. <span class="html-italic">u</span> = <span class="html-italic">τ</span>/<span class="html-italic">τ</span><sub>e</sub>, <span class="html-italic">u</span><sub>c</sub> = <span class="html-italic">τ</span><sub>c</sub>/<span class="html-italic">τ</span><sub>e</sub>, <span class="html-italic">u</span><sub>i</sub> = <span class="html-italic">τ</span><sub>i</sub>/<span class="html-italic">τ</span><sub>e</sub>.</p>
21 pages, 5601 KiB  
Article
An Intelligent In-Shoe System for Gait Monitoring and Analysis with Optimized Sampling and Real-Time Visualization Capabilities
by Jiaen Wu, Kiran Kuruvithadam, Alessandro Schaer, Richie Stoneham, George Chatzipirpiridis, Chris Awai Easthope, Gill Barry, James Martin, Salvador Pané, Bradley J. Nelson, Olgaç Ergeneman and Hamdi Torun
Sensors 2021, 21(8), 2869; https://doi.org/10.3390/s21082869 - 19 Apr 2021
Cited by 21 | Viewed by 6873
Abstract
The deterioration of gait can be used as a biomarker for ageing and neurological diseases. Continuous gait monitoring and analysis are essential for early deficit detection and personalized rehabilitation. The use of mobile and wearable inertial sensor systems for gait monitoring and analysis has been well explored, with promising results in the literature. However, most of these studies focus on technologies for the assessment of gait characteristics; few have considered the data-acquisition bandwidth of the sensing system. An inadequate sampling frequency sacrifices signal fidelity, leading to inaccurate estimates, especially of spatial gait parameters. In this work, we developed an inertial-sensor-based in-shoe gait analysis system for real-time gait monitoring and investigated the optimal sampling frequency needed to capture all the information in walking patterns. An exploratory validation study was performed against an optical motion capture system on four healthy adult subjects, where each person underwent five walking sessions, giving a total of 20 sessions. The percentage mean absolute errors (MAE%) obtained for stride time, stride length, stride velocity, and cadence while walking were 1.19%, 1.68%, 2.08%, and 1.23%, respectively. In addition, an eigenanalysis-based graphical descriptor derived from raw gait cycle signals was proposed as a new gait metric that can be quantified by principal component analysis to differentiate gait patterns, with great potential as an analytical tool for gait disorder diagnostics. Full article
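The validation metric quoted above can be sketched directly: MAE% is the mean absolute error of the per-stride estimates relative to the motion-capture reference, expressed as a percentage. The stride-time values below are invented for illustration, not the study's data.

```python
import numpy as np

def mae_percent(estimates, reference):
    """Percentage mean absolute error of per-stride estimates against a
    reference system (here: optical motion capture)."""
    estimates = np.asarray(estimates, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(estimates - reference) / np.abs(reference))

# Illustrative stride times in seconds (invented values):
imu   = [1.02, 0.98, 1.05, 1.00]
mocap = [1.00, 1.00, 1.04, 1.01]
print(round(mae_percent(imu, mocap), 2))  # → 1.49
```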
(This article belongs to the Special Issue Applications of Body Worn Sensors and Wearables)
Show Figures

Figure 1
<p>Graphical illustration of the Nushu system. (<b>A</b>) The sensor units are inserted in the outsole of the shoe; the upper part is glued so as to seal the shoe. (<b>B</b>) Gait phases during a full gait cycle. (<b>C</b>) Signal examples from the sensors. (<b>D</b>) Gait parameters generated by Nushu system.</p>
Figure 2
<p>Aliased signals resulting from sampling 120, 125, and 130 Hz audio waves at 100 Hz.</p>
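The folding shown in this figure follows from the sampling theorem: a tone above the Nyquist rate fs/2 reappears at its distance from the nearest integer multiple of the sampling rate. A minimal check:

```python
def alias_frequency(f_signal: float, f_sample: float) -> float:
    """Apparent baseband frequency of a tone sampled at f_sample:
    fold the true frequency into [0, f_sample / 2]."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# 120, 125 and 130 Hz tones sampled at 100 Hz alias to 20, 25 and 30 Hz.
print([alias_frequency(f, 100.0) for f in (120.0, 125.0, 130.0)])  # → [20.0, 25.0, 30.0]
```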
Figure 3
<p>Example motion signals: normalized acceleration and angular velocity. The threshold for recognizing stationary events is calculated from the magnitude of the motion signals. The stationary regions are highlighted in blue.</p>
Figure 4
<p>Block diagram of the orientation estimation through sensor fusion [<a href="#B34-sensors-21-02869" class="html-bibr">34</a>].</p>
Figure 5
<p>(<b>A</b>) Raw acceleration data measured in the sensor’s coordinate frame. From the initial stationary region, it can be deduced that the sensor’s resting position is not perfectly aligned with the ground (gravity has a slight projection onto the x and y axes). Gravity affects the linear-acceleration readings and, since the sensors change orientation, it cannot be subtracted directly. (<b>B</b>) Transformed accelerations in the global coordinate frame. Since gravity projects only onto the global z axis, it can be directly subtracted.</p>
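The gravity-removal step described in this caption can be sketched as follows, assuming the sensor-fusion stage supplies the current orientation as a rotation matrix R from the sensor frame to the global frame. The tilt angle and matrix below are illustrative, not from the paper.

```python
import numpy as np

def to_global(acc_sensor, R):
    """Rotate a sensor-frame acceleration into the global frame using the
    orientation estimate R, then remove gravity, which lies purely along
    the global z axis."""
    g = np.array([0.0, 0.0, 9.81])
    return R @ acc_sensor - g

# Example: sensor tilted 30° about x. At rest the tilted IMU reads gravity
# projected onto its own axes; after the transform the result is ≈ zero.
th = np.deg2rad(30.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(th), -np.sin(th)],
              [0.0, np.sin(th),  np.cos(th)]])
acc_rest = R.T @ np.array([0.0, 0.0, 9.81])   # reading of the resting, tilted IMU
print(to_global(acc_rest, R))                  # ≈ zero vector
```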
Figure 6
<p>The signal from the gyroscope aligned with the medio-lateral axis of the foot is used for gait event detection. Each stride is characterized by a sequence of FF, TO, HS, and FF as indicated. The green and blue areas refer to the swing and stance phases. The blue and green vertical dashed lines define the starting time frame of the swing and stance phases, respectively.</p>
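A toy version of event detection on the medio-lateral gyroscope signal is sketched below: mid-swing appears as a large positive peak, and the minima just before and after it are taken as toe-off (TO) and heel-strike (HS). The threshold, search window, and sign convention are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def detect_gait_events(gyro_ml, fs, swing_thresh=2.0):
    """Detect TO/HS pairs around each above-threshold swing peak in the
    medio-lateral angular velocity (illustrative heuristic)."""
    win = int(0.4 * fs)            # search window around the swing peak
    events, i = [], 1
    while i < len(gyro_ml) - 1:
        if (gyro_ml[i] > swing_thresh
                and gyro_ml[i] >= gyro_ml[i - 1]
                and gyro_ml[i] > gyro_ml[i + 1]):
            lo = max(0, i - win)
            to = lo + int(np.argmin(gyro_ml[lo:i]))      # minimum before swing
            hs = i + int(np.argmin(gyro_ml[i:i + win]))  # minimum after swing
            events += [("TO", to), ("HS", hs)]
            i = hs
        i += 1
    return events

# Synthetic two-stride signal at fs = 100 Hz: flat stance, a dip (TO),
# a swing peak, and a second dip (HS).
fs = 100
stride = np.concatenate([np.zeros(40), np.linspace(0, -1, 10),
                         np.linspace(-1, 3, 15), np.linspace(3, -1, 15),
                         np.linspace(-1, 0, 10)])
events = detect_gait_events(np.tile(stride, 2), fs)
print([label for label, _ in events])  # → ['TO', 'HS', 'TO', 'HS']
```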
Figure 7
<p>The flowchart of the data processing procedure.</p>
Figure 8
<p>Subject lower limb skeleton model from 3DGA markers.</p>
Figure 9
<p>Spectrum of acceleration signals from one normalized stride sampled at 5000 Hz (blue: DJB), 100 Hz (red: Nushu), and 100 Hz (yellow: Axivity).</p>
Figure 10
<p>RMSE (%) between low-pass filtered signals and raw signals for a normalized stride as a function of LPF cut-off frequency.</p>
Figure 11
<p>Histogram of error distribution of eight parameters for validation: stride time, stride length, swing time, stance time, velocity, cadence, swing phase, and stance phase.</p>
Figure 12
<p>Hodographs of prominent walking signals <math display="inline"><semantics> <mrow> <mfenced close="}" open="{"> <mrow> <msub> <mi>a</mi> <mi>x</mi> </msub> <mfenced> <mi>t</mi> </mfenced> <mo>,</mo> <mo> </mo> <msub> <mi>ω</mi> <mi>y</mi> </msub> <mfenced> <mi>t</mi> </mfenced> </mrow> </mfenced> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mfenced close="}" open="{"> <mrow> <msub> <mi>a</mi> <mi>x</mi> </msub> <mfenced> <mi>t</mi> </mfenced> <mo>,</mo> <mo> </mo> <msub> <mi>a</mi> <mi>z</mi> </msub> <mfenced> <mi>t</mi> </mfenced> </mrow> </mfenced> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mfenced close="}" open="{"> <mrow> <msub> <mi>a</mi> <mi>z</mi> </msub> <mfenced> <mi>t</mi> </mfenced> <mo>,</mo> <mo> </mo> <msub> <mi>ω</mi> <mi>y</mi> </msub> <mfenced> <mi>t</mi> </mfenced> </mrow> </mfenced> <mo>,</mo> </mrow> </semantics></math> gait cycle trajectories for three healthy persons (left/right: <b>A</b>/<b>B</b>, <b>C</b>/<b>D</b>, <b>E</b>/<b>F</b>), and a stroke patient (less impaired/more impaired: <b>G</b>/<b>H</b>). Green star markers: HSs, blue circle markers: TOs, green solid lines: stance phases, blue solid lines: swing phases.</p>
Figure 13
<p>Planar 2-D projection of secondary signals (<b>A</b>) <math display="inline"><semantics> <mrow> <mfenced close="}" open="{"> <mrow> <msub> <mi>a</mi> <mi>x</mi> </msub> <mfenced> <mi>t</mi> </mfenced> <mo>,</mo> <mo> </mo> <msub> <mi>a</mi> <mi>y</mi> </msub> <mfenced> <mi>t</mi> </mfenced> </mrow> </mfenced> <mo> </mo> </mrow> </semantics></math> and (<b>B</b>) <math display="inline"><semantics> <mrow> <mfenced close="}" open="{"> <mrow> <msub> <mi>ω</mi> <mi>x</mi> </msub> <mfenced> <mi>t</mi> </mfenced> <mo>,</mo> <mo> </mo> <msub> <mi>ω</mi> <mi>z</mi> </msub> <mfenced> <mi>t</mi> </mfenced> </mrow> </mfenced> </mrow> </semantics></math> from a healthy subject. Green star markers: HS. Blue circle markers: TO. Green line: stance phase. Blue line: swing phase.</p>
Figure 14
<p>PCA results of <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>A</mi> <mo>˜</mo> </mover> <mo>,</mo> <mo> </mo> <mover accent="true"> <mi>W</mi> <mo>˜</mo> </mover> </mrow> </semantics></math> matrices for three healthy subjects and one stroke patient. The dotted lines represent the normalized median cycle of each person. The ellipses show 95% confidence region. The arrows are plotted by the eigenvectors, and its length equal corresponding eigenvalues. (<b>A</b>,<b>C</b>) For left foot and (<b>B</b>,<b>D</b>) for right foot.</p>
22 pages, 1634 KiB  
Article
Virtual Angle Boundary-Aware Particle Swarm Optimization to Maximize the Coverage of Directional Sensor Networks
by Gong Cheng and Huangfu Wei
Sensors 2021, 21(8), 2868; https://doi.org/10.3390/s21082868 - 19 Apr 2021
Cited by 5 | Viewed by 2339
Abstract
With the transition of mobile communication networks, the goal of the Internet of Everything further promotes the development of the Internet of Things (IoT) and Wireless Sensor Networks (WSNs). Since directional sensors have the performance advantage of long-term regional monitoring, coverage optimization of Directional Sensor Networks (DSNs) is becoming increasingly important. The coverage optimization of DSNs is usually solved for one variable, such as sensor azimuth, sensing radius, or time schedule. To reduce the computational complexity, we propose a coverage optimization scheme with a redundancy-eliminating boundary constraint for DSNs. Combined with the Particle Swarm Optimization (PSO) algorithm, a Virtual Angle Boundary-aware Particle Swarm Optimization (VAB-PSO) is designed to effectively reduce the computational burden of the optimization problem. The VAB-PSO algorithm generates the boundary constraint positions between sensors according to the relationships among the angles of different sensors, thus obtaining the boundary of the particle search and restricting the search space of the algorithm. Meanwhile, different particles search in complementary spaces to improve overall efficiency. Experimental results show that the proposed algorithm with a boundary constraint can effectively improve both the coverage and the convergence speed. Full article
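A minimal sketch of a PSO update with per-dimension boundary clamping, standing in for the virtual angle boundary constraint, is given below. The coverage objective, bounds, and parameters are invented stand-ins for illustration; this is not the VAB-PSO algorithm itself, whose bounds come from the angular relationships between sensors.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.5, 1.0, 1.5])   # assumed optimal azimuths in radians

def fitness(azimuths):
    # Toy stand-in for a coverage objective: reward azimuths near TARGET.
    # (The paper's real objective is the covered area of the DSN.)
    return float(np.sum(np.cos(azimuths - TARGET)))

def pso_step(x, v, pbest, gbest, lo, hi, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update with per-dimension clamping."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)       # keep each azimuth inside its interval
    return x, v

lo, hi = 0.0, np.pi
x = rng.uniform(lo, hi, size=(20, 3))    # 20 particles, 3 sensor azimuths
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[np.argmax(pbest_f)].copy()

for _ in range(100):
    x, v = pso_step(x, v, pbest, gbest, lo, hi)
    f = np.array([fitness(p) for p in x])
    better = f > pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmax(pbest_f)].copy()

print(np.round(gbest, 2))   # converges toward [0.5, 1.0, 1.5]
```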
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1
<p>A coverage system model of Directional Sensor Networks (DSNs).</p>
Figure 2
<p>The coverage area of the directional sensor.</p>
Figure 3
<p>The location relationship between sensor azimuths and boundary constraint of the same node.</p>
Figure 4
<p>The initial coverage of an ideal map.</p>
Figure 5
<p>The mix coverage of the four algorithms under different swarm sizes in the ideal scene.</p>
Figure 6
<p>The mix coverage of the four algorithms under different values of <math display="inline"><semantics> <msub> <mi>C</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>C</mi> <mn>2</mn> </msub> </semantics></math> in the ideal scene.</p>
Figure 7
<p>Performance comparison of four algorithms under the optimal parameters of the ideal scene.</p>
Figure 8
<p>The coverage of the ideal scene after algorithm optimization.</p>
Figure 9
<p>Initial coverage of a real DSN deployment scene.</p>
Figure 10
<p>The mix coverage of the four algorithms under different swarm sizes in the real scene.</p>
Figure 11
<p>The mix coverage of the four algorithms under different values of <math display="inline"><semantics> <msub> <mi>C</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>C</mi> <mn>2</mn> </msub> </semantics></math> in the real scene.</p>
Figure 12
<p>Performance comparison of four algorithms under the optimal parameters of the real scene.</p>
Figure 13
<p>The coverage of the real scene after algorithm optimization.</p>
Figure 14
<p>Performance comparison of four algorithms under the optimal parameters of the real scene.</p>
Figure 15
<p>Performance comparison of four algorithms under the optimal parameters of the real scene with 4 sensors.</p>
Figure 16
<p>Performance comparison of four algorithms under the optimal parameters of the large-scale random scene.</p>
14 pages, 3468 KiB  
Article
A New Cache Update Scheme Using Reinforcement Learning for Coded Video Streaming Systems
by Yu-Sin Kim, Jeong-Min Lee, Jong-Yeol Ryu and Tae-Won Ban
Sensors 2021, 21(8), 2867; https://doi.org/10.3390/s21082867 - 19 Apr 2021
Cited by 3 | Viewed by 2376
Abstract
As the demand for video streaming has been rapidly increasing, new technologies for improving the efficiency of video streaming have attracted much attention. In this paper, we investigate how to improve streaming efficiency by using clients’ cache storage in exclusive-OR (XOR) coding-based video streaming, where multiple different video contents can be delivered simultaneously in a single transmission as long as prerequisite conditions are satisfied, thereby significantly enhancing streaming efficiency. We also propose a new cache update scheme using reinforcement learning. The proposed scheme uses a K-actor-critic (K-AC) network that mitigates a disadvantage of actor-critic networks by yielding K candidate outputs and selecting as the final output the candidate with the highest value. Each client hosts its own K-AC and can train it using only locally available information, without any feedback or signaling, so the proposed cache update scheme is completely decentralized. The performance of the proposed scheme was analyzed in terms of the average number of transmissions for XOR coding-based video streaming and compared to that of conventional cache update schemes. Our numerical results show that the proposed scheme can reduce the number of transmissions by up to 24% when the number of videos is 100, the number of clients is 50, and the cache size is 5. Full article
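The best-of-K selection step described above can be sketched as follows. The "critic" here is an arbitrary popularity-sum stand-in for the learned value network, and uniform candidate sampling stands in for the actor's policy; both are illustrative assumptions, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def critic_value(popularity, action):
    """Stand-in critic: score a candidate cache by the total estimated
    popularity of the cached videos (illustrative only)."""
    return float(np.sum(popularity[action]))

def k_ac_select(popularity, cache_size, K=10):
    """Best-of-K selection as described for the K-AC: draw K candidate cache
    configurations and keep the one the critic rates highest."""
    n_videos = len(popularity)
    candidates = [rng.choice(n_videos, size=cache_size, replace=False)
                  for _ in range(K)]
    values = [critic_value(popularity, a) for a in candidates]
    return candidates[int(np.argmax(values))]

# Illustrative local state: per-video popularity estimates for V = 20 videos.
popularity = rng.random(20)
cache = k_ac_select(popularity, cache_size=5, K=10)
print(sorted(int(i) for i in cache))
```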
(This article belongs to the Collection Machine Learning for Multimedia Communications)
Figures:
Figure 1. System architecture.
Figure 2. Overall procedures of XOR coding-based streaming.
Figure 3. The proposed architecture of the K-AC.
Figure 4. Illustration of the training process of the K-AC.
Figure 5. The rewards of the proposed scheme earned during a training process. p = 0.001, ρ = 0.5, V = 100, N = 50, K = 10, and β = 1.
Figure 6. Average number of required transmissions for various ρ’s. p = 0.001, V = 100, N = 50, C = 20, K = 10, and β = 1.
Figure 7. Average number of required transmissions for various C’s. p = 0.001, ρ = 0.5, V = 100, N = 50, K = 10, and β = 1.
Figure 8. Average number of required transmissions for various N’s. p = 0.001, ρ = 0.5, V = 100, C = 20, K = 10, and β = 1.
Figure 9. Average number of required transmissions for various V values. p = 0.001, ρ = 0.5, β = 1, N = 50, C = 20, and K = 10.
Figure 10. Average number of required transmissions for various β’s. p = 0.001, ρ = 0.5, V = 100, N = 50, C = 20, and K = 10.
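The XOR-coded delivery the abstract describes can be sketched in a few lines (a toy illustration under the assumption of two clients with complementary caches and equal-length payloads; this is not the authors' code):

```python
# Toy sketch of XOR coding-based delivery (illustrative, not the paper's code).
# Client 1 requests video A and has B cached; client 2 requests B and has A
# cached. A single XOR broadcast then serves both requests at once.

def xor_bytes(x: bytes, y: bytes) -> bytes:
    """XOR two equal-length payloads byte by byte."""
    return bytes(a ^ b for a, b in zip(x, y))

video_a = b"frames-of-video-A"
video_b = b"frames-of-video-B"           # same length as video_a for simplicity

coded = xor_bytes(video_a, video_b)      # one transmission instead of two

# Each client cancels its cached content to recover the video it requested.
recovered_a = xor_bytes(coded, video_b)  # client 1 XORs with cached B
recovered_b = xor_bytes(coded, video_a)  # client 2 XORs with cached A
```

This is the prerequisite condition the abstract mentions: the coded transmission is decodable only by clients whose caches already hold the other content, which is why the cache update policy matters for the number of transmissions.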
13 pages, 1951 KiB  
Communication
A Lightweight Attention-Based CNN Model for Efficient Gait Recognition with Wearable IMU Sensors
by Haohua Huang, Pan Zhou, Ye Li and Fangmin Sun
Sensors 2021, 21(8), 2866; https://doi.org/10.3390/s21082866 - 19 Apr 2021
Cited by 41 | Viewed by 5788
Abstract
Wearable sensor-based gait recognition is an effective method to recognize people’s identity from the unique way they walk. Recently, the adoption of deep learning networks for gait recognition has achieved significant performance improvements and become a promising new trend. However, most existing studies have focused mainly on improving gait recognition accuracy while ignoring model complexity, which makes them unsuitable for wearable devices. In this study, we propose a lightweight attention-based Convolutional Neural Network (CNN) model for wearable gait recognition. Specifically, a four-layer lightweight CNN is first employed to extract gait features. Then, a novel attention module based on contextual encoding information and depthwise separable convolution is designed and integrated into the lightweight CNN to enhance the extracted gait features and reduce the complexity of the model. Finally, a Softmax classifier is used for classification to realize gait recognition. We conducted comprehensive experiments to evaluate the performance of the proposed model on the whuGait and OU-ISIR datasets. The effects of the proposed attention mechanism, different data segmentation methods, and alternative attention mechanisms on gait recognition performance were studied and analyzed. Comparisons with similar existing research in terms of recognition accuracy and number of model parameters show that our proposed model not only achieves higher recognition performance but also reduces model complexity by 86.5% on average. Full article
(This article belongs to the Special Issue AI and IoT Enabled Solutions for Healthcare)
Figures:
Figure 1. One input sample. Ax, Ay, Az are the 3-axis acceleration data, and Gx, Gy, Gz are the 3-axis angular velocity data. The 6-axis sensor data are combined into a matrix with the shape of 6 × 128.
Figure 2. The architecture of the gait identification network.
Figure 3. Structure of the proposed attention mechanism.
Figure 4. Gait feature visualization of the 20 subjects in Dataset #2: (a) features extracted by the CNN and (b) features extracted by the proposed CNN+CEDS. The dots of different colors represent the extracted gait features of different subjects.
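Much of the reported complexity reduction comes from replacing standard convolutions with depthwise separable ones; a back-of-the-envelope parameter count shows the effect (the layer sizes below are illustrative, not the paper's exact architecture, and the 86.5% figure in the abstract is a whole-model result, not this single-layer calculation):

```python
# Parameter-count comparison behind the lightweight design (illustrative).
# A depthwise separable convolution splits a standard k x k convolution into
# a per-channel (depthwise) k x k filter plus a 1 x 1 pointwise mixing step.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    return k * k * c_in + c_in * c_out   # depthwise + pointwise

k, c_in, c_out = 3, 64, 128              # hypothetical layer sizes
standard = standard_conv_params(k, c_in, c_out)          # 73,728 parameters
separable = depthwise_separable_params(k, c_in, c_out)   # 8,768 parameters
reduction = 1 - separable / standard                     # roughly 88% fewer
```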
10 pages, 1758 KiB  
Communication
BSF-EHR: Blockchain Security Framework for Electronic Health Records of Patients
by Ibrahim Abunadi and Ramasamy Lakshmana Kumar
Sensors 2021, 21(8), 2865; https://doi.org/10.3390/s21082865 - 19 Apr 2021
Cited by 49 | Viewed by 6422
Abstract
In the current era of smart homes and cities, personal data such as patients’ names, diseases and addresses are often compromised. This is frequently associated with the safety of patients’ electronic health records (EHRs). EHRs bring numerous benefits worldwide, but at present, EHR information is subject to considerable security and privacy issues. Previous sophisticated techniques for protecting EHRs usually make data inaccessible to patients; they struggle to balance data confidentiality, patient demand and constant interaction with provider data. Blockchain technology addresses these problems, since it distributes information in a transactional and decentralized manner, and its use could help the health sector balance the accessibility and privacy of EHRs. This paper proposes a blockchain security framework (BSF) to store and maintain EHRs effectively and securely. It presents a safe and efficient means of acquiring medical information for doctors, patients and insurance agents while protecting the patient’s data. This work examines how the proposed framework meets the security needs of doctors, patients and third parties, and how the structure addresses safety and confidentiality concerns in the healthcare sector. Simulation outcomes show that this framework efficiently protects EHR data. Full article
Figures:
Figure 1. Novelty of the BSF-EHR system.
Figure 2. Patient Pat1 blockchain.
Figure 3. Doctor Doc1 blockchain.
Figure 4. Insurance agent IS1 blockchain.
Figure 5. EHRs as a result of unauthorized access vs. authorized access.
Figure 6. Time consumption comparison: centralized storage vs. BSF-EHR.
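The tamper-evidence property that blockchain storage lends to EHRs can be illustrated with a minimal hash chain (a toy sketch, not the BSF-EHR implementation; the record fields are hypothetical):

```python
# Minimal hash-chained record store illustrating the tamper-evidence property
# blockchain storage gives EHRs (a toy sketch, not the BSF-EHR implementation;
# the record fields are hypothetical).
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        if block["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"patient": "Pat1", "entry": "initial visit"}, prev_hash="0")
chain = [genesis,
         make_block({"patient": "Pat1", "entry": "lab result"}, genesis["hash"])]

valid_before = chain_is_valid(chain)
chain[0]["record"]["entry"] = "forged"   # any edit invalidates the stored hash
valid_after = chain_is_valid(chain)
```

Because each block's hash covers the previous block's hash, altering any stored record breaks every later link, which is what makes unauthorized modification detectable.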
24 pages, 5061 KiB  
Article
A Hybrid Visual Tracking Algorithm Based on SOM Network and Correlation Filter
by Yuanping Zhang, Xiumei Huang and Ming Yang
Sensors 2021, 21(8), 2864; https://doi.org/10.3390/s21082864 - 19 Apr 2021
Cited by 2 | Viewed by 2439
Abstract
To meet the challenges of video target tracking, a long-term visual tracking algorithm based on a self-organizing map (SOM) network and correlation filters is proposed. Objects in different videos or images often have completely different appearances; therefore, the SOM neural network, whose characteristics resemble the signal-processing mechanism of human brain neurons, is used to perform adaptive, unsupervised feature learning. At the same time, a reliable and robust target tracking method is proposed based on multiple adaptive correlation filters with a memory function for the target’s appearance. The filters in our method have different update strategies and cooperate to carry out long-term tracking. The first is the displacement filter, a kernelized correlation filter that combines contextual characteristics to precisely locate and track targets. Second, the scale filters are used to predict changes in the target’s scale. Finally, the memory filter maintains the target’s appearance in long-term memory and judges whether tracking has failed. If tracking fails, an incremental learning detector recovers the target in a sliding-window manner. Several experiments show that our method effectively handles tracking problems such as severe occlusion, target loss and scale change, and is superior to state-of-the-art methods in efficiency, accuracy and robustness. Full article
(This article belongs to the Special Issue Sensor Fusion for Object Detection, Classification and Tracking)
Figures:
Figure 1. Results of example tracking on the lemming sequence by ACFLST [23], MUSTer [27], KCF [5], STC [24], Struck [4] and TLD [28] (X: no tracking output).
Figure 2. SOM feature extraction and correlation filters. The translation filters A_T1, A_T2 and A_T3 with short-term memory adapt to the changing appearance of the target and its surrounding context. The long-term memory filter A_L is conservatively learned to maintain the long-term memory of the target appearance.
Figure 3. The proposed algorithm diagram.
Figure 4. Illustration of common state estimation methods in object tracking. Symbols O, × and ⊗ denote the current, sampled and optimal states, respectively.
Figure 5. Distance precision and overlap success plots on the test dataset.
Figure 6. Tracking results on 10 challenging sequences using our algorithm, ARCF, PriDiMP, D3S and SiamBAN.
Figure 7. Frame-by-frame comparison of central location errors (in pixels) on four challenging sequences.
Figure 8. Success and precision plots on LaSOT.
Figure 9. Success and precision plots on UAV123.
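The correlation-filter machinery that underpins such trackers can be sketched in one dimension (a simplified single-channel, non-kernelized formulation with synthetic signals; the paper's tracker adds SOM features, kernelization, scale handling and multiple cooperating filters):

```python
# One-dimensional correlation filter learned in the Fourier domain (a toy
# sketch of the filter-learning step, not the paper's kernelized tracker).
import numpy as np

def train_filter(template: np.ndarray, target: np.ndarray, lam: float = 1e-2):
    """Return H* solving, per frequency bin, H* = G F* / (F F* + lam)."""
    F = np.fft.fft(template)
    G = np.fft.fft(target)
    return G * np.conj(F) / (F * np.conj(F) + lam)

rng = np.random.default_rng(0)
template = rng.standard_normal(64)                         # stand-in image row
target = np.exp(-0.5 * ((np.arange(64) - 32) / 3.0) ** 2)  # desired Gaussian peak

H_conj = train_filter(template, target)

# Correlating the template with the learned filter reproduces the target
# response, so the peak of the response localizes the object.
response = np.real(np.fft.ifft(np.fft.fft(template) * H_conj))
peak = int(np.argmax(response))   # near the Gaussian centre by construction
```

The regularization term `lam` keeps the division stable at frequency bins where the template has little energy; tracking then amounts to applying the filter to each new frame and reading off the response peak.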
27 pages, 2091 KiB  
Article
EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA
by Pankaj Chejara, Luis P. Prieto, Adolfo Ruiz-Calleja, María Jesús Rodríguez-Triana, Shashi Kant Shankar and Reet Kasepalu
Sensors 2021, 21(8), 2863; https://doi.org/10.3390/s21082863 - 19 Apr 2021
Cited by 19 | Viewed by 4029
Abstract
Multimodal Learning Analytics (MMLA) researchers are progressively employing machine learning (ML) techniques to develop predictive models to improve learning and teaching practices. These predictive models are often evaluated for their generalizability using methods from the ML domain, which do not take into account MMLA’s educational nature. Furthermore, there is a lack of systematization in model evaluation in MMLA, which is also reflected in the heterogeneous reporting of the evaluation results. To overcome these issues, this paper proposes an evaluation framework to assess and report the generalizability of ML models in MMLA (EFAR-MMLA). To illustrate the usefulness of EFAR-MMLA, we present a case study with two datasets, each with audio and log data collected from a classroom during a collaborative learning session. In this case study, regression models are developed for collaboration quality and its sub-dimensions, and their generalizability is evaluated and reported. The framework helped us to systematically detect and report that the models achieved better performance when evaluated using hold-out or cross-validation but quickly degraded when evaluated across different student groups and learning contexts. The framework helps to open up a “wicked problem” in MMLA research that remains fuzzy (i.e., the generalizability of ML models), which is critical to both accumulating knowledge in the research community and demonstrating the practical relevance of these techniques. Full article
(This article belongs to the Special Issue From Sensor Data to Educational Insights)
Figures:
Figure 1. ML model evaluation methods.
Figure 2. EFAR-MMLA: Evaluation Framework for Assessing and Reporting Generalizability of ML models in MMLA.
Figure 3. Leave-one-group-out and leave-one-context-out evaluation methods.
Figure 4. Process to compute frames of reference in EFAR-MMLA.
Figure 5. Learning context.
Figure 6. Distribution of collaboration quality scores.
Figure 7. Evaluation of regression models estimating collaboration quality using basic and linguistic features in terms of RMSE (smaller is better). The red dashed line represents the theoretical average (i.e., no-information) predictor’s performance, and the blue dashed line represents the human performance level.
Figure 8. RMSE of regression models estimating the dimensions of collaboration using linguistic features.
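The leave-one-group-out evaluation the abstract highlights can be sketched in plain Python (group labels and sample fields below are illustrative, not the study's data):

```python
# Group-level splitting behind leave-one-group-out evaluation (illustrative;
# the group labels and sample fields are made up for the sketch).

def leave_one_group_out(samples):
    """Yield (held_out, train, test) splits, holding out one group at a time."""
    groups = sorted({s["group"] for s in samples})
    for held_out in groups:
        train = [s for s in samples if s["group"] != held_out]
        test = [s for s in samples if s["group"] == held_out]
        yield held_out, train, test

samples = [
    {"group": "G1", "features": [0.2, 0.7], "quality": 3.5},
    {"group": "G1", "features": [0.4, 0.6], "quality": 4.0},
    {"group": "G2", "features": [0.9, 0.1], "quality": 2.0},
    {"group": "G3", "features": [0.5, 0.5], "quality": 3.0},
]

# Three groups -> three splits; no group ever appears on both sides, which is
# what exposes the generalizability drop the paper reports.
splits = list(leave_one_group_out(samples))
```

Unlike plain hold-out or k-fold cross-validation, this split never lets samples from the same student group leak across the train/test boundary, which is exactly why model performance degrades under it.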
30 pages, 2884 KiB  
Article
A Dimensional Comparison between Evolutionary Algorithm and Deep Reinforcement Learning Methodologies for Autonomous Surface Vehicles with Water Quality Sensors
by Samuel Yanes Luis, Daniel Gutiérrez-Reina and Sergio Toral Marín
Sensors 2021, 21(8), 2862; https://doi.org/10.3390/s21082862 - 19 Apr 2021
Cited by 14 | Viewed by 3286
Abstract
The monitoring of water resources using Autonomous Surface Vehicles (ASVs) equipped with water-quality sensors has become a recent approach thanks to advances in unmanned transportation technology. Ypacaraí Lake, the biggest water resource in Paraguay, suffers from a major contamination problem caused by cyanobacteria blooms. In order to supervise the blooms using these on-board sensor modules, a Non-Homogeneous Patrolling Problem (an NP-hard problem) must be solved in a feasible amount of time. A dimensionality study is carried out to compare the most common methodologies, Evolutionary Algorithms and Deep Reinforcement Learning, across different map scales, fleet sizes and environmental conditions. The results show that Deep Q-Learning surpasses the evolutionary method in sample-efficiency by 50–70% at higher resolutions and copes better with large state–action spaces. In contrast, the evolutionary approach is more efficient at lower resolutions and needs fewer parameters to synthesize robust solutions. This study reveals that Deep Q-Learning approaches excel in efficiency on the Non-Homogeneous Patrolling Problem, but many hyper-parameters are involved in their stability and convergence. Full article
(This article belongs to the Collection Robotics, Sensors and Industry 4.0)
Figures:
Figure 1. Cyanobacteria effects on the lake shore. The contaminated water has a characteristic intense green color and gives off an unpleasant smell.
Figure 2. Autonomous Surface Vehicle prototype used in [3] for the monitoring of Ypacaraí Lake. The vehicle has two separate motors and a battery that allows a stable speed of 2 m/s during 4 h of travel.
Figure 3. Left: satellite image of Ypacaraí Lake, with the initial deployment zone for the ASV in red. Right: the importance map I(x, y) that weights the idleness index of every zone. The higher the value of I(x, y), the more important it is to cover that area, either because it is heavily contaminated or because it holds high biological interest.
Figure 4. Example of graph G(E, V, W) for the Patrolling Problem. Under a metric assumption, every cell holds its idleness W as the number of time steps since the last visit. In the beginning, the value of W in every cell is W_max.
Figure 5. Movements considered collisions in the multi-agent scenario. Every collision receives a negative penalization; if a collision involves more than one agent, each agent is penalized independently. (a) A same-place collision. (b) An in-transit collision. (c) An off-water illegal movement.
Figure 6. Individual representation. Every individual is a list of T actions, one for every timestep t of the trajectory.
Figure 7. Two-point crossover operation on two individuals.
Figure 8. 1-bit mutation of an individual.
Figure 9. The multi-agent individual representation for F_s = 3 agents. Actions are grouped three by three for every timestep t ∈ [0, T].
Figure 10. Left: the state of the environment, with the agent position in red, land cells in brown and visitable zones in gray scale. Right: the Convolutional Neural Network; the convolutional layers feed into dense layers using the ReLU activation function for every neuron.
Figure 11. Centralized-learning, decoupled-execution network for the multi-agent approach in [25]. The state (left) indicates the different agent positions with vivid colors, and an extra channel is added for better positioning in the collision-avoidance task.
Figure 12. Different discretized maps of Ypacaraí Lake with N = ×1, ×2, ×3 and ×4 resolution. In red, the initial point of the agent.
Figure 13. Maximum reward obtained for the greedy parametrization in the DDQL approach.
Figure 14. Accumulated reward along the training process in the EA approach (top) and the DDQL approach (bottom). In the EA, every data point corresponds to a generation, whereas in the DDQL it corresponds to an episode.
Figure 15. ISER (top) and CT (bottom) for each approach and every resolution N.
Figure 16. Transformation of the interest map. The green arrows represent the movements of the peaks of interest.
Figure 17. Result of the optimization with a change in the interest map for DDQL (top) and EA (bottom).
Figure 18. Different initial points (red) in the N = ×2 resolution map. Each ASV is assigned one specific initial point, so the order in the EA chromosomes and each DRL output layer is the same from one generation/episode to another.
Figure 19. Training progression of the maximum AER in the multi-agent case for DDQL (top) and EA (bottom) over 1500 episodes and a maximum of 150 generations.
Figure 20. ISER (top) and computation time (bottom) for different fleet sizes. When the number of agents increases, the EA cannot find good solutions with the same number of samples as the DDQL.
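The two evolutionary operators named in Figures 7 and 8 can be sketched for the action-string encoding of Figure 6 (a toy illustration with a hypothetical action alphabet of 8 moves; not the authors' implementation):

```python
# Two-point crossover and 1-bit mutation over action-string individuals
# (illustrative sketch; the 8-action alphabet is a hypothetical choice).
import random

def two_point_crossover(parent_a, parent_b, rng):
    """Swap the segment between two random cut points."""
    i, j = sorted(rng.sample(range(1, len(parent_a)), 2))
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b

def one_bit_mutation(individual, rng, n_actions=8):
    """Replace a single randomly chosen action with a random one."""
    k = rng.randrange(len(individual))
    mutated = list(individual)
    mutated[k] = rng.randrange(n_actions)
    return mutated

rng = random.Random(42)
a = [0] * 10                 # individual: T = 10 timesteps of action 0
b = [1] * 10
child_a, child_b = two_point_crossover(a, b, rng)
mutant = one_bit_mutation(a, rng)
```

The per-timestep encoding is also what makes the EA's search space grow with trajectory length and fleet size, which is the dimensionality effect the comparison with DDQL examines.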