
Next Issue: Volume 21, April-1
Previous Issue: Volume 21, March-1
Sensors, Volume 21, Issue 6 (March-2 2021) – 328 articles

Cover Story (view full-size image): Ionospheric models calculated from GNSS observations provide a powerful method to study spatial and temporal ionospheric TEC variations, as well as to monitor ionospheric disturbances before earthquakes. Our observation and analysis of TEC variations and disturbances over Japan showed that GNSS observation appears to be very effective for characterizing the ionosphere and forecasting natural disasters. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
22 pages, 7445 KiB  
Article
Evaluating the Effects of Environmental Conditions on Sensed Parameters for Green Areas Monitoring and Smart Irrigation Systems
by Pedro V. Mauri, Lorena Parra, Salima Yousfi, Jaime Lloret and Jose F. Marin
Sensors 2021, 21(6), 2255; https://doi.org/10.3390/s21062255 - 23 Mar 2021
Cited by 1 | Viewed by 2673
Abstract
The irrigation of green areas in cities should be managed appropriately to ensure its sustainability. In large cities, not all green areas can be monitored simultaneously, and the data acquisition time can skew the gathered values. Our purpose is to evaluate which parameter has a lower hourly variation. We included soil parameters (soil temperature and moisture) and plant parameters (canopy temperature and vegetation indexes). Data were gathered at 5 different hours in 11 different experimental plots with variable irrigation and with different grass composition. The results indicate that soil moisture and the Normalized Difference Vegetation Index are the only parameters not affected by the data acquisition time. For soil moisture, the maximum difference was in experimental plot 4, with values of 21% at 10:45 AM and 27% at 8:45 AM. On the other hand, canopy temperature is the most affected parameter, with a mean variation of 15 °C in the morning. The maximum variation was in experimental plot 8, with 19 °C at 8:45 AM and 39 °C at 12:45 PM. Data acquisition time affected the correlation between soil moisture and canopy temperature. We can affirm that data acquisition time has to be included as a variability source. Finally, we conclude that it is vital to consider data acquisition time to ensure proper water distribution for irrigation in cities. Full article
Show Figures
Figure 1: Pictures of (a) used devices and (b) data gathering process.
Figure 2: Mean values of SM for different plot numbers and DATs.
Figure 3: Mean values of SM and LSD intervals for multiple range tests of ANOVA for factor GM.
Figure 4: Mean values of SM and LSD intervals for multiple range tests of ANOVA for factor IR.
Figure 5: Mean values of ST for different plot numbers and DATs.
Figure 6: Mean values of SM and LSD intervals for multiple range tests of ANOVA for factor GM.
Figure 7: Mean values of SM and LSD intervals for multiple range tests of ANOVA for factor DAT.
Figure 8: Mean values of CT for different plot numbers and DATs.
Figure 9: Mean values of CT and LSD intervals for multiple range tests of ANOVA for factor GM.
Figure 10: Mean values of CT and LSD intervals for multiple range tests of ANOVA for factor IR.
Figure 11: Mean values of SM and LSD intervals for multiple range tests of ANOVA for factor DAT.
Figure 12: Mean values of NDVI for different plot numbers and DATs.
Figure 13: Mean values of NDVI and LSD intervals for multiple range tests of ANOVA for factor IR.
Figure 14: Mean values of NDVI and LSD intervals for multiple range tests of ANOVA for factor GM.
Figure 15: Mean values of GA for different plot numbers and DATs.
Figure 16: Mean values of GA and LSD intervals for multiple range tests of ANOVA for factor IR.
Figure 17: Mean values of GA and LSD intervals for multiple range tests of ANOVA for factor GM.
Figure 18: Mean values of GA and LSD intervals for multiple range tests of ANOVA for factor DAT.
Figure 19: Mean values of GGA for different plot numbers and DATs.
Figure 20: Mean values of GGA and LSD intervals for multiple range tests of ANOVA for factor IR.
Figure 21: Mean values of GGA and LSD intervals for multiple range tests of ANOVA for factor GM.
Figure 22: Mean values of GGA and LSD intervals for multiple range tests of ANOVA for factor DAT.
Figure 23: Correlation NDVI and GA indexes with data gathered at different DAT.
Figure 24: Correlation NDVI and GGA indexes with data gathered at different DAT.
Figure 25: Correlation of SM and CT indexes with data gathered at different DAT.
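The abstract above evaluates data acquisition time (DAT) as a source of variability using ANOVA and LSD multiple range tests. A minimal sketch of that kind of check in Python, assuming hypothetical soil-moisture readings grouped by acquisition hour (illustrative values only, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical soil moisture (%) readings grouped by data acquisition time (DAT).
sm_by_dat = {
    "08:45": np.array([27.0, 26.1, 25.4, 26.8]),
    "10:45": np.array([21.3, 22.0, 20.8, 21.9]),
    "12:45": np.array([22.5, 23.1, 21.7, 22.9]),
}

# One-way ANOVA: does DAT explain a significant share of the variance?
f_stat, p_value = stats.f_oneway(*sm_by_dat.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests DAT must be included as a factor when comparing plots.
if p_value < 0.05:
    print("Soil moisture differs across acquisition times; treat DAT as a variability source.")
```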
21 pages, 2133 KiB  
Article
A Feasibility Study of the Use of Smartwatches in Wearable Fall Detection Systems
by Francisco Javier González-Cañete and Eduardo Casilari
Sensors 2021, 21(6), 2254; https://doi.org/10.3390/s21062254 - 23 Mar 2021
Cited by 24 | Viewed by 10700
Abstract
Over the last few years, the use of smartwatches in automatic Fall Detection Systems (FDSs) has aroused great interest in the research of new wearable telemonitoring systems for the elderly. In contrast with other approaches to the problem of fall detection, smartwatch-based FDSs can benefit from the widespread acceptance, ergonomics, low cost, networking interfaces, and sensors that these devices provide. However, the scientific literature has shown that, due to the freedom of movement of the arms, the wrist is usually not the most appropriate position to unambiguously characterize the dynamics of the human body during falls, as many conventional activities of daily living that involve a vigorous motion of the hands may be easily misinterpreted as falls. As also stated by the literature, sensor fusion and multi-point measurements are required to define a robust and reliable method for a wearable FDS. Thus, to avoid false alarms, it may be necessary to combine the analysis of the signals captured by the smartwatch with those collected by some other low-power sensor placed at a point closer to the body’s center of gravity (e.g., on the waist). Under this architecture of Body Area Network (BAN), these external sensing nodes must be wirelessly connected to the smartwatch to transmit their measurements. Nonetheless, the deployment of this networking solution, in which the smartwatch is in charge of processing the sensed data and generating the alarm in case of detecting a fall, may severely impact the performance of the wearable. Unlike many other works (which often neglect the operational aspects of real fall detectors), this paper analyzes the actual feasibility of putting into effect a BAN intended for fall detection on present commercial smartwatches. In particular, the study is focused on evaluating the reduction of battery life that this architecture may cause in the watch that works as the core of the BAN. To this end, we thoroughly assess the energy drain in a prototype of an FDS consisting of a smartwatch and several external Bluetooth-enabled sensing units. In order to identify those scenarios in which the use of the smartwatch could be viable from a practical point of view, the testbed is studied with diverse commercial devices and under different configurations of those elements that may significantly hamper the battery lifetime. Full article
(This article belongs to the Special Issue Wearable and Unobtrusive Technologies for Healthcare Monitoring)
Show Figures
Figure 1: Performance metrics of the smartwatches under test vs. data sampling period (ms): (a) Absolute battery lifetime (in hours), (b) relative battery duration (lifespan per depleted energy unit), (c) number of messages received per depleted energy unit, (d) number of lost messages per depleted energy unit.
Figure 2: Snapshot of the evolution of the current drained for the Skagen Falster 2 (a), Huawei Watch 2 (b) and Mobvoi TicWatch Pro (c) during three particular minutes (from second 120 to 300) of the experiment with a frequency sample of 50 Hz and one connected sensing node.
Figure 3: Performance metrics as a function of the number of connected sensors for a sampling rate of 50 Hz: (a) Battery duration, (b) relative battery lifetime (lifespan per consumed mAh), (c) number of messages received per consumed mAh, and (d) number of lost messages per consumed mAh.
Figure 4: Measured battery duration as a function of the number of (external) connected sensing nodes for a sampling rate of 50 Hz when the internal sensors of the smartwatch (inertial measurement unit (IMU) and heart rate monitor) are all connected (ON), all disconnected (OFF) and when only the inertial sensors (IMU) are connected: (a) Results for Skagen Falster 2, (b) Results for Huawei Watch 2, (c) Results for Mobvoi TicWatch Pro.
Figure 5: Battery duration as a function of the number of connected sensors for a sampling period of 20 ms (sampling rate of 50 Hz) depending on the use of the localization services for the different smartwatches: (a) Skagen Falster 2, (b) Huawei Watch 2, and (c) Mobvoi TicWatch Pro.
Figure 6: Battery life as a function of the period utilized to retransmit the inertial data to the external server via Wi-Fi (the 0 period indicates that the data is sent continuously): (a) Battery duration, (b) relative battery lifetime (lifespan per consumed mAh).
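A rough sketch of how the battery-related metrics reported above (absolute lifetime and lifespan per consumed mAh) can be derived from a logged current drain; the capacity and current values are hypothetical, not measurements from the tested smartwatches:

```python
import numpy as np

# Hypothetical current log (mA), one sample per second over one hour, while the
# watch polls the external sensing nodes of the BAN.
rng = np.random.default_rng(0)
current_ma = rng.normal(loc=95.0, scale=20.0, size=3600)
battery_capacity_mah = 300.0  # assumed nominal capacity of the smartwatch battery

mean_current_ma = current_ma.mean()
consumed_mah = mean_current_ma * (len(current_ma) / 3600.0)  # charge used in the logged hour

# Projected absolute battery lifetime (h) and relative duration (h per consumed mAh).
lifetime_h = battery_capacity_mah / mean_current_ma
relative_h_per_mah = lifetime_h / battery_capacity_mah
print(f"mean drain {mean_current_ma:.1f} mA -> lifetime {lifetime_h:.1f} h, "
      f"{relative_h_per_mah:.3f} h/mAh over {consumed_mah:.1f} mAh logged")
```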
18 pages, 5840 KiB  
Article
Evidence of Negative Capacitance and Capacitance Modulation by Light and Mechanical Stimuli in Pt/ZnO/Pt Schottky Junctions
by Raoul Joly, Stéphanie Girod, Noureddine Adjeroud, Patrick Grysan and Jérôme Polesel-Maris
Sensors 2021, 21(6), 2253; https://doi.org/10.3390/s21062253 - 23 Mar 2021
Cited by 13 | Viewed by 3207
Abstract
We report on the evidence of negative capacitance values in a system consisting of metal-semiconductor-metal (MSM) structures, with Schottky junctions made of zinc oxide thin films deposited by Atomic Layer Deposition (ALD) on top of platinum interdigitated electrodes (IDE). The MSM structures were studied over a wide frequency range, between 20 Hz and 1 MHz. Light and mechanical strain applied to the device modulate positive or negative capacitance and conductance characteristics by tuning the flow of electrons involved in the conduction mechanisms. A complete study was carried out by measuring the capacitance and conductance characteristics under the influence of both dark and light conditions, over an extended range of applied bias voltage and frequency. An impact-loss process linked to the injection of hot electrons at the interface trap states of the metal-semiconductor junction is proposed to be at the origin of the appearance of the negative capacitance values. These negative values are preceded by a local increase of the capacitance associated with the accumulation of trapped electrons at the interface trap states. Thus, we propose a simple device where the capacitance values can be modulated over a wide frequency range via the action of light and strain, while using cleanroom-compatible materials for fabrication. These results open up new perspectives and applications for the miniaturization of highly sensitive and low power consumption environmental sensors, as well as for broadband impedance matching in radio frequency applications. Full article
(This article belongs to the Section Physical Sensors)
Show Figures
Figure 1: (a) Top view representation of a piezotronic strain microsensor. The black dashed line boxes represent a detailed view of the area of the interdigitated electrodes; (b) Piezotronic strain microsensor mounted and bonded on a printed circuit board (PCB), contacted with tungsten tips. The polyimide cantilevers are bent upwards, leading to the generation of a compressive strain; (c) Equivalent circuit model of the metal-semiconductor-metal structure with interdigitated electrodes; (d) Constitutive capacitance and conductance contributions from each Schottky diode.
Figure 2: Characteristics of the microscope light between a wavelength of 400 nm and 1000 nm inducing light conditions during the electrical measurements. The Pt/ZnO/Pt cantilevered chip is placed at a distance of 10 cm from the light source.
Figure 3: Growth rate per cycle (Å/cycle) of ZnO thin films by ALD for different deposition temperatures.
Figure 4: SEM cross-sectional images of ZnO thin films grown on Si substrates at (a) 100 °C, (b) 80 °C and (c) 60 °C. Each ZnO thin film was obtained with 1000 Atomic Layer Deposition (ALD) loops. The scale bar corresponds to 200 nm.
Figure 5: SEM top view images and associated GI-XRD diffraction patterns (ω = 0.3°) of ZnO thin films grown on Si substrates at a deposition temperature of (a) 100 °C, (b) 80 °C and (c) 60 °C. The obtained ZnO thin films were deposited with the same number of ALD loops (1000). The scale bar corresponds to 300 nm.
Figure 6: GI-XRD diffraction patterns (ω = 0.3°) of ZnO thin films grown on top of 75 µm thick polyimide substrates at 100 °C, 80 °C and 60 °C.
Figure 7: GI-XRD diffraction patterns (ω = 0.3°) of ZnO thin films grown on Si substrates coated with a 200 nm thick Pt layer at 100 °C, 80 °C and 60 °C.
Figure 8: Cross-section showing the conformality of the ZnO thin film deposited by ALD on the polyimide substrate and the platinum metal electrodes. A SU8 resin top layer is deposited to protect the ZnO/Pt junction. The scale bar corresponds to 500 nm.
Figure 9: Evolution of the resistivity as a function of the deposition temperature for 150 nm thick ZnO thin films deposited at 60 °C, 80 °C and 100 °C on glass substrates, measured by the four-points probe method.
Figure 10: (C-V) characteristics under dark and light conditions, for different fixed frequencies of the AC modulation superimposed to the DC bias and ranging between 20 Hz and 1 MHz. The voltage was swept between −10 V and 10 V, with a step voltage of 100 mV; (a) log scale under dark conditions; (b) linear scale under dark conditions; (c) log scale under light conditions; (d) linear scale under light conditions. Only the positive capacitance values are displayed in the graphs (a,c) with a log scale.
Figure 11: (C-f) characteristics under dark and light conditions for different fixed bias voltages ranging between 0 V and 10 V. The frequency was varied with a logarithmic sweep between 20 Hz and 1 MHz; (a) log scale under dark conditions; (b) linear scale under dark conditions; (c) log scale under light conditions; (d) linear scale under light conditions. Only the positive capacitance values are displayed in the graphs (a,c) with a log scale.
Figure 12: (G-f) characteristics under dark and light conditions for different fixed bias voltages ranging between 0 V and 10 V. The frequency was varied with a logarithmic sweep between 20 Hz and 1 MHz; (a) log scale under dark conditions; (b) log scale under light conditions.
Figure 13: (I-V) characteristics under both dark and light conditions, measured with DC bias modulation. The current is represented on a logarithmic scale with absolute values. The current measurements were performed with an integration time of 200 ms, a step length of 100 mV, a sweep speed of 200 mV·s⁻¹ and a delay of 300 ms between the step and the measurement. The voltage was swept for dark characteristics between −10 V and 10 V. The voltage was swept for light characteristics between −6 V and 6 V. The arrows indicate the parts of the curves corresponding to the forward and backward sweeps.
Figure 14: (C-V) and (G-V) characteristics under light conditions, for a fixed frequency of 20 Hz, with controlled compressive strain steps imposed on the junctions. The voltage was swept between −10 V and 10 V, with a step voltage of 100 mV; (a) (C-V) characteristics, linear scale; (b) (G-V) characteristics, log scale.
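The (C-V), (C-f) and (G-f) characteristics discussed above are normally derived from the complex admittance measured at each frequency. A generic parallel-model (Cp-Gp) extraction, shown only to make the sign convention behind "negative capacitance" explicit (a textbook formula, not the authors' measurement procedure):

```python
import numpy as np

def parallel_cg(admittance, frequency_hz):
    """Parallel-model capacitance and conductance from a complex admittance
    Y = G + j*omega*C measured at one frequency (standard LCR-meter Cp-Gp model)."""
    omega = 2.0 * np.pi * frequency_hz
    conductance = admittance.real
    capacitance = admittance.imag / omega  # a negative Im(Y) yields a negative capacitance
    return capacitance, conductance

# Illustrative admittance with a negative imaginary part at 20 Hz.
C, G = parallel_cg(admittance=complex(2e-6, -1.5e-6), frequency_hz=20.0)
print(f"C = {C:.3e} F, G = {G:.3e} S")
```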
37 pages, 2333 KiB  
Article
Double-Scale Adaptive Transmission in Time-Varying Channel for Underwater Acoustic Sensor Networks
by Yi Cen, Mingliu Liu, Deshi Li, Kaitao Meng and Huihui Xu
Sensors 2021, 21(6), 2252; https://doi.org/10.3390/s21062252 - 23 Mar 2021
Cited by 4 | Viewed by 2856
Abstract
The communication channel in underwater acoustic sensor networks (UASNs) is time-varying due to the dynamic environmental factors, such as ocean current, wind speed, and temperature profile. Generally, these phenomena occur with a certain regularity, resulting in a similar variation pattern inherited in the communication channels. Based on these observations, the energy efficiency of data transmission can be improved by controlling the modulation method, coding rate, and transmission power according to the channel dynamics. Given the limited computational capacity and energy in underwater nodes, we propose a double-scale adaptive transmission mechanism for the UASNs, where the transmission configuration will be determined by the predicted channel states adaptively. In particular, the historical channel state series will first be decomposed into large-scale and small-scale series and then be predicted by a novel k-nearest neighbor search algorithm with sliding window. Next, an energy-efficient transmission algorithm is designed to solve the problem of long-term modulation and coding optimization. In particular, a quantitative model is constructed to describe the relationship between data transmission and the buffer threshold used in this mechanism, which can then analyze the influence of buffer threshold under different channel states or data arrival rates theoretically. Finally, numerical simulations are conducted to verify the proposed schemes, and results show that they can achieve good performance in terms of channel prediction and energy consumption with moderate buffer length. Full article
(This article belongs to the Special Issue Underwater Wireless Sensor Networks)
Show Figures
Figure 1: Keweenaw Waterway experiment: Average signal-to-noise rates (SNR) at the receiver.
Figure 2: Multi-hop clustered underwater acoustic sensor network.
Figure 3: Double-scale adaptive transmission mechanism.
Figure 4: Large-scale epoch and small-scale slot.
Figure 5: Channel state and large-scale channel state prediction method. (a) Original channel state. (b) Large-scale channel state and diagram of k-nearest neighbor algorithm with sliding window. v_3 is the test vector. v_1 and v_2 are nearest neighbors chosen from training vectors.
Figure 6: Channel state series decomposition.
Figure 7: Rearrangement process for a transmission modes chromosome.
Figure 8: A linearly varying channel state series.
Figure 9: Transmission action when the buffer threshold is long enough (B_c ≥ 2B_1), for case 1.
Figure 10: Transmission action when 2λt_1 ≤ B_c < 2B_1, for case 2.
Figure 11: Transmission action when B_c < 2λt_1, for case 3.
Figure 12: Predicted large-scale channel state and real large-scale channel state of Data 1.
Figure 13: Predicted large-scale channel state and real large-scale channel state of Data 2.
Figure 14: Root mean square error (RMSE) of 1-step ahead prediction with different lengths of sliding window and stored series.
Figure 15: RMSE of 5-step ahead prediction with different lengths of sliding window and stored series.
Figure 16: RMSE of 15-step ahead prediction with different lengths of sliding window and stored series.
Figure 17: RMSE of 25-step ahead prediction with different lengths of sliding window and stored series.
Figure 18: Predicted small-scale channel state of Data 1 by the decomposition-based prediction model and auto-regressive (AR) prediction.
Figure 19: Predicted small-scale channel state of Data 2 by the decomposition-based prediction model and AR prediction.
Figure 20: RMSE of small-scale channel state prediction with different input vector lengths.
Figure 21: Scheduled transmission mode according to predicted large-scale channel state for Data 1.
Figure 22: Scheduled transmission mode according to predicted large-scale channel state for Data 2.
Figure 23: Average energy cost per kb for comparative strategies (Data 1).
Figure 24: Average energy cost per kb for comparative strategies (Data 2).
Figure 25: Average buffer length for comparative strategies (Data 1).
Figure 26: Average buffer length for comparative strategies (Data 2).
Figure 27: Average transmission delay for comparative strategies (Data 1).
Figure 28: Average transmission delay for comparative strategies (Data 2).
Figure 29: Simulation results of the impact of buffer threshold and data arrival rate on the average energy cost.
Figure 30: Theoretical results of the impact of buffer threshold and data arrival rate on the average energy cost.
Figure 31: Simulation results of the impact of buffer threshold and data arrival rate on the average buffer length.
Figure 32: Theoretical results of the impact of buffer threshold and data arrival rate on the average buffer length.
Figure 33: Simulation results of the impact of buffer threshold and data arrival rate on the average transmission delay.
Figure 34: Theoretical results of the impact of buffer threshold and data arrival rate on the average transmission delay.
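A minimal sketch of the prediction step described in the abstract, assuming a moving-average split into large-scale and small-scale components (the paper's actual decomposition and parameters are not reproduced here) and a plain k-nearest-neighbor search over sliding windows of the historical series:

```python
import numpy as np

def knn_sliding_window_predict(series, window, k):
    """Predict the next value of a channel-state series: use the most recent
    `window` samples as the test vector, find its k nearest historical windows
    (Euclidean distance), and average the values that followed them."""
    test = series[-window:]
    candidates = []
    for start in range(len(series) - window):  # only windows with a known successor
        hist = series[start:start + window]
        candidates.append((np.linalg.norm(hist - test), series[start + window]))
    candidates.sort(key=lambda c: c[0])
    return float(np.mean([succ for _, succ in candidates[:k]]))

# Hypothetical SNR-like series: a slow trend plus small-scale fluctuations.
rng = np.random.default_rng(1)
t = np.arange(400)
snr = 15 + 3 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.5, t.size)

# Assumed decomposition: moving average as the large-scale component.
large_scale = np.convolve(snr, np.ones(10) / 10, mode="valid")
print("predicted next large-scale state:", knn_sliding_window_predict(large_scale, window=20, k=3))
```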
14 pages, 6779 KiB  
Article
Development of Nationwide Road Quality Map: Remote Sensing Meets Field Sensing
by Sadra Karimzadeh and Masashi Matsuoka
Sensors 2021, 21(6), 2251; https://doi.org/10.3390/s21062251 - 23 Mar 2021
Cited by 6 | Viewed by 3321
Abstract
In this study, we measured the in situ international roughness index (IRI) for first-degree roads spanning more than 1300 km in East Azerbaijan Province, Iran, using a quarter car (QC). Since road quality mapping with in situ measurements is a costly and time-consuming task, we also developed new equations for constructing a road quality proxy map (RQPM) using discriminant analysis and multispectral information from high-resolution Sentinel-2 images, which we calibrated using the in situ data on the basis of geographic information system (GIS) data. The developed equations using optimum index factor (OIF) and norm R provide a valuable tool for creating proxy maps and mitigating hazards at the network scale, not only for primary roads but also for secondary roads, and for reducing the costs of road quality monitoring. The overall accuracy and kappa coefficient of the norm R equation for road classification in East Azerbaijan province are 65.0% and 0.59, respectively. Full article
(This article belongs to the Special Issue On-Board and Remote Sensors in Intelligent Vehicles)
Show Figures
Figure 1: (a) Road network map of Iran. The black box denotes the study area. (b) Road network of East Azerbaijan Province, Iran, the study area.
Figure 2: (a) Setup for conducting international roughness index (IRI) measurements in East Azerbaijan Province using a quarter car (QC) and a smartphone. (b) The resulting back-and-forth in situ IRI measurements in East Azerbaijan Province of Iran from Tabriz along the primary roads leading to the other counties. Black boxes show the areas covered by Sentinel-2 images of the province. Dashed boxes indicate the images acquired from Sentinel-2A, and solid black boxes indicate the images acquired from Sentinel-2B.
Figure 3: (a) Histogram of IRI measurements for the study area. (b) Empirical cumulative distribution of IRI measurements. (c) Boxplot of IRI measurements. The "+" sign indicates the mean IRI value. (d) Probability–probability (P-P) plot of the empirical cumulative distribution versus the theoretical cumulative distribution.
Figure 4: Workflow of road quality mapping based on in situ and satellite datasets.
Figure 5: Binary road quality map based on a Jenks classification for IRI measurements.
Figure 6: Overall accuracy of the OIF and norm R results using different window sizes.
Figure 7: (a) Binary road quality proxy map (RQPM) deduced from the OIF and discriminant analysis. (b) Binary RQPM deduced from the norm R and discriminant analysis.
Figure 8: Nationwide RQPM deduced from the discriminant model of norm R.
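The optimum index factor (OIF) used above to rank band combinations has a standard closed form: the sum of the band standard deviations divided by the sum of the absolute pairwise correlations. A sketch with synthetic band samples (the actual Sentinel-2 bands and road pixels of the study are not used here):

```python
import numpy as np
from itertools import combinations

def optimum_index_factor(bands):
    """OIF for a triplet of image bands: sum of band standard deviations divided
    by the sum of the absolute pairwise correlation coefficients."""
    flat = [np.ravel(b).astype(float) for b in bands]
    std_sum = sum(b.std() for b in flat)
    corr_sum = sum(abs(np.corrcoef(flat[i], flat[j])[0, 1])
                   for i, j in combinations(range(3), 2))
    return std_sum / corr_sum

# Synthetic reflectance samples standing in for Sentinel-2 bands over road pixels.
rng = np.random.default_rng(2)
bands = {name: rng.normal(0.2, 0.05, 500) for name in ["B2", "B3", "B4", "B8", "B11"]}
ranked = sorted(((optimum_index_factor([bands[a], bands[b], bands[c]]), (a, b, c))
                 for a, b, c in combinations(bands, 3)), reverse=True)
print("highest-OIF triplet:", ranked[0][1])
```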
18 pages, 1545 KiB  
Article
Self-Difference Convolutional Neural Network for Facial Expression Recognition
by Leyuan Liu, Rubin Jiang, Jiao Huo and Jingying Chen
Sensors 2021, 21(6), 2250; https://doi.org/10.3390/s21062250 - 23 Mar 2021
Cited by 9 | Viewed by 2966
Abstract
Facial expression recognition (FER) is a challenging problem due to the intra-class variation caused by subject identities. In this paper, a self-difference convolutional network (SD-CNN) is proposed to address the intra-class variation issue in FER. First, the SD-CNN uses a conditional generative adversarial network to generate the six typical facial expressions for the same subject in the testing image. Second, six compact and light-weighted difference-based CNNs, called DiffNets, are designed for classifying facial expressions. Each DiffNet extracts a pair of deep features from the testing image and one of the six synthesized expression images, and compares the difference between the deep feature pair. In this way, any potential facial expression in the testing image has an opportunity to be compared with the synthesized “Self”—an image of the same subject with the same facial expression as the testing image. As most of the self-difference features of the images with the same facial expression gather tightly in the feature space, the intra-class variation issue is significantly alleviated. The proposed SD-CNN is extensively evaluated on two widely-used facial expression datasets: CK+ and Oulu-CASIA. Experimental results demonstrate that the SD-CNN achieves state-of-the-art performance with accuracies of 99.7% on CK+ and 91.3% on Oulu-CASIA, respectively. Moreover, the model size of the online processing part of the SD-CNN is only 9.54 MB (1.59 MB ×6), which enables the SD-CNN to run on low-cost hardware. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures
Figure 1: Visualization of deep features output by the fine-tuned VGG-face [8] (a) and the self-difference features extracted by the proposed self-difference convolutional network (SD-CNN) (b). Each dot represents the deep feature extracted from a face image. Different shapes denote different subject identities, and different colors represent different facial expressions.
Figure 2: Framework of the proposed facial expression recognition method, which at its core consists of two modules, i.e., the facial expression generator and the facial expression classifier. The facial expression generator synthesizes photo-realistic images with the six typical facial expressions from an input face image under an arbitrary facial expression. The facial expression classifier uses six DiffNets for facial expression classification.
Figure 3: Network structure of the facial expression generator.
Figure 4: Network structure of the DiffNet for facial expression classification.
Figure 5: Example samples randomly selected from the CK+ [7] and Oulu-CASIA [14] datasets.
Figure 6: Confusion matrix for the CK+ dataset.
Figure 7: Testing samples in CK+ that are misclassified by our method.
Figure 8: Visualization of deep features output by the VGG-face [8] (left) and our method (right) on the CK+ dataset. Each dot represents the deep feature extracted from a testing sample.
Figure 9: Confusion matrix for the Oulu-CASIA dataset.
Figure 10: Testing samples in Oulu-CASIA that are misclassified by our method.
Figure 11: Visualization of deep features output by the VGG-face [8] (a) and our method (b) on the Oulu-CASIA dataset. Each dot represents the deep feature extracted from a testing sample.
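The core idea above (classifying the difference between deep features of the test image and of a synthesized expression image of the same subject) can be sketched as a small PyTorch module. The single shared backbone and the layer sizes below are illustrative assumptions, not the paper's DiffNet architecture:

```python
import torch
import torch.nn as nn

class DiffNet(nn.Module):
    """Difference-based classifier sketch: a shared backbone embeds the test image
    and a synthesized expression image, and the head classifies their feature
    difference, so identity-specific appearance largely cancels out."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, test_img, synth_img):
        diff = self.backbone(test_img) - self.backbone(synth_img)  # self-difference feature
        return self.head(diff)

# Random tensors stand in for aligned 64x64 face crops.
net = DiffNet()
logits = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 6])
```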
18 pages, 6224 KiB  
Article
Advantages of IoT-Based Geotechnical Monitoring Systems Integrating Automatic Procedures for Data Acquisition and Elaboration
by Andrea Carri, Alessandro Valletta, Edoardo Cavalca, Roberto Savi and Andrea Segalini
Sensors 2021, 21(6), 2249; https://doi.org/10.3390/s21062249 - 23 Mar 2021
Cited by 17 | Viewed by 4079
Abstract
Monitoring instrumentation plays a major role in the study of natural phenomena and analysis for risk prevention purposes, especially when facing the management of critical events. Within the geotechnical field, data collection has traditionally been performed with a manual approach characterized by time-expensive on-site investigations and monitoring devices activated by an operator. Due to these reasons, innovative instruments have been developed in recent years in order to provide a complete and more efficient system thanks to technological improvements. This paper aims to illustrate the advantages deriving from the application of a monitoring approach, named Internet of natural hazards (IoNH), which relies on Internet of things principles applied to monitoring technologies. One of the main features of the system is the ability of automatic tools to acquire and elaborate data independently, which has led to the development of dedicated software and web-based visualization platforms for faster, more efficient and accessible data management. Additionally, automatic procedures play a key role in the implementation of early warning systems with a near-real-time approach, providing a valuable tool to the decision-makers and authorities responsible for emergency management. Moreover, the possibility of recording a large number of different parameters and physical quantities with high sampling frequency makes it possible to perform meaningful statistical analyses and identify cause–effect relationships. A series of examples deriving from different case studies are reported in this paper in order to present the practical implications of applying the IoNH approach to geotechnical monitoring. Full article
(This article belongs to the Special Issue Sensors and Measurements in Geotechnical Engineering)
Show Figures
Figure 1: Internet of natural hazards (IoNH) approach applied to the modular underground monitoring system (MUMS), with data collection, transmission, database storage, automatic elaboration, results representation, and alarm activation.
Figure 2: IoNH approach applied to a rockfall event and its impact on protection barriers, followed by trigger activation, data collection and transmission, database storage, data processing, and activation of warning procedures.
Figure 3: Battery voltage monitoring over time, with three different charge levels.
Figure 4: Raw data analysis, focusing on a spike event (a) and an actual displacement (b), performed by using an 11-element data window centered on the continuous red line points, ranging between the green line and the orange line. Data transmissions are represented by red, dashed blue and dashed red lines [27].
Figure 5: Tilt Link HR 3D sensor, equipped with a 3D microelectromechanical system (MEMS) and a 2D electrolytic cell placed on the same electronic board, with instrumental axes aligned on the horizontal plane.
Figure 6: Comparison between local differential displacements recorded by (a) MEMS and (b) electrolytic cells along the maximum grade direction and their evolution over time at a depth of 13 m ((c) and (d), respectively).
Figure 7: Comparison between rainfall height, water level variations and displacements recorded by a MUMS-based automatic inclinometer on a landslide in northern Italy.
Figure 8: Trigger activations and related steel post tilt values recorded by MEMS and electro-level sensors, together with the mountain brace load identified by the load cell sensor.
Figure 9: Comparison between tilt data and temperature recorded by a MEMS sensor placed in a wall-mounted tiltmeter.
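As a toy illustration of the near-real-time automatic elaboration mentioned above, an early-warning check might compare recent differential displacement against a threshold before notifying decision-makers. The window length and threshold below are invented for the example, not values from the case studies:

```python
from statistics import mean

def warning_triggered(displacements_mm, window=6, threshold_mm=2.0):
    """Trigger a warning if the mean of the latest readings exceeds the mean of the
    previous window by more than a configured differential-displacement threshold."""
    if len(displacements_mm) < 2 * window:
        return False
    recent = mean(displacements_mm[-window:])
    previous = mean(displacements_mm[-2 * window:-window])
    return (recent - previous) > threshold_mm

# Hypothetical hourly displacements (mm) from an automatic in-place inclinometer.
readings = [0.1, 0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 1.2, 2.1, 3.0, 3.8, 4.5]
if warning_triggered(readings):
    print("Differential displacement above threshold: activate warning procedure")
```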
22 pages, 7904 KiB  
Article
Automatic Measurement of Morphological Traits of Typical Leaf Samples
by Xia Huang, Shunyi Zheng and Li Gui
Sensors 2021, 21(6), 2247; https://doi.org/10.3390/s21062247 - 23 Mar 2021
Cited by 2 | Viewed by 2553
Abstract
It is still a challenging task to automatically measure plants. A novel method for automatic plant measurement based on a hand-held three-dimensional (3D) laser scanner is proposed. The objective of this method is to automatically select typical leaf samples and estimate their morphological traits from differently occluded live plants. The method mainly includes data acquisition and processing. Data acquisition obtains a high-precision 3D mesh model of the plant that is reconstructed in real time during data scanning by a hand-held 3D laser scanner (ZGScan 717, made by Zhongguan Automation Technology, Wuhan, China). Data processing mainly includes typical leaf sample extraction and morphological trait estimation based on a multi-level region growing segmentation method using two leaf shape models. Four scale-related traits and six corresponding scale-invariant traits can be automatically estimated. Experiments on four groups of differently canopy-occluded plants are conducted. Experiment results show that for plants with different canopy occlusions, 94.02% of typical leaf samples can be scanned well and 87.61% of typical leaf samples can be automatically extracted. The automatically estimated morphological traits are well correlated with the manually measured values (EF, the modeling efficiency, above 0.8919 for scale-related traits and above 0.7434 for scale-invariant traits). It takes an average of 196.37 seconds (186.08 seconds for data scanning, 5.95 seconds for 3D plant model output, and 4.36 seconds for data processing) for a plant measurement. The robustness and low time cost of the proposed method for different canopy-occluded plants show potential applications for real-time plant measurement and high-throughput plant phenotyping. Full article
(This article belongs to the Section Sensing and Imaging)
Show Figures
Figure 1: Data acquisition process. (a) ZGScan 717; (b) The generated 3D mesh model of the plant; (c) The 3D mesh model of a leaf.
Figure 2: The removal of non-plant. (a) An example of the generated 3D mesh model of the plant; (b) the detection of the table; (c) the filtering of non-plant; (d) the plant after the non-plant removal.
Figure 3: Visualization of four scale-related morphological traits (leaf area, perimeter, length, and width) on the 3D-triangle mesh of a leaf.
Figure 4: The multi-level region growing segmentation method based on the shape model.
Figure 5: Schematic diagram of the multi-level region growing segmentation. (a) Initial segmentation results (ε_a = 1.5ς and ε_b = 1.5ρ); (b) the data for the second segmentation; (c) the data for the third segmentation; (d) the automatically selected leaves after the initial segmentation based on the initial shape models; (e) the automatically selected leaves after the second segmentation (ε_a = 1.45ς and ε_b = 1.45ρ) based on the second shape models; (f) the automatically selected typical leaf samples after the final segmentation based on the final shape models. The red boxes mark a cluster considered as a typical leaf sample during the multi-level segmentation process but removed at last.
Figure 6: The automatic segmentation results of typical leaf samples of different canopy-occluded plants. (a) No canopy occlusion (group 1); (b) a little canopy occlusion (group 2); (c) medium canopy occlusion (group 3); (d) heavy canopy occlusion (group 4). The red boxes mark the new-born leaves, and the yellow boxes mark the leaves with incomplete scanning data.
Figure 7: Regression analyses between automatic and manual measurements of scale-related traits of plants with no occlusion (plants in group 1).
Figure 8: Regression analyses between automatic and manual measurements of scale-related traits of plants with a little occlusion (plants in group 2).
Figure 9: Regression analyses between automatic and manual measurements of scale-related traits of plants with medium occlusion (plants in group 3).
Figure 10: Regression analyses between automatic and manual measurements of scale-related traits of plants with heavy occlusion (plants in group 4).
Figure 11: Regression analyses between automatic and manual measurements of scale-related traits of plants with different occlusions (plants in all groups).
Figure 12: Segmentation results of two segmentation methods and our proposed method. Method A is the Euclidean Clustering method in papers [24,29] and B is the Facet Region Growing method in paper [27]. Plants in (a), (b), (c), and (d) are plants with no, a little, medium, and heavy canopy occlusion. The yellow boxes mark some segmented results of leaves with stems using the Euclidean Clustering method. The red boxes mark some segmented results of attached and overlapped leaves using the Facet Region Growing method.
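Of the scale-related traits estimated above, leaf area is the most direct to compute from a segmented 3D triangle mesh: it is the sum of the triangle areas. A generic sketch (not the authors' implementation), using a toy two-triangle "leaf":

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Surface area of a triangle mesh: 0.5 * |e1 x e2| summed over all faces."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    e1 = v[f[:, 1]] - v[f[:, 0]]
    e2 = v[f[:, 2]] - v[f[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

# Toy "leaf": two triangles forming a 1 cm x 2 cm rectangle in the XY plane.
vertices = [[0, 0, 0], [1, 0, 0], [1, 2, 0], [0, 2, 0]]
faces = [[0, 1, 2], [0, 2, 3]]
print("leaf area:", mesh_surface_area(vertices, faces), "cm^2")  # 2.0
```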
14 pages, 1156 KiB  
Article
Early Detection of Freezing of Gait during Walking Using Inertial Measurement Unit and Plantar Pressure Distribution Data
by Scott Pardoel, Gaurav Shalin, Julie Nantel, Edward D. Lemaire and Jonathan Kofman
Sensors 2021, 21(6), 2246; https://doi.org/10.3390/s21062246 - 23 Mar 2021
Cited by 38 | Viewed by 4697
Abstract
Freezing of gait (FOG) is a sudden and highly disruptive gait dysfunction that appears in mid to late-stage Parkinson’s disease (PD) and can lead to falling and injury. A system that predicts freezing before it occurs or detects freezing immediately after onset would generate an opportunity for FOG prevention or mitigation and thus enhance safe mobility and quality of life. This research used accelerometer, gyroscope, and plantar pressure sensors to extract 861 features from walking data collected from 11 people with FOG. Minimum-redundancy maximum-relevance and Relief-F feature selection were performed prior to training boosted ensembles of decision trees. The binary classification models identified Total-FOG or No FOG states, wherein the Total-FOG class included data windows from 2 s before the FOG onset until the end of the FOG episode. Three feature sets were compared: plantar pressure, inertial measurement unit (IMU), and both plantar pressure and IMU features. The plantar-pressure-only model had the greatest sensitivity and the IMU-only model had the greatest specificity. The best overall model used the combination of plantar pressure and IMU features, achieving 76.4% sensitivity and 86.2% specificity. Next, the Total-FOG class components were evaluated individually (i.e., Pre-FOG windows, Freeze windows, transition windows between Pre-FOG and Freeze). The best model detected windows that contained both Pre-FOG and FOG data with 85.2% sensitivity, which is equivalent to detecting FOG less than 1 s after the freeze began. Windows of FOG data were detected with 93.4% sensitivity. The IMU and plantar pressure feature-based model slightly outperformed models that used data from a single sensor type. The model achieved early detection by identifying the transition from Pre-FOG to FOG while maintaining excellent FOG detection performance (93.4% sensitivity). Therefore, if used as part of an intelligent, real-time FOG identification and cueing system, even if the Pre-FOG state were missed, the model would perform well as a freeze detection and cueing system that could improve the mobility and independence of people with PD during their daily activities. Full article
(This article belongs to the Special Issue Sensors and Sensing Technology Applied in Parkinson Disease)
Show Figures
Figure 1: Experiment walking path.
Figure 2: Sensor systems used in data collection: (a) FScan pressure-sensing insole, (b) Shimmer3 inertial measurement unit (IMU) sensor, (c) diagram of IMU placement, and (d) photograph of insole and IMU systems worn on body.
Figure 3: Freezing of gait (FOG) episode windowing scheme example. Windows (W) 1–6 are “No-FOG”, Windows 7–11 are “Pre-FOG”, Windows 12–16 overlap the Pre-FOG and FOG segments and are thus “Pre-FOG-Transition”, Window 17 is entirely in the FOG segment and is “FOG”, and Windows 18–23 extend or entirely occur beyond the end of the FOG episode and are “No-FOG”.
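The pipeline above (windowed sensor data, per-window features, boosted ensembles of decision trees, sensitivity/specificity evaluation) can be outlined with scikit-learn. Everything below is synthetic and simplified; the 861 features, the mRMR/Relief-F selection, and the Total-FOG windowing scheme are not reproduced:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in: 400 one-second windows of a single signal, label 1 = Total-FOG.
rng = np.random.default_rng(3)
n_windows, samples_per_window = 400, 100
labels = rng.integers(0, 2, n_windows)
signals = rng.normal(0, 1, (n_windows, samples_per_window)) + labels[:, None] * 0.6

# Per-window features: mean, standard deviation, and range.
features = np.column_stack([signals.mean(axis=1), signals.std(axis=1),
                            np.ptp(signals, axis=1)])

split = 300  # simple train/test split by window index
clf = GradientBoostingClassifier().fit(features[:split], labels[:split])
tn, fp, fn, tp = confusion_matrix(labels[split:], clf.predict(features[split:])).ravel()
print(f"sensitivity {tp / (tp + fn):.2f}, specificity {tn / (tn + fp):.2f}")
```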
16 pages, 7229 KiB  
Communication
Bearing Fault Diagnosis Based on Energy Spectrum Statistics and Modified Mayfly Optimization Algorithm
by Yuhu Liu, Yi Chai, Bowen Liu and Yiming Wang
Sensors 2021, 21(6), 2245; https://doi.org/10.3390/s21062245 - 23 Mar 2021
Cited by 27 | Viewed by 2579
Abstract
This study proposes a novel resonance demodulation frequency band selection method named the initial center frequency-guided filter (ICFGF) to diagnose bearing faults. The proposed technique performs better at resisting interference from random impulses. More explicitly, the ICFGF can be summarized in two steps. In the first step, a variance statistic index is applied to evaluate the energy spectrum distribution, which can adaptively determine the center frequency of the fault impulse and suppress the interference from random impulses effectively. In the second step, a modified mayfly optimization algorithm (MMA) is applied to search for the optimal resonance demodulation frequency band based on the center frequency from the first step, with faster convergence. Finally, the filtered signal is processed by the squared envelope spectrum technology. Results of the proposed method for signals from an outer fault bearing and a ball fault bearing indicate that the ICFGF works well in extracting bearing fault features. Furthermore, compared with some other methods, including fast kurtogram, ensemble empirical mode decomposition, and conditional variance-based selector technology, the ICFGF can extract the fault characteristics more accurately. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures
Figure 1: Sample for 20/60/20: area left (A_L), area middle (A_M), and area right (A_R) corresponding to 20%, 60%, and 20%, respectively.
Figure 2: Sample for seven parts: A_1, A_2, A_3, A_4, A_5, A_6, and A_7 corresponding to 0.4%, 5.8%, 24.6%, 38.4%, 24.6%, 5.8%, and 0.4%, respectively.
Figure 3: The results for the simulation signal: (a) time waveform, (b) FFT spectrum, and (c) CV vs. f_c.
Figure 4: The results of CV index for the 1000 simulation signals: (a) f_c and (b) the probability density estimate for f_c.
Figure 5: The initial center frequency-guided filter (ICFGF) flowchart.
Figure 6: The test rig of the outer fault bearing.
Figure 7: The raw signal for the outer fault bearing: (a) time waveform and (b) FFT spectrum.
Figure 8: CV vs. f_c for the outer fault signal.
Figure 9: The results by ICFGF for the outer fault signal: (a) filtered signal and (b) SES.
Figure 10: The results of every iteration for the outer fault signal: red line for the MMA and blue line for the raw MA.
Figure 11: The results by FK for the outer fault signal: (a) spectrum kurtosis and (b) the filtered signal and its SES.
Figure 12: The results by EEMD for the outer fault signal: (a) intrinsic mode function (IMF) and (b) the SES of the second IMF.
Figure 13: The results by CVB technology for the outer fault signal: (a) time–frequency spectrum, (b) CVB vs. frequency, and (c) SES.
Figure 14: The test rig for the ball fault bearing.
Figure 15: The raw signal of the ball fault bearing: (a) time waveform and (b) FFT spectrum.
Figure 16: CV vs. f_c for the ball fault signal.
Figure 17: The results by the ICFGF for the ball fault signal: (a) filtered signal and (b) SES.
Figure 18: The results of each iteration for the ball fault signal: red line for the MMA and blue line for the raw MA.
Figure 19: The results by FK for the ball fault signal: (a) spectrum kurtosis and (b) the filtered signal and its SES.
Figure 20: The results by EEMD for the ball fault signal: (a) IMFs and (b) the SES of the first IMF.
Figure 21: The results by CVB technology for the outer fault signal: (a) time–frequency spectrum, (b) CVB vs. frequency, and (c) SES for the filtered signal.
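The final step named in the abstract, the squared envelope spectrum (SES) of the band-pass-filtered signal, is a standard construction and can be sketched with SciPy. The center frequency and bandwidth are fixed by hand here; the CV-guided selection and the MMA band search of the ICFGF are not reproduced:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def squared_envelope_spectrum(x, fs, f_center, bandwidth):
    """Band-pass around the chosen resonance band, take the squared envelope via the
    Hilbert transform, and return the spectrum of that envelope."""
    nyq = fs / 2
    b, a = butter(4, [(f_center - bandwidth / 2) / nyq, (f_center + bandwidth / 2) / nyq],
                  btype="bandpass")
    env_sq = np.abs(hilbert(filtfilt(b, a, x))) ** 2
    env_sq -= env_sq.mean()  # drop the DC component
    freqs = np.fft.rfftfreq(len(env_sq), 1 / fs)
    return freqs, np.abs(np.fft.rfft(env_sq)) / len(env_sq)

# Synthetic outer-race-like fault: a 3 kHz resonance excited every 1/107 s plus noise.
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
impulses = (np.arange(t.size) % int(fs / 107) == 0).astype(float)
ringing = np.exp(-800 * t[:200]) * np.sin(2 * np.pi * 3000 * t[:200])
x = np.convolve(impulses, ringing, mode="same") + 0.2 * np.random.default_rng(4).normal(size=t.size)

freqs, ses = squared_envelope_spectrum(x, fs, f_center=3000, bandwidth=1000)
print("dominant envelope frequency (Hz):", freqs[1:200][np.argmax(ses[1:200])])
```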
14 pages, 2278 KiB  
Communication
Development of an Improved Rapidly Exploring Random Trees Algorithm for Static Obstacle Avoidance in Autonomous Vehicles
by S. M. Yang and Y. A. Lin
Sensors 2021, 21(6), 2244; https://doi.org/10.3390/s21062244 - 23 Mar 2021
Cited by 21 | Viewed by 3828
Abstract
Safe path planning for obstacle avoidance in autonomous vehicles has been developed. Based on the Rapidly Exploring Random Trees (RRT) algorithm, an improved algorithm integrating path pruning, smoothing, and optimization with geometric collision detection is shown to improve planning efficiency. Path pruning, a prerequisite to path smoothing, is performed to remove the redundant points generated by the random trees for a new path, without colliding with the obstacles. Path smoothing is performed to modify the path so that it becomes continuously differentiable with curvature implementable by the vehicle. Optimization is performed to select a “near”-optimal path of the shortest distance among the feasible paths for motion efficiency. In the experimental verification, both a pure pursuit steering controller and a proportional–integral speed controller are applied to keep an autonomous vehicle tracking the planned path predicted by the improved RRT algorithm. It is shown that the vehicle can successfully track the path efficiently and reach the destination safely, with an average tracking control deviation of 5.2% of the vehicle width. The path planning is also applied to lane changes, and the average deviation from the lane during and after lane changes remains within 8.3% of the vehicle width. Full article
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1: Illustration of the RRT algorithm with the tree expanding from the start x_start on the upper left to the destination x_dest on the upper right. The thin lines and points are the roots of the searching tree and the thick line is the path predicted by the algorithm; the predicted path has many redundant points, unnecessary turns, and small curvatures infeasible for vehicle operation.
Figure 2: (a) Illustration of the pruning process, taking three consecutive path points P1, P2, and P3 and checking whether the new connection P1 to P3 is safe; if so, P2 is redundant, and if not, P2 remains a path point. (b) The path after pruning (dashed line) compared with that of the RRT algorithm (dotted line).
Figure 3: (a) The obstacle defined by a rectangular boundary with safety margin d_s. Collision detection when two checkpoints P1 and P3 are (b) on the same side or (c) on different sides of the obstacle; the latter requires calculation of the angles A_co1, A_co2, and A_cp.
Figure 4: (a) Use of two control points to smooth each turning point in the path with a Bézier curve and (b) comparison of the paths predicted by the RRT algorithm (zigzag) and by the improved RRT algorithm with pruning and smoothing (smooth).
Figure 5: (a) The improved RRT algorithm planning 10 paths during optimization, and (b) among the 10 paths, the shortest one (solid line) is much more efficient than the path taken by the RRT algorithm (dotted line) in reaching the destination.
Figure 6: Path planning results in four different obstacle avoidance environments (a–d) validating the effectiveness and efficiency of the improved RRT algorithm.
Figure 7: (a) The kinematic model of vehicle dynamics with steering angle φ, wheelbase L, distance l from the rear wheel axle to the forward anchor point, forward drive look-ahead distance L_f, and heading ρ of the look-ahead point on a path with radius of curvature R. (b) Experimental results of the autonomous vehicle tracking the planned path in the obstacle 1 (left) and obstacle 2 (right) environments, where the tracked trajectory (thin line) follows the path predicted by the improved RRT algorithm (solid line) in both environments.
Figure 8: (a) The trajectory error defined by the distance between the path point and the vehicle's trajectory point, and (b) the tracking error in the obstacle 1 environment (above) and the obstacle 2 environment (below).
Figure 9: (a) Experimental verification of the improved RRT algorithm in lane change and lane keeping of an autonomous vehicle, (b) the vehicle trajectory in lane change and lane keeping, and (c) the trajectory discrepancy within 8.3% of vehicle width after the lane change.
18 pages, 1549 KiB  
Article
Recovery of Distal Arm Movements in Spinal Cord Injured Patients with a Body-Machine Interface: A Proof-of-Concept Study
by Camilla Pierella, Elisa Galofaro, Alice De Luca, Luca Losio, Simona Gamba, Antonino Massone, Ferdinando A. Mussa-Ivaldi and Maura Casadio
Sensors 2021, 21(6), 2243; https://doi.org/10.3390/s21062243 - 23 Mar 2021
Cited by 6 | Viewed by 3113
Abstract
Background: The recovery of upper limb mobility and functions is essential for people with cervical spinal cord injuries (cSCI) to maximize independence in daily activities and ensure a successful return to normality. The rehabilitative path should include a thorough neuromotor evaluation and personalized [...] Read more.
Background: The recovery of upper limb mobility and functions is essential for people with cervical spinal cord injuries (cSCI) to maximize independence in daily activities and ensure a successful return to normality. The rehabilitative path should include a thorough neuromotor evaluation and personalized treatments aimed at recovering motor functions. Body-machine interfaces (BoMI) have been proven to be capable of harnessing residual joint motions to control objects like computer cursors and virtual or physical wheelchairs and to promote motor recovery. However, their therapeutic application has still been limited to shoulder movements. Here, we expanded the use of BoMI to promote the whole arm’s mobility, with a special focus on elbow movements. We also developed an instrumented evaluation test and a set of kinematic indicators for assessing residual abilities and recovery. Methods: Five inpatient cSCI subjects (four acute, one chronic) participated in a BoMI treatment complementary to their standard rehabilitative routine. The subjects wore a BoMI with sensors placed on both proximal and distal arm districts and practiced for 5 weeks. The BoMI was programmed to promote symmetry between right and left arms use and the forearms’ mobility while playing games. To evaluate the effectiveness of the treatment, the subjects’ kinematics were recorded while performing an evaluation test that involved functional bilateral arms movements, before, at the end, and three months after training. Results: At the end of the training, all subjects learned to efficiently use the interface despite being compelled by it to engage their most impaired movements. The subjects completed the training with bilateral symmetry in body recruitment, already present at the end of the familiarization, and they increased the forearm activity. The instrumental evaluation confirmed this. The elbow motion’s angular amplitude improved for all subjects, and other kinematic parameters showed a trend towards the normality range. Conclusion: The outcomes are preliminary evidence supporting the efficacy of the proposed BoMI as a rehabilitation tool to be considered for clinical practice. It also suggests an instrumental evaluation protocol and a set of indicators to assess and evaluate motor impairment and recovery in cSCI. Full article
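Since the interface is programmed to promote symmetry between the right and left arms, one simple way to quantify that balance is a normalized left/right index computed from the motion captured by the body sensors. The sketch below is an illustrative assumption only: the paper defines its own symmetry and distality indices (c_sym, b_sym, c_dist, b_dist), whose exact formulas are not reproduced here.

```python
import numpy as np

def symmetry_index(right_signals, left_signals):
    """Illustrative left/right symmetry index in [-1, 1]:
    0 = balanced use, +1 = right side only, -1 = left side only.
    Motion amplitude per sensor is taken as the signal's standard deviation."""
    r = np.mean([np.std(s) for s in right_signals])
    l = np.mean([np.std(s) for s in left_signals])
    return (r - l) / (r + l + 1e-12)

# toy usage: two IMU channels per body side
rng = np.random.default_rng(0)
right = [rng.normal(0, 1.0, 500), rng.normal(0, 0.9, 500)]
left = [rng.normal(0, 0.5, 500), rng.normal(0, 0.4, 500)]
print(f"symmetry index: {symmetry_index(right, left):+.2f}")
```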
(This article belongs to the Special Issue Impact of Sensors in Biomechanics, Health Disease and Rehabilitation)
Show Figures

Figure 1: Experimental setup and protocol. (A) The subject sits in front of a computer wearing four IMUs that communicate wirelessly with the computer; upper-body movements control a virtual cursor. (B) Evaluation sessions before (ET0), at the end of (ET1), and three months after (ET2) the BoMI training; practice consisted of 15 sessions of increasing difficulty grouped into four blocks (block 1: familiarization; blocks 2–4: training).
Figure 2: (A) The arm stabilization task poses from the Van Lieshout Test (VLT) manual evaluated in this study, from the easiest (Pose 1) to the most difficult (Pose 4). (B) Marker placement on the anatomical landmarks of acromion (A), elbow (E), wrist (W), and C7 (C) used in the kinematic analysis.
Figure 3: Performance metrics. Reaching tasks: linearity index (A), movement time (B), and number of peaks in the velocity profile (C); pong hit rate during vertical and horizontal pong (D). Mean values and standard errors across subjects for the first and last sessions of the familiarization phase and of each training block.
Figure 4: Symmetry and distality indices for body contribution to cursor movement (A) and for body mobility (B) at the end of the familiarization phase (white bars) and the end of training (black bars).
Figure 5: Manual muscle test (MMT) scores for the left and right body parts of each of the 5 subjects, before, at the end of, and 3 months after training, divided by body district.
Figure 6: Range of motion (ROM) for each subject before, at the end of, and 3 months after training, for the left and right body parts, divided by upper-body district: scapulae (A), shoulders (B), and elbows (C).
Figure 7: Normalized scores assigned to each body side, averaged across the four poses for each subject, before (T0), at the end of (T1), and three months after (T2) the BoMI treatment.
Figure 8: Kinematic parameters of the stabilization task for an example subject (SCI1) in the four poses: (A) elbow angle (EA), (B) shoulder angle on the frontal plane (SAF), and (C) shoulder angle on the sagittal/transverse plane (SAST), compared with the control subjects (grey bands).
Figure 9: (A) Kinematic parameters of the stabilization task for the cSCI population normalized with respect to the control population (EA, SAF, and SAST averaged across poses). (B) Overall kinematic symmetry parameter K_sym for each cSCI subject at T0, T1, and T2.
19 pages, 464 KiB  
Article
Keystroke Dynamics-Based Authentication Using Unique Keypad
by Maro Choi, Shincheol Lee, Minjae Jo and Ji Sun Shin
Sensors 2021, 21(6), 2242; https://doi.org/10.3390/s21062242 - 23 Mar 2021
Cited by 13 | Viewed by 4634
Abstract
Authentication methods using personal identification number (PIN) and unlock patterns are widely used in smartphone user authentication. However, these authentication methods are vulnerable to shoulder-surfing attacks, and PIN authentication, in particular, is poor in terms of security because PINs are short in length [...] Read more.
Authentication methods using personal identification number (PIN) and unlock patterns are widely used in smartphone user authentication. However, these authentication methods are vulnerable to shoulder-surfing attacks, and PIN authentication, in particular, is poor in terms of security because PINs are short in length with just four to six digits. A wide range of research is currently underway to examine various biometric authentication methods, for example, using the user’s face, fingerprint, or iris information. However, such authentication methods provide PIN-based authentication as a type of backup authentication to prepare for when the maximum set number of authentication failures is exceeded during the authentication process such that the security of biometric authentication equates to the security of PIN-based authentication. In order to overcome this limitation, research has been conducted on keystroke dynamics-based authentication, where users are classified by analyzing their typing patterns while they are entering their PIN. As a result, a wide range of methods for improving the ability to distinguish the normal user from abnormal ones have been proposed, using the typing patterns captured during the user’s PIN input. In this paper, we propose unique keypads that are assigned to and used by only normal users of smartphones to improve the user classification performance capabilities of existing keypads. The proposed keypads are formed by randomly generated numbers based on the Mersenne Twister algorithm. In an attempt to demonstrate the superior classification performance of the proposed unique keypad compared to existing keypads, all tests except for the keypad type were conducted under the same conditions in earlier work, including collection-related features and feature selection methods. Our experimental results show that when the filtering rates are 10%, 20%, 30%, 40%, and 50%, the corresponding equal error rates (EERs) for the proposed keypads are improved by 4.15%, 3.11%, 2.77%, 3.37% and 3.53% on average compared to the classification performance outcomes in earlier work. Full article
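The unique keypad idea lends itself to a compact sketch. Python's built-in random.Random generator is itself a Mersenne Twister, so a user-specific layout can be drawn as a seeded shuffle of the ten digits; note that the seeding scheme and the row arrangement below are illustrative assumptions, not the keypad-construction procedure used in the paper.

```python
import random

def unique_keypad(user_seed):
    """Draw a user-specific digit layout with a Mersenne Twister generator
    (random.Random). Illustrative sketch only."""
    rng = random.Random(user_seed)      # Mersenne Twister instance seeded per user
    digits = list("0123456789")
    rng.shuffle(digits)                 # randomized digit arrangement
    # lay the digits out like a phone keypad: rows of three, short last row
    return [digits[i:i + 3] for i in range(0, len(digits), 3)]

for row in unique_keypad(user_seed=42):
    print(" ".join(row))
```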
(This article belongs to the Special Issue Data Security and Privacy in the IoT)
Show Figures

Figure 1: Time feature structure.
Figure 2: Smartphone reference axes for motion data.
Figure 3: Initial keypad.
Figure 4: Unique keypad.
Figure 5: Equal error rate (EER).
Figure 6: EER after removal of the lower 10% of features.
Figure 7: EER after removal of the lower 20% of features.
Figure 8: EER after removal of the lower 30% of features.
Figure 9: EER after removal of the lower 40% of features.
Figure 10: EER after removal of the lower 50% of features.
18 pages, 8437 KiB  
Article
Energy-Efficient Ultrasonic Water Level Detection System with Dual-Target Monitoring
by Sanggoo Kang, Dafnik Saril Kumar David, Muil Yang, Yin Chao Yu and Suyun Ham
Sensors 2021, 21(6), 2241; https://doi.org/10.3390/s21062241 - 23 Mar 2021
Cited by 5 | Viewed by 4660
Abstract
This study presents a developed ultrasonic water level detection (UWLD) system with an energy-efficient design and dual-target monitoring. The water level monitoring system with a non-contact sensor is one of the suitable methods since it is not directly exposed to water. In addition, [...] Read more.
This study presents a developed ultrasonic water level detection (UWLD) system with an energy-efficient design and dual-target monitoring. The water level monitoring system with a non-contact sensor is one of the suitable methods since it is not directly exposed to water. In addition, a web-based monitoring system using a cloud computing platform is a well-known technique to provide real-time water level monitoring. However, the long-term stable operation of remotely communicating units is an issue for real-time water level monitoring. Therefore, this paper proposes a UWLD unit using a low-power consumption design for renewable energy harvesting (e.g., solar) by controlling the unit with dual microcontrollers (MCUs) to improve the energy efficiency of the system. In addition, dual targeting to the pavement and streamside is uniquely designed to monitor both the urban inundation and stream overflow. The real-time water level monitoring data obtained from the proposed UWLD system is analyzed with water level changing rate (WLCR) and water level index. The quantified WLCR and water level index with various sampling rates present a different sensitivity to heavy rain. Full article
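The water level changing rate (WLCR) analysis mentioned above can be sketched as a simple finite difference over a resampled series, which also makes the effect of the sampling rate visible. The function below is an illustrative reading of the abstract; the exact WLCR and water level index definitions belong to the paper and are not reproduced here.

```python
import numpy as np

def water_level_changing_rate(levels_m, timestamps_s, window_s):
    """Illustrative WLCR: change in water level per unit time, evaluated
    over a chosen resampling window (finite-difference sketch)."""
    t = np.asarray(timestamps_s, dtype=float)
    y = np.asarray(levels_m, dtype=float)
    grid = np.arange(t[0], t[-1], window_s)
    resampled = np.interp(grid, t, y)        # resample at the chosen rate
    return np.diff(resampled) / window_s     # rate in m/s per window

# toy usage: compare a 5-minute and a 1-hour sampling rate on a rain pulse
t = np.arange(0, 6 * 3600, 60.0)             # one reading per minute for 6 h
level = 0.2 + 0.4 * np.exp(-((t - 3 * 3600) / 1800.0) ** 2)
print(np.max(water_level_changing_rate(level, t, 300)))    # 5-min WLCR peak
print(np.max(water_level_changing_rate(level, t, 3600)))   # 1-h WLCR peak
```

The shorter window reports a larger peak rate for the same event, which is the sensitivity difference the abstract refers to.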
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: System representation of the basic UWLD system: real-time user interface platform plotting temperature and water level (left) and UWLD-1 unit representation (right); data from the remote UWLD unit can be monitored in real time on the website.
Figure 2: Flowchart of the basic low-cost UWLD system with a single MCU and an ultrasonic sensor.
Figure 3: Flowchart (programming layout) of the data processing of the UWLD system.
Figure 4: Installation of the UWLD-2 system to monitor the water level on (left) the pavement side and streamside and (right) the streamside.
Figure 5: System representation of UWLD-2 with dual-target sensing; the additional ultrasonic sensor monitors the pavement-side water level, and the moving-median-filtered distance data are also transferred to the cloud server.
Figure 6: Battery consumption in operating mode and power-saving mode: sleep mode via the COP timer (left) and an ideal power-saving mode (right).
Figure 7: System representation of the UWLD-3 system with dual-target sensing and dual MCUs.
Figure 8: Flowchart of the UWLD system with dual-target sensing and dual MCUs (UWLD-3).
Figure 9: Installed units and locations: Node 1 (left), Node 2 (middle), and location map of the two nodes (right).
Figure 10: Ultrasonic sensor calibration test (left) and measured data (right).
Figure 11: Thirty-six-hour battery percentage record of the single-MCU system (UWLD-1, black) and the dual-MCU system (UWLD-3, red) under similar sunlight conditions; regions A, B, and C indicate nighttime consumption, daytime consumption, and battery replacement time (battery replacement applies only to UWLD-1). The dual-MCU system operates longer without a battery change.
Figure 12: Battery consumption by component under operating and power-saving modes in the single-MCU system (UWLD-1); most of the battery power is consumed by MCU operation in both modes.
Figure 13: Power consumption of the single-MCU and dual-MCU systems, showing 30% and 70% improved energy efficiency in operating mode and power-saving mode, respectively.
Figure 14: Map of the water level monitoring locations of NOAA and the UWLD system; Lake Arlington, in north Texas, is the closest NOAA-operated gauge location.
Figure 15: Water level changes measured by Node 1 streamside and Lake Arlington NOAA over 7 days (15–21 March 2020).
Figure 16: Monitored data for three rainfall events from Node 1 and Node 2 streamside compared with the dry-day reference water level (yellow).
Figure 17: Pavement-side water level changes on 6 July at Node 1 (left) and Node 2 (right).
Figure 18: WLCR for 5 days of rainfall (14–18 March): (top) water level change, (middle) WLCR at the 5-min sampling rate, and (bottom) WLCR at the 1-h sampling rate; the 5-min sampling rate is more sensitive at high WLCR values.
Figure 19: Normalized maximum water level, WLCR, and area data for the 16 rainfall events, indicating a similar tendency across events.
Figure 20: Water level and WLI changes: (top) water level change and four WLI curves based on 0.5-, 1-, 2-, and 3-h time windows; (middle) zoomed-in plot of the E1 rainfall event (region A); (bottom) zoomed-in plot of the E4 rainfall event (region B). The WLIs show a more sensitive change for the rapid water level change (region B) than for the moderate one (region A).
16 pages, 10246 KiB  
Article
Autonomous Learning of New Environments with a Robotic Team Employing Hyper-Spectral Remote Sensing, Comprehensive In-Situ Sensing and Machine Learning
by David J. Lary, David Schaefer, John Waczak, Adam Aker, Aaron Barbosa, Lakitha O. H. Wijeratne, Shawhin Talebi, Bharana Fernando, John Sadler, Tatiana Lary and Matthew D. Lary
Sensors 2021, 21(6), 2240; https://doi.org/10.3390/s21062240 - 23 Mar 2021
Cited by 5 | Viewed by 5081
Abstract
This paper describes and demonstrates an autonomous robotic team that can rapidly learn the characteristics of environments that it has never seen before. The flexible paradigm is easily scalable to multi-robot, multi-sensor autonomous teams, and it is relevant to satellite calibration/validation and the [...] Read more.
This paper describes and demonstrates an autonomous robotic team that can rapidly learn the characteristics of environments that it has never seen before. The flexible paradigm is easily scalable to multi-robot, multi-sensor autonomous teams, and it is relevant to satellite calibration/validation and the creation of new remote sensing data products. A case study is described for the rapid characterisation of the aquatic environment: over a period of just a few minutes we acquired thousands of training data points. This training data allowed our machine learning algorithms to rapidly learn by example and provide wide-area maps of the composition of the environment. Alongside these larger autonomous robots, two smaller robots that can be deployed by a single individual were also deployed (a walking robot and a robotic hover-board), observing significant small-scale spatial variability. Full article
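A minimal sketch of the learning-by-example step, pairing each in-situ measurement with the hyper-spectral pixel at the same location and fitting a regressor that can then be applied to every pixel of the cube, is shown below. The random forest model, the synthetic data, and all shapes except the 462 spectral bands are assumptions made for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Stand-in data: per-pixel reflectance spectra (X) co-located with in-situ
# boat observations such as CDOM concentration (y).
rng = np.random.default_rng(1)
n_points, n_bands = 2000, 462
X = rng.random((n_points, n_bands))
y = X[:, 100] * 3.0 + rng.normal(0, 0.05, n_points)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)                      # learn spectrum -> concentration
print("validation R^2:", round(model.score(X_va, y_va), 3))

# The trained model can then be applied to every pixel of a hyper-spectral
# data cube to produce a wide-area composition map.
```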
(This article belongs to the Special Issue Sensors: 20th Anniversary)
Show Figures

Figure 1: Photographs of the robot team during a Fall 2020 deployment in North Texas.
Figure 2: (a) Trichromatic cone cells in the eye respond to one of three wavelength ranges (RGB). (b) Comparison between a hyper-spectral data cube and RGB images.
Figure 3: (a) Chemicals absorb light in a characteristic way that depends on their chemical structure; since a hyper-spectral camera measures an entire spectrum for every pixel, chemicals within the scene can be identified. (b) Example hyper-spectral data cube collected in North Texas on 23 November 2020, including a simulant release (Rhodamine WT); the top layer shows the regular RGB image, and the 462 stacked layers below show the reflectivity (log scale) for each wavelength band between 391 and 1011 nm.
Figure 4: The Cyber Physical Observatory is a collection of sentinels providing real-time data. A Sentinel is a Software Defined Sensor mounted on a Platform, which supplies power, timestamps, communication, and mobility where applicable; a Software Defined Sensor is a smart sensor package combining a physical sensing system with machine learning, providing calibrated data products that can be updated via an app store.
Figure 5: The autonomous robotic team operates in two modes. Mode 1: coordinated robots using onboard machine learning for specific data products. Mode 2: unsupervised classification.
Figure 6: Machine learning performance quantified by scatter diagrams and quantile-quantile plots for CDOM, Na+, and Cl−, using data collected autonomously by the robot team during three exercises in November and December 2020 in North Texas; green curves show training data and red curves the independent validation, with the number of points and correlation coefficients given in the legends.
Figure 7: Example crude oil and colored dissolved organic matter (CDOM) data collected autonomously by the robot team on 23 November 2020 in North Texas; background colors show the machine learning estimates from the hyper-spectral imager and the overlaid colored squares the in-situ boat observations. The isolated part of the pond, with no fresh water influx, has higher CDOM and crude oil levels, with a sharp gradient across the inlet in both the image-based estimates and the boat observations.
Figure 8: Schematics illustrating the traditional approach to creating remote sensing data products (left) and the approach used in this study (right).
Figure 9: Photographs of the smaller walking robot (from Ghost Robotics) and a robotic hover-board (conceived and built by Aaron Barbosa), both carrying the same payload of sensors measuring the size spectrum of airborne particulates (0.3–43 microns) and the abundance of selected gases; the walking robot's laser scanner mapped the vicinity while in-situ measurements found very localized changes in particulate abundance.
25 pages, 525 KiB  
Article
IoT Data Qualification for a Logistic Chain Traceability Smart Contract
by Mohamed Ahmed, Chantal Taconet, Mohamed Ould, Sophie Chabridon and Amel Bouzeghoub
Sensors 2021, 21(6), 2239; https://doi.org/10.3390/s21062239 - 23 Mar 2021
Cited by 15 | Viewed by 3554
Abstract
In the logistic chain domain, the traceability of shipments in their entire delivery process from the shipper to the consignee involves many stakeholders. From the traceability data, contractual decisions may be taken such as incident detection, validation of the delivery or billing. The [...] Read more.
In the logistic chain domain, the traceability of shipments in their entire delivery process from the shipper to the consignee involves many stakeholders. From the traceability data, contractual decisions may be taken such as incident detection, validation of the delivery or billing. The stakeholders require transparency in the whole process. The combination of the Internet of Things (IoT) and the blockchain paradigms helps in the development of automated and trusted systems. In this context, ensuring the quality of the IoT data is an absolute requirement for the adoption of those technologies. In this article, we propose an approach to assess the data quality (DQ) of IoT data sources using a logistic traceability smart contract developed on top of a blockchain. We select the quality dimensions relevant to our context, namely accuracy, completeness, consistency and currentness, with a proposition of their corresponding measurement methods. We also propose a data quality model specific to the logistic chain domain and a distributed traceability architecture. The evaluation of the proposal shows the capacity of the proposed method to assess the IoT data quality and ensure the user agreement on the data qualification rules. The proposed solution opens new opportunities in the development of automated logistic traceability systems. Full article
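Three of the four quality dimensions named in the abstract can be illustrated with very small scoring functions over a batch of sensor readings. These are simplified stand-ins under assumed thresholds, not the measurement methods proposed in the paper (consistency is omitted here).

```python
from datetime import datetime, timedelta

def completeness(readings, expected_count):
    """Fraction of expected measurements actually received."""
    return len(readings) / expected_count

def currentness(readings, now, max_age=timedelta(minutes=10)):
    """Fraction of readings whose timestamp is recent enough."""
    fresh = [r for r in readings if now - r["time"] <= max_age]
    return len(fresh) / len(readings) if readings else 0.0

def accuracy(readings, low, high):
    """Fraction of values falling inside a plausible range."""
    ok = [r for r in readings if low <= r["value"] <= high]
    return len(ok) / len(readings) if readings else 0.0

# toy usage: temperature readings from a shipment tracker
now = datetime(2021, 3, 23, 12, 0)
readings = [{"time": now - timedelta(minutes=5 * i), "value": 4.0 + 0.1 * i}
            for i in range(8)]
print(completeness(readings, expected_count=12),
      currentness(readings, now),
      accuracy(readings, low=2.0, high=8.0))
```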
Show Figures

Figure 1: IoT data quality entity class diagram.
Figure 2: Distributed architecture of the traceability system.
Figure 3: Intel Berkeley sensors arrangement diagram.
14 pages, 4563 KiB  
Communication
The Experimental Registration of the Evanescent Acoustic Wave in YX LiNbO3 Plate
by Andrey Smirnov, Boris Zaitsev, Andrey Teplykh, Ilya Nedospasov, Egor Golovanov, Zheng-hua Qian, Bin Wang and Iren Kuznetsova
Sensors 2021, 21(6), 2238; https://doi.org/10.3390/s21062238 - 23 Mar 2021
Cited by 3 | Viewed by 2416
Abstract
Evanescent acoustic waves are characterized by purely imaginary or complex wavenumbers. Earlier, in 2019, the possibility of exciting and registering such waves in piezoelectric plates was shown theoretically using a three-dimensional (3D) finite element method (FEM). In this [...] Read more.
Evanescent acoustic waves are characterized by purely imaginary or complex wavenumbers. Earlier, in 2019, the possibility of exciting and registering such waves in piezoelectric plates was shown theoretically using a three-dimensional (3D) finite element method (FEM). In this paper, a set of acoustically isolated interdigital transducers (IDTs) with different spatial periods for the excitation and registration of the evanescent acoustic wave in a Y-cut, X-propagation lithium niobate (LiNbO3) plate was specifically calculated and produced. As a result, the possibility of exciting and registering the evanescent acoustic wave in piezoelectric plates was experimentally proved for the first time. The evanescent nature of the registered wave has been established. The theoretical results are in good agreement with the experimental ones. The influence of an infinitely thin layer with arbitrary conductivity placed on the plate surface was also investigated. It has been shown that the frequency region in which an evanescent acoustic wave exists is very sensitive to changes in the electrical boundary conditions. The results obtained may be used to develop a method for analyzing the electrical properties of thin films based on the study of evanescent waves. Full article
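The defining property of an evanescent wave, a complex wavenumber, can be made explicit with the standard plane-wave form; this is a generic textbook relation rather than a result of the paper:

```latex
u(x_1, t) = A\, e^{i(k x_1 - \omega t)}, \qquad k = k' + i k''
\;\Longrightarrow\;
u(x_1, t) = A\, e^{-k'' x_1}\, e^{i(k' x_1 - \omega t)}
```

A nonzero imaginary part k'' therefore makes the amplitude grow or decay exponentially along the propagation direction instead of propagating with constant amplitude, which is why such waves are normally hard to excite and detect with conventional transducers.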
(This article belongs to the Special Issue Development, Investigation and Application of Acoustic Sensors)
Show Figures

Figure 1: Geometry of the problem.
Figure 2: 3D model of the resonator used in the finite element method (FEM) calculation: (a) aluminum electrodes on the surface of the resonator, (b) perfectly matched layers, and (c) meshes used in modeling.
Figure 3: Scheme of the experimental setup: (1) YX LiNbO3 plate; (2) system of IDTs; (3) layer of absorbing varnish; (4) support made of textolite; (5) guide shaft; (6) movable dielectric holders; (7) flat contact legs; (8) impedance analyzer; (9) gold wires.
Figure 4: Dependencies of (a) the real (Re) and imaginary (Im) parts of the complex phase velocities V_ph and (b) the group velocities V_gr on the parameter hf for the forward (black), backward (red), and evanescent (blue) waves in the YX LiNbO3 plate; grey auxiliary lines correspond to λ = 1.46, 1.44, 1.42, and 1.40 mm.
Figure 5: Dispersion curves for the real and imaginary parts of the wave numbers k of the forward (black), backward (red), and evanescent (blue) waves in the YX LiNbO3 plate.
Figure 6: Dependencies of the normalized electric potential Φ and mechanical displacements U1, U2, and U3 of the forward (a), backward (b), and evanescent (c) waves on the structure thickness x3/h, in the regions near (thick lines) and far (thin lines) from the ZGV point.
Figure 7: Dependencies of (a) the real parts of the phase velocities V_ph and (b) the attenuation per wavelength on the sheet conductivity of the infinitely thin layer placed in the plane x3 = 0, for the forward (black), backward (red), and evanescent (blue) waves near (thick lines) and far (thin lines) from the ZGV point.
Figure 8: Dependencies of the real and imaginary parts of the phase velocities V_ph of the forward (black), backward (red), and evanescent (blue) waves on the parameter hf for sheet conductivities σ_s of the infinitely thin layer at x3 = 0 of (a) 10^-7 S/m, (b) 5 × 10^-7 S/m, (c) 10^-6 S/m, and (d) 10^-5 S/m; grey auxiliary lines correspond to λ = 1.46 and 1.4 mm, and A1^e+ and A1^e− denote the evanescent branches with positive and negative imaginary parts of the complex wave number k.
Figure 9: Real (Re, left columns) and imaginary (Im, right columns) parts of the electrical impedance Z of IDTs with spatial periods λ of (a) 1.4 mm, (b) 1.42 mm, (c) 1.44 mm, and (d) 1.46 mm versus the parameter hf, obtained by 3D FEM modeling; A1^e and A1^f mark the peaks of the evanescent and forward waves, respectively.
Figure 10: Experimental dependencies of the real (Re, left columns) and imaginary (Im, right columns) parts of the electrical impedance Z of the IDTs with spatial periods λ of (a) 1.4 mm, (b) 1.42 mm, (c) 1.44 mm, and (d) 1.46 mm on the parameter hf; A1^e and A1^f mark the peaks of the evanescent and forward waves, respectively.
15 pages, 6757 KiB  
Article
Camera-Based Monitoring of Neck Movements for Cervical Rehabilitation Mobile Applications
by Iosune Salinas-Bueno, Maria Francesca Roig-Maimó, Pau Martínez-Bueso, Katia San-Sebastián-Fernández, Javier Varona and Ramon Mas-Sansó
Sensors 2021, 21(6), 2237; https://doi.org/10.3390/s21062237 - 23 Mar 2021
Cited by 7 | Viewed by 4634
Abstract
Vision-based interfaces are used for monitoring human motion. In particular, camera-based head-trackers interpret the movement of the user’s head for interacting with devices. Neck pain is one of the most important musculoskeletal conditions in prevalence and years lived with disability. A common treatment [...] Read more.
Vision-based interfaces are used for monitoring human motion. In particular, camera-based head-trackers interpret the movement of the user’s head for interacting with devices. Neck pain is one of the most important musculoskeletal conditions in terms of prevalence and years lived with disability. A common treatment is therapeutic exercise, which requires high motivation and adherence to treatment. In this work, we conduct an exploratory experiment to validate the use of a non-invasive camera-based head-tracker for monitoring neck movements. We do so by means of an exergame for performing the rehabilitation exercises on a mobile device. The experiments performed to explore its feasibility were: (1) validating the neck range of motion (ROM) that the camera-based head-tracker was able to detect; (2) ensuring the application is safe in terms of the neck ROM solicited by the mobile application. Results not only confirmed safety, in terms of ROM requirements for different preset patient profiles, in accordance with the previously established safety parameters, but also demonstrated the effectiveness of the camera-based head-tracker for monitoring neck movements for rehabilitation purposes. Full article
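One small, concrete piece of the exergame interaction is the dwell-time criterion used for target selection (0 ms versus 200 ms in the study). The sketch below shows the general mechanism under assumed input; it is not the application's actual code.

```python
def dwell_select(samples, dwell_ms):
    """Return the time (ms) at which a target is selected under a dwell-time
    criterion: the cursor must stay continuously inside the target for
    `dwell_ms` milliseconds (0 ms selects on first entry).
    `samples` is a list of (timestamp_ms, inside_target) pairs."""
    entry_time = None
    for t, inside in samples:
        if inside:
            if entry_time is None:
                entry_time = t             # cursor just entered the target
            if t - entry_time >= dwell_ms:
                return t                   # dwell criterion satisfied
        else:
            entry_time = None              # left the target, reset the timer
    return None

# toy usage: cursor enters at 100 ms, leaves at 250 ms, re-enters at 400 ms
trace = [(0, False), (100, True), (200, True), (250, False),
         (400, True), (600, True), (700, True)]
print(dwell_select(trace, 0), dwell_select(trace, 200))
```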
(This article belongs to the Special Issue Mobile Sensors for Healthcare)
Show Figures

Figure 1: Movements of the neck.
Figure 2: Screenshot of the experiment software with annotations on an Apple iPad Air (device in portrait orientation).
Figure 3: Simulated stages of the experiment procedure for the left rotation movement on the apparatus: (a) participant in the initial position with the nose point returned by the head-tracker marked with a blue circle, (b) the participant slowly starting a rotation movement, and (c) the moment the head-tracker loses the position of the nose (an acoustic alert is triggered) and the participant stops the movement.
Figure 4: Stages of the experiment procedure: (a) participant in the initial position, (b) participant after the acoustic alert sounded and the movement stopped, and (c) measurement of the participant's neck ROM using the goniometer.
Figure 5: Dual-arm universal goniometer.
Figure 6: Mean values of the neck range of motion (ROM) tracked by the head-tracker interface compared to normal maximum mobility.
Figure 7: Examples of exercises for every preset profile: (a) dartboards set to induce a simple horizontal movement (rotation) with big targets, (b) dartboards set to induce a simple vertical descending movement (flexion) with small targets, (c) dartboards set diagonally to induce a combined movement with small targets, and (d) dartboards set randomly.
Figure 8: Screenshots of the dart game with two dwell-time criteria: (a) 0-ms dwell-time criterion, where the target is selected immediately when the center of the cursor enters the target, and (b) 200-ms dwell-time criterion.
Figure 9: Placement of the inertial sensors: (a) sensor 1, (b) sensor 2, and (c) sensors 1 and 2.
Figure 10: (a) Subject of the experiment playing the exergame in (b) the experiment room.
18 pages, 1308 KiB  
Article
Unscented Particle Filter Algorithm Based on Divide-and-Conquer Sampling for Target Tracking
by Sichun Du and Qing Deng
Sensors 2021, 21(6), 2236; https://doi.org/10.3390/s21062236 - 23 Mar 2021
Cited by 5 | Viewed by 3266
Abstract
Unscented particle filter (UPF) struggles to completely cover the target state space when handling the maneuvering target tracking problem, and the tracking performance can be affected by the low sample diversity and algorithm redundancy. In order to solve this problem, the method of [...] Read more.
Unscented particle filter (UPF) struggles to completely cover the target state space when handling the maneuvering target tracking problem, and the tracking performance can be affected by low sample diversity and algorithm redundancy. In order to solve this problem, the method of divide-and-conquer sampling is applied to the UPF tracking algorithm. By decomposing the state space, dimension reduction of the target maneuver is achieved. When dealing with a maneuvering target, particles are sampled separately in each subspace, which directly prevents particle degeneracy. Experiments and a comparative analysis were carried out to comprehensively analyze the performance of the divide-and-conquer sampling unscented particle filter (DCS-UPF). The simulation results demonstrate that the proposed algorithm can improve the diversity of particles and obtain higher tracking accuracy in less time than the particle swarm algorithm and the intelligent adaptive filtering algorithm. This algorithm can be used in complex maneuvering conditions. Full article
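A schematic reading of the divide-and-conquer sampling idea is sketched below: the target state is split into subspaces (here, position and velocity), particles are drawn in each subspace separately, and the draws are concatenated into full state particles. The 4-D state layout, the Gaussian proposals, and all numbers are illustrative assumptions, not the DCS-UPF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_subspace(n, mean, cov):
    """Draw particles for one subspace of the state independently."""
    return rng.multivariate_normal(mean, cov, size=n)

def divide_and_conquer_sample(n_particles):
    """Split the 4-D state [x, y, vx, vy] into a position subspace and a
    velocity subspace, sample each on its own, then concatenate."""
    pos = sample_subspace(n_particles, mean=[0.0, 0.0], cov=np.eye(2) * 4.0)
    vel = sample_subspace(n_particles, mean=[1.0, 0.5], cov=np.eye(2) * 0.25)
    return np.hstack([pos, vel])             # shape (n_particles, 4)

particles = divide_and_conquer_sample(500)
weights = np.full(len(particles), 1.0 / len(particles))
print(particles.shape, weights.sum())
```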
(This article belongs to the Special Issue Multi-Sensor Fusion for Object Detection and Tracking)
Show Figures

Figure 1: Flowchart of the spatial divide-and-conquer sampling method.
Figure 2: Flowchart of the divide-and-conquer sampling unscented particle filter.
Figure 3: Target trajectory in the X direction.
Figure 4: Target trajectory in the Y direction.
Figure 5: Tracking errors of the three filters in the X direction.
Figure 6: Tracking errors of the three filters in the Y direction.
Figure 7: Target estimation in the X direction when changing R and Q.
Figure 8: Tracking errors of the three methods in the X direction when changing R and Q.
Figure 9: Target estimation in the Y direction when changing R and Q.
Figure 10: Tracking errors of the three methods in the Y direction when changing R and Q.
Figure 11: Effective particle numbers of the three filters in the X direction.
Figure 12: Effective particle numbers of the three filters in the Y direction.
Figure 13: Target trajectories of UPF, PSO-UPF, IA-UPF, and DCS-UPF in the X direction.
Figure 14: Target trajectories of UPF, PSO-UPF, IA-UPF, and DCS-UPF in the Y direction.
Figure 15: One-step running time of PF, UPF, PSO-UPF, IA-UPF, and DCS-UPF.
8 pages, 206 KiB  
Editorial
Intelligent Transportation Related Complex Systems and Sensors
by Kyandoghere Kyamakya, Jean Chamberlain Chedjou, Fadi Al-Machot, Ahmad Haj Mosa and Antoine Bagula
Sensors 2021, 21(6), 2235; https://doi.org/10.3390/s21062235 - 23 Mar 2021
Cited by 5 | Viewed by 2673
Abstract
Building around innovative services related to different modes of transport and traffic management, intelligent transport systems (ITSs) are being widely adopted worldwide to improve the efficiency and safety of the transportation system [...] Full article
(This article belongs to the Special Issue Intelligent Transportation Related Complex Systems and Sensors)
18 pages, 2185 KiB  
Article
ARETT: Augmented Reality Eye Tracking Toolkit for Head Mounted Displays
by Sebastian Kapp, Michael Barz, Sergey Mukhametov, Daniel Sonntag and Jochen Kuhn
Sensors 2021, 21(6), 2234; https://doi.org/10.3390/s21062234 - 23 Mar 2021
Cited by 70 | Viewed by 9763
Abstract
Currently an increasing number of head mounted displays (HMD) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR [...] Read more.
Currently an increasing number of head mounted displays (HMD) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is interesting for applied research based on eye tracking in cognitive or educational sciences for example. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets with different distances and angles to the user (n=21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers. Full article
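Spatial accuracy and precision of the kind reported above are commonly computed from unit gaze direction vectors: accuracy as the mean angular offset from the fixation target, and precision from sample-to-sample angular distances. The sketch below uses these common definitions on synthetic data; the authors' exact computation may differ.

```python
import numpy as np

def angles_deg(gaze_dirs, target_dir):
    """Angle in degrees between each gaze direction and the target direction."""
    g = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    t = target_dir / np.linalg.norm(target_dir)
    cos = np.clip(g @ t, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_deg(gaze_dirs, target_dir):
    """Angular accuracy: mean angular offset from the fixation target."""
    return float(np.mean(angles_deg(gaze_dirs, target_dir)))

def precision_deg(gaze_dirs):
    """Angular precision: RMS of the angular distance between successive
    gaze samples (a common definition)."""
    g = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    cos = np.clip(np.sum(g[:-1] * g[1:], axis=1), -1.0, 1.0)
    return float(np.sqrt(np.mean(np.degrees(np.arccos(cos)) ** 2)))

# toy usage: noisy gaze directions around a target straight ahead
rng = np.random.default_rng(3)
target = np.array([0.0, 0.0, 1.0])
gaze = target + rng.normal(0, 0.01, size=(300, 3))
print(round(accuracy_deg(gaze, target), 2), round(precision_deg(gaze), 2))
```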
(This article belongs to the Special Issue Wearable Technologies and Applications for Eye Tracking)
Show Figures

Figure 1: Diagram visualizing the components of the toolkit and their interaction.
Figure 2: Screenshot of the control interface accessible over the network.
Figure 3: Mixed reality photos of the HoloLens 2 applications for the three settings presented to the participants: (a) the fixation grid for settings I and II, displayed at a fixed distance from the user and resized so that the angular size is identical for all distances; (b) the sphere in setting III, positioned 15 cm above the table and fixed on top of the visual marker when the participant moves. These screenshots are 2D projections that do not reflect the field of view and depth perception of a participant in augmented reality (AR).
Figure 4: Example of settings I and II in the study, with the participant wearing a Microsoft HoloLens 2 and the supervisor controlling the recording using the toolkit.
Figure 5: Mean accuracy at each distance for each target in setting I (resting); the accuracy angle for all targets is smaller than 1.5 degrees.
Figure 6: Recorded gaze points of one participant relative to the upper left target in setting I (resting); the red dot is the mean gaze position and each cross one recorded gaze point.
Figure 7: Mean accuracy at each distance for each target in setting II (walking).
Figure 8: Recorded gaze points of one participant relative to the upper left target in setting II (walking); the red dot is the mean gaze position and each cross one recorded gaze point.
Figure 9: Recorded gaze points of one participant in setting III (stationary target); the distance angle for all gaze points is smaller than 3 degrees.
25 pages, 6548 KiB  
Article
A UAV Maneuver Decision-Making Algorithm for Autonomous Airdrop Based on Deep Reinforcement Learning
by Ke Li, Kun Zhang, Zhenchong Zhang, Zekun Liu, Shuai Hua and Jianliang He
Sensors 2021, 21(6), 2233; https://doi.org/10.3390/s21062233 - 23 Mar 2021
Cited by 5 | Viewed by 3397
Abstract
How to operate an unmanned aerial vehicle (UAV) safely and efficiently in an interactive environment is challenging. A large amount of research has been devoted to improve the intelligence of a UAV while performing a mission, where finding an optimal maneuver decision-making policy [...] Read more.
How to operate an unmanned aerial vehicle (UAV) safely and efficiently in an interactive environment is challenging. A large amount of research has been devoted to improving the intelligence of a UAV while performing a mission, where finding an optimal maneuver decision-making policy has become one of the key issues in enabling UAV autonomy. In this paper, we propose a maneuver decision-making algorithm based on deep reinforcement learning, which generates efficient maneuvers for a UAV agent to execute the airdrop mission autonomously in an interactive environment. In particular, the training set of the learning algorithm is constructed using Prioritized Experience Replay, which accelerates the convergence of the decision network training. Extensive experimental results show that a desirable and effective maneuver decision-making policy can be found. Full article
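The Prioritized Experience Replay component named in the abstract can be sketched as a small proportional-priority buffer: transitions are stored with a priority derived from their TD error, sampled with probability proportional to that priority, and reweighted with importance-sampling weights. This is a generic illustration of the mechanism, not the authors' implementation, which would typically use a sum-tree and tuned hyperparameters.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (PER) sketch."""

    def __init__(self, capacity=10000, alpha=0.6, beta=0.4, seed=0):
        self.capacity, self.alpha, self.beta = capacity, alpha, beta
        self.data, self.priorities = [], []
        self.rng = np.random.default_rng(seed)

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:       # drop the oldest transition
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        p = np.asarray(self.priorities)
        probs = p / p.sum()                       # sampling proportional to priority
        idx = self.rng.choice(len(self.data), size=batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-self.beta)
        weights /= weights.max()                  # normalized importance-sampling weights
        return [self.data[i] for i in idx], idx, weights

# toy usage: store transitions with random TD errors and draw one batch
buf = PrioritizedReplayBuffer()
rng = np.random.default_rng(1)
for _ in range(100):
    buf.add(("s", "a", 0.0, "s_next"), td_error=rng.uniform(0, 1))
batch, idx, weights = buf.sample(8)
print(len(batch), weights.round(2))
```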
(This article belongs to the Special Issue Unmanned Aerial Vehicle Control, Networks, System and Application)
Show Figures

Figure 1

Figure 1
<p>The structure of unmanned aerial vehicle (UAV) Maneuver Decision-Making Algorithm.</p>
Full article ">Figure 2
<p>The structure of finite Markov Decision Processes (MDPs).</p>
Full article ">Figure 3
<p>Problems Description among UAV Airdrop Task: (<b>a</b>) the UAV should turn its nose towards target area firstly; (<b>b</b>) after adjustment of azimuth, the UAV could follow the guidance of UAV maneuver decision-making policy until reaching target position.</p>
Full article ">Figure 4
<p>The diagram of Prioritized Experience Replay-Deep Deterministic Policy Gradient (PER-DDPG)’s framework.</p>
Full article ">Figure 5
<p>The structure of actor network.</p>
Full article ">Figure 6
<p>The structure of critic network.</p>
Full article ">Figure 7
<p>The comparison of critic networks’ training loss involved in Uniform Experience Replay (UER)-DDPG and PER-DDPG over training episode. (<b>a</b>) The training loss of critic network of UER-DDPG. (<b>b</b>) The training loss of critic network of PER-DDPG.</p>
Full article ">Figure 8
<p>The winning rate of algorithms based on UER-DDPG and PER-DDPG over simulation episode. Winning rate means a rate of finishing mission successfully. (<b>a</b>) The comparison of winning rate based on UER-DDPG and PER-DDPG. (<b>b</b>) The top figure is the winning rate of algorithm based on UER-DDPG, and the figure at the bottom is the winning rate of algorithm based on PER-DDPG.</p>
Full article ">Figure 9
<p>The episode rewards of algorithms based on UER-DDPG and PER-DDPG over simulation episode. (<b>a</b>) The comparison of episode rewards based on UER-DDPG and PER-DDPG. (<b>b</b>) The top figure is the episode rewards of algorithm based on UER-DDPG, and the bottom figure is the episode rewards of algorithm based on PER-DDPG.</p>
Full article ">Figure 10
<p>The flight trajectory from Monte-Carlo (MC) experiments for the trained result of UER-DDPG: (<b>a</b>) the flight trajectory of 1st experiment; (<b>b</b>) the flight trajectory of 2nd experiment; (<b>c</b>) the flight trajectory of 3rd experiment; (<b>d</b>) the flight trajectory of 4th experiment.</p>
Full article ">Figure 11
<p>The parameters curve from MC experiments for the trained result of UER-DDPG. (<b>a</b>) The parameters curve of 1st experiment; (<b>b</b>) the parameters curve of 2nd experiment; (<b>c</b>) the parameters curve of 3rd experiment; (<b>d</b>) the parameters curve of 4th experiment.</p>
Full article ">Figure 12
<p>The flight trajectory from MC experiments for the trained result of PER-DDPG: (<b>a</b>) the flight trajectory of 1st experiment; (<b>b</b>) the flight trajectory of 2nd experiment; (<b>c</b>) the flight trajectory of 3rd experiment; (<b>d</b>) the flight trajectory of 4th experiment.</p>
Full article ">Figure 12 Cont.
<p>The flight trajectory from MC experiments for the trained result of PER-DDPG: (<b>a</b>) the flight trajectory of 1st experiment; (<b>b</b>) the flight trajectory of 2nd experiment; (<b>c</b>) the flight trajectory of 3rd experiment; (<b>d</b>) the flight trajectory of 4th experiment.</p>
Full article ">Figure 13
<p>The parameters curve from MC experiments for the trained result of PER-DDPG: (<b>a</b>) The parameters curve of 1st experiment. (<b>b</b>) The parameters curve of 2nd experiment. (<b>c</b>) The parameters curve of 3rd experiment. (<b>d</b>) The parameters curve of 4th experiment.</p>
Full article ">Figure 14
<p>The comparison of critic networks’ training loss involved in UER-DDPG and PER-DDPG over training episode. (<b>a</b>) The training loss of critic network of UER-DDPG. (<b>b</b>) The training loss of critic network of PER-DDPG.</p>
Full article ">Figure 15
<p>The winning rate of algorithms based on UER-DDPG and PER-DDPG over simulation episode. Winning rate means a rate of finishing mission successfully. (<b>a</b>) The comparison of winning rate based on UER-DDPG and PER-DDPG. (<b>b</b>) The figure in 1st row is the winning rate of algorithm based on UER-DDPG, and the figure in 2nd row is the winning rate of algorithm based on PER-DDPG.</p>
Full article ">Figure 16
<p>The episode rewards of algorithms based on UER-DDPG and PER-DDPG over simulation episode. The legends of figure are the same to <a href="#sensors-21-02233-f009" class="html-fig">Figure 9</a>. (<b>a</b>) The comparison of episode rewards based on UER-DDPG and PER-DDPG. (<b>b</b>) The figure in 1st row is the episode rewards of algorithm based on UER-DDPG, and the figure in 2nd row is the episode rewards of algorithm based on PER-DDPG.</p>
Full article ">Figure 17
<p>The flight trajectory from MC experiments for the trained result of UER-DDPG: (<b>a</b>) The flight trajectory of 1st experiment. (<b>b</b>) The flight trajectory of 2nd experiment. (<b>c</b>) The flight trajectory of 3rd experiment. (<b>d</b>) The flight trajectory of 4th experiment.</p>
Full article ">Figure 18
<p>The parameters curve from MC experiments for the trained result of UER-DDPG. (<b>a</b>) The parameters curve of 1st experiment. (<b>b</b>) The parameters curve of 2nd experiment. (<b>c</b>) The parameters curve of 3rd experiment. (<b>d</b>) The parameters curve of 4th experiment.</p>
Figure 18 Cont.">
Full article ">Figure 19
<p>The flight trajectory from MC experiments for the trained result of PER-DDPG: (<b>a</b>) The flight trajectory of 1st experiment. (<b>b</b>) The flight trajectory of 2nd experiment. (<b>c</b>) The flight trajectory of 3rd experiment. (<b>d</b>) The flight trajectory of 4th experiment.</p>
Full article ">Figure 20
<p>The parameters curve from MC experiments for the trained result of PER-DDPG: (<b>a</b>) The parameters curve of 1st experiment. (<b>b</b>) The parameters curve of 2nd experiment. (<b>c</b>) The parameters curve of 3rd experiment. (<b>d</b>) The parameters curve of 4th experiment.</p>
Full article ">
24 pages, 14217 KiB  
Article
Fast 3D Rotation Estimation of Fruits Using Spheroid Models
by Antonio Albiol, Alberto Albiol and Carlos Sánchez de Merás
Sensors 2021, 21(6), 2232; https://doi.org/10.3390/s21062232 - 23 Mar 2021
Cited by 4 | Viewed by 3748
Abstract
Automated fruit inspection using cameras involves the analysis of a collection of views of the same fruit obtained by rotating a fruit while it is transported. Conventionally, each view is analyzed independently. However, in order to get a global score of the fruit [...] Read more.
Automated fruit inspection using cameras involves the analysis of a collection of views of the same fruit obtained by rotating a fruit while it is transported. Conventionally, each view is analyzed independently. However, in order to get a global score of the fruit quality, it is necessary to match the defects between adjacent views to avoid counting them more than once and to verify that the whole surface has been examined. To accomplish this goal, this paper estimates the 3D rotation undergone by the fruit using a single camera. A 3D model of the fruit geometry is needed to estimate the rotation. This paper proposes to model the fruit shape as a 3D spheroid. The spheroid size and pose in each view are estimated from the silhouettes of all views. Once the geometric model has been fitted, a single 3D rotation for each view transition is estimated. Once all rotations have been estimated, it is possible to use them to propagate defects to neighboring views or even to build a topographic map of the whole fruit surface, thus opening the possibility of analyzing a single image (the map) instead of a collection of individual views. A large effort was made to make this method as fast as possible. Execution times are under 0.5 ms to estimate each 3D rotation on a standard i7 CPU using a single core. Full article
(This article belongs to the Section Sensing and Imaging)
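The silhouette-based pose fitting summarized in this abstract (see also the captions of Figures 7, 8 and 10 below) rests on a standard relation: for a uniformly filled ellipse, the variance of the pixel coordinates along a principal direction equals the squared semi-axis divided by four, so the semi-axes follow from the eigenvalues of the silhouette covariance matrix. The following is a minimal illustrative sketch of that relation in Python, not the authors' implementation; the function name `silhouette_axes` and the synthetic mask are assumptions for demonstration only.

```python
import numpy as np

def silhouette_axes(mask: np.ndarray):
    """Estimate the projected ellipse (center, semi-axes, axis directions)
    of a fruit silhouette given as a boolean mask.

    For a uniformly filled ellipse, the variance along a principal direction
    is (semi_axis ** 2) / 4, hence semi_axis = 2 * sqrt(eigenvalue).
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)            # 2x2 covariance of pixel coordinates
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    semi_axes = 2.0 * np.sqrt(eigvals[::-1])  # major semi-axis first
    directions = eigvecs[:, ::-1].T           # rows: major, minor axis direction
    return center, semi_axes, directions

# Usage on a synthetic elliptical mask with semi-axes 60 and 30 pixels
yy, xx = np.mgrid[0:200, 0:200]
mask = ((xx - 100) / 60.0) ** 2 + ((yy - 100) / 30.0) ** 2 <= 1.0
center, semi_axes, _ = silhouette_axes(mask)
print(center, semi_axes)   # semi-axes close to (60, 30)
```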
Show Figures

Figure 1

Figure 1
<p>Roller conveyor unit used to obtain different views of the rotated fruits.</p>
Full article ">Figure 2
<p>Four consecutive camera frames.</p>
Full article ">Figure 3
<p>Set of views of the fruit highlighted in <a href="#sensors-21-02232-f002" class="html-fig">Figure 2</a>.</p>
Full article ">Figure 4
<p>Example of green tomato with no texture. No 3D motion can be estimated in this case.</p>
Full article ">Figure 5
<p><b>Left</b>: oblate spheroid model; <b>Center</b>: sphere model; <b>Right</b>: prolate spheroid model.</p>
Full article ">Figure 6
<p>Samples of different fruit shapes. <b>Left</b>: oblate; <b>Center</b>: spherical; <b>Right</b>: prolate.</p>
Full article ">Figure 7
<p>Relation of variances and semi-principal axes for an axis-aligned ellipse (circle).</p>
Full article ">Figure 8
<p>Relation between the variances and principal axes in the case of a rotated ellipse. <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>2</mn> </msub> </semantics></math> are the eigenvalues of the covariance matrix.</p>
Full article ">Figure 9
<p>Views of an oblate fruit. The major principal axes are very similar in all views (<math display="inline"><semantics> <mrow> <mn>2</mn> <mi>a</mi> <mo>≈</mo> <mn>2</mn> <mi>A</mi> </mrow> </semantics></math>). The range of the minor principal axis in each view is <math display="inline"><semantics> <mrow> <mn>2</mn> <mi>B</mi> <mo>&lt;</mo> <mn>2</mn> <msub> <mi>b</mi> <mi>i</mi> </msub> <mo>&lt;</mo> <mn>2</mn> <mi>A</mi> </mrow> </semantics></math>. The minor principal axis of the spheroid is visible in the fourth view starting from the left (<math display="inline"><semantics> <mrow> <msub> <mi>b</mi> <mn>4</mn> </msub> <mo>≈</mo> <mi>B</mi> </mrow> </semantics></math>).</p>
Full article ">Figure 10
<p>Sample camera view of an oblate object. The red axis is oriented as the eigenvector corresponding to the largest eigenvalue of <math display="inline"><semantics> <mo>Σ</mo> </semantics></math>. Its length is the same as the major spheroid semi-axis, <math display="inline"><semantics> <mrow> <mn>2</mn> <mi>A</mi> </mrow> </semantics></math>. The yellow circle is located at the center of mass of the fruit/view.</p>
Full article ">Figure 11
<p>Cross section of fruit across 3D plane <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> in <a href="#sensors-21-02232-f008" class="html-fig">Figure 8</a>. Camera position above the fruit is shown.</p>
Full article ">Figure 12
<p>Ambiguity in the estimation of the elevation angle. The perceived shape from the camera is the same in both possibilities.</p>
Full article ">Figure 13
<p>Sequence of views. Blue and red arrows indicate local minima and maxima respectively, of the sequence <math display="inline"><semantics> <mrow> <mi mathvariant="script">B</mi> <mo>=</mo> <mo>{</mo> <msub> <mi>b</mi> <mi>i</mi> </msub> <mo>}</mo> </mrow> </semantics></math> of the semi-minor axis. If the sequence <math display="inline"><semantics> <mi mathvariant="script">B</mi> </semantics></math> is increasing at instant <span class="html-italic">i</span>, then <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math>. This means (for this direction of rotation) that the part below the fruit center in the view is higher than the part above the center.</p>
Full article ">Figure 14
<p>Search grid of rotation vectors.</p>
Full article ">Figure 15
<p>Error map for all rotations in the grid search.</p>
Full article ">Figure 16
<p>This figure illustrates the local increase in resolution of the error map around the local minimum. The initial local minimum obtained with <math display="inline"><semantics> <mi>γ</mi> </semantics></math> step is shown as a yellow filled circle.</p>
Full article ">Figure 17
<p>Result of image pre-processing.</p>
Full article ">Figure 18
<p>Area from which relevant points are obtained.</p>
Full article ">Figure 19
<p><b>Top</b>: Sample Images from Fruits-360 data set; from left to right kiwi, peach, apple golden, coconut, and watermelon; <b>Bottom</b>: Sample Images from FruitRot3D dataset; from left to right, orange, tomato, and mandarin.</p>
Full article ">Figure 20
<p>Sequence of estimated rotations for the coconut sequence with <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>n</mi> <mo>=</mo> <mn>20</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 21
<p>Mean Rotations as a function of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>n</mi> </mrow> </semantics></math> for different fruit types.</p>
Full article ">Figure 22
<p>Mean rotation speed as a function of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>n</mi> </mrow> </semantics></math> for different fruit types.</p>
Full article ">Figure 23
<p>Standard deviation of rotations as a function of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>n</mi> </mrow> </semantics></math> for different fruit types.</p>
Full article ">Figure 24
<p>This figure illustrates the idea of the interface to annotate ground-truth for estimating reprojection error. The user is requested to select corresponding points in both views.</p>
Full article ">Figure 25
<p>Example of point tracking in the case of two different tomatoes. The initial tracked point is white. Green circles mean predicted visible positions. The geometry model is set to oblate in this case. The sequence has been truncated to the views where the tracked point remains visible.</p>
Full article ">Figure 26
<p>Example of point tracking in the case of two different oranges. The geometry model in this case is sphere. The sequence has been truncated to the views where the tracked point remains visible.</p>
Full article ">Figure 27
<p>Example of point tracking in the case of three different mandarins. Dark circles mean predicted occluded positions of the initial point. The third row is the same fruit as the second, but a different point is tracked. Oblate geometry has been used.</p>
Full article ">Figure 28
<p>Examples of point tracking of fruits in the Fruit-360 dataset.</p>
Full article ">
23 pages, 4666 KiB  
Article
A Smart and Secure Logistics System Based on IoT and Cloud Technologies
by Ilaria Sergi, Teodoro Montanaro, Fabrizio Luca Benvenuto and Luigi Patrono
Sensors 2021, 21(6), 2231; https://doi.org/10.3390/s21062231 - 23 Mar 2021
Cited by 33 | Viewed by 6865
Abstract
Recently, one of the hottest topics in the logistics sector has been the traceability of goods and the monitoring of their condition during transportation. Perishable goods, such as fresh goods, have specifically attracted the attention of researchers, who have already proposed different solutions [...] Read more.
Recently, one of the hottest topics in the logistics sector has been the traceability of goods and the monitoring of their condition during transportation. Perishable goods, such as fresh goods, have specifically attracted the attention of researchers, who have already proposed different solutions to guarantee the quality and freshness of food through the whole cold chain. In this regard, the use of Internet of Things (IoT)-enabling technologies, and in particular the branch called edge computing, is bringing different enhancements, thereby achieving easy remote and real-time monitoring of transported goods. Owing to fast-changing requirements and the difficulties that researchers encounter in proposing new solutions, a fast-prototyping approach could rapidly enhance both the research and the commercial sectors. To make fast prototyping of solutions easier, different platforms and tools have been proposed in recent years; however, it is difficult to guarantee end-to-end security at all levels through such platforms. For this reason, based on the experiments reported in the literature, and aiming at supporting fast prototyping with end-to-end security in the logistics sector, the current work presents a solution that demonstrates how the advantages offered by the Azure Sphere platform, a dedicated hardware device (i.e., the MT3620 microcontroller unit) and the Azure Sphere Security Service can be used to realize a fast prototype to trace fresh food conditions throughout its transportation. The proposed solution guarantees end-to-end security and can be exploited by future similar works, also in other sectors. Full article
(This article belongs to the Section Internet of Things)
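One detail of the prototype that the figure captions below make explicit (Figures 11 and 12) is the store-and-forward behaviour: readings are buffered while the Wi-Fi connection is down and flushed to the cloud once it returns. The snippet below is a minimal, platform-agnostic sketch of that idea in Python, not the authors' Azure Sphere firmware; `read_sensors` and `send_to_cloud` are hypothetical placeholders for the device's sensor drivers and IoT Hub transport.

```python
import json
import time
from collections import deque

buffer = deque(maxlen=1000)   # bounded store-and-forward buffer

def read_sensors() -> dict:
    # Placeholder for the temperature/humidity and light sensor readings
    return {"timestamp": time.time(), "temperature_c": 4.2,
            "humidity_pct": 78.0, "light_lux": 120.0}

def send_to_cloud(payload: str) -> bool:
    # Placeholder: return True once the message is acknowledged by the hub
    return False

def telemetry_step(wifi_up: bool) -> None:
    buffer.append(read_sensors())
    if not wifi_up:
        return                    # keep accumulating while offline
    while buffer:                 # flush oldest readings first
        if not send_to_cloud(json.dumps(buffer[0])):
            break                 # connection dropped again; retry on next step
        buffer.popleft()
```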
Show Figures

Figure 1

Figure 1
<p>Typical edge computing architecture.</p>
Full article ">Figure 2
<p>Azure Sphere MT3620 development kit by Seeed Studio (<b>a</b>), Azure Sphere MT3620 starter kit by Avnet (<b>b</b>).</p>
Full article ">Figure 3
<p>Grove Temperature &amp; Humidity Sensor (SHT31) (<b>a</b>), Grove Light Sensor v1.2 (<b>b</b>).</p>
Full article ">Figure 4
<p>MT3620 Grove Shield by Seeed Studio.</p>
Full article ">Figure 5
<p>Block diagram of the proposed system architecture.</p>
Full article ">Figure 6
<p>Connection of the two configured Azure Sphere boards.</p>
Full article ">Figure 7
<p>Device output in Visual Studio.</p>
Full article ">Figure 8
<p>Device twin.</p>
Full article ">Figure 9
<p>Azure Stream Analytics Job metrics.</p>
Full article ">Figure 10
<p>Query result on the database.</p>
Full article ">Figure 11
<p>Element added to buffer when no Wi-Fi connection is available.</p>
Full article ">Figure 12
<p>Console showing the buffer emptying when the Wi-Fi connection becomes available again.</p>
Full article ">
13 pages, 4273 KiB  
Communication
A Miniature Bio-Photonics Companion Diagnostics Platform for Reliable Cancer Treatment Monitoring in Blood Fluids
by Marianneza Chatzipetrou, Lefteris Gounaridis, George Tsekenis, Maria Dimadi, Rachel Vestering-Stenger, Erik F. Schreuder, Anke Trilling, Geert Besselink, Luc Scheres, Adriaan van der Meer, Ernst Lindhout, Rene G. Heideman, Henk Leeuwis, Siegfried Graf, Tormod Volden, Michael Ningler, Christos Kouloumentas, Claudia Strehle, Vincent Revol, Apostolos Klinakis, Hercules Avramopoulos and Ioanna Zergioti
Sensors 2021, 21(6), 2230; https://doi.org/10.3390/s21062230 - 23 Mar 2021
Cited by 10 | Viewed by 5936
Abstract
In this paper, we present the development of a photonic biosensor device for cancer treatment monitoring as a complementary diagnostics tool. The proposed device combines multidisciplinary concepts from the photonic, nano-biochemical, micro-fluidic and reader/packaging platforms aiming to overcome limitations related to detection reliability, [...] Read more.
In this paper, we present the development of a photonic biosensor device for cancer treatment monitoring as a complementary diagnostics tool. The proposed device combines multidisciplinary concepts from the photonic, nano-biochemical, micro-fluidic and reader/packaging platforms, aiming to overcome limitations related to detection reliability, sensitivity, specificity, compactness and cost issues. The photonic sensor is based on an array of six asymmetric Mach-Zehnder interferometer (aMZI) waveguides on silicon nitride substrates, and the sensing is performed by measuring the phase shift of the output signal caused by the binding of the analyte on the functionalized aMZI surface. According to the morphological design of the waveguides, an improved sensitivity is achieved in comparison to the current technologies (<5000 nm/RIU). This platform is combined with a novel biofunctionalization methodology that involves material-selective surface chemistries and the high-resolution laser printing of biomaterials, resulting in the development of an integrated photonics biosensor device that employs disposable microfluidics cartridges. The device is tested with cancer patient blood serum samples. The detection of periostin (POSTN) and transforming growth factor beta-induced protein (TGFBI), two circulating biomarkers overexpressed by cancer stem cells, is achieved in cancer patient serum with the use of the device. Full article
(This article belongs to the Special Issue Nanosensors for Biomedical Applications)
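As background for the sensing principle described above, the generic (textbook) relations linking an analyte-induced effective-index change on the exposed aMZI arm to the measured phase shift and to a bulk sensitivity quoted in nm/RIU are given below; these are not formulas taken from the paper, and L_s (sensing-arm length) and Δn_eff (effective-index change) are generic symbols.

```latex
\[
\Delta\varphi \;=\; \frac{2\pi L_s}{\lambda}\,\Delta n_{\mathrm{eff}},
\qquad
S \;\equiv\; \frac{\Delta\lambda}{\Delta n_{\mathrm{eff}}} \quad [\mathrm{nm/RIU}].
\]
```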
Show Figures

Figure 1

Figure 1
<p>(<b>A</b>) Layout of a typical asymmetric Mach-Zehnder Interferometer (aMZI); (<b>B</b>) waveguiding structure of the aMZI in this sensor based on the TriPleX platform.</p>
Full article ">Figure 2
<p>aMZI chip with hybrid integration of vertical-cavity surface-emitting laser (VCSEL) and Photodiodes.</p>
Full article ">Figure 3
<p>Optical microscopy image of the Laser Induced Forward Transfer (LIFT) printed biomaterials on the sensing aMZI of the photonic chip.</p>
Full article ">Figure 4
<p>(<b>A</b>) Picture of the microfluidic cartridge used during the validation process. Syringe 1 (air) is used to drive the patient sample through the blood filter. Syringe 2 (<b>B1</b>) contains priming/washing buffer. Syringe 3 (<b>B2</b>) contains a second buffer specific to the application. Light sensors are used to detect the arrival of liquid fronts. Two waste chambers collect the liquids used in the experiment, and only air escapes through the vents. An alternating valve actuation on the vents steers the flow through the sensor area or a bypass channel, respectively. (<b>B</b>) Picture of the final injection molded cartridge revision also shown inserted in the instrument in <a href="#sensors-21-02230-f005" class="html-fig">Figure 5</a>.</p>
Full article ">Figure 5
<p>BIOCDx photonic biosensor device with inserted cartridge, (<b>A</b>) 3D model, and (<b>B</b>) final instrument.</p>
Full article ">Figure 6
<p>Sensorgram of the multiplex binding of recombinant TGFBI (<b>A</b>) or recombinant POSTN (<b>B</b>) during incubation with spiked buffer sample employing differently modified (348506 or Stiny-1 and mouse IgG as negative control) aMZI sensors. The differential wavelength shift refers to the signal on the antibody spotted aMZI minus a specific signal on the mouse IgG modified aMZI. Those results are recorded with the optical measurement experimental setup.</p>
Full article ">Figure 7
<p>Detection of recombinant POSTN and TGFBI in 10% patient serum sample. These results are recorded with the optical measurement experimental setup.</p>
Full article ">Figure 8
<p>Basic grating coupler working principle.</p>
Full article ">Figure 9
<p>(<b>A</b>) “two-port” and (<b>B</b>) “single port” grating coupler for in and out coupling of the light, respectively.</p>
Full article ">Figure 10
<p>Refractive index shift (in radians) during the alternating flow sequence of PBS and PBS with reduced NaCl concentration (−40 mM). The results were recorded with the integrated photonic biosensor device for cancer treatment monitoring.</p>
Full article ">
22 pages, 1003 KiB  
Article
Behaviour Classification on Giraffes (Giraffa camelopardalis) Using Machine Learning Algorithms on Triaxial Acceleration Data of Two Commonly Used GPS Devices and Its Possible Application for Their Management and Conservation
by Stefanie Brandes, Florian Sicks and Anne Berger
Sensors 2021, 21(6), 2229; https://doi.org/10.3390/s21062229 - 23 Mar 2021
Cited by 11 | Viewed by 8875
Abstract
Averting today’s loss of biodiversity and ecosystem services can be achieved through conservation efforts, especially of keystone species. Giraffes (Giraffa camelopardalis) play an important role in sustaining Africa’s ecosystems, but have been listed as ‘vulnerable’ on the IUCN Red List since 2016. Monitoring [...] Read more.
Averting today’s loss of biodiversity and ecosystem services can be achieved through conservation efforts, especially of keystone species. Giraffes (Giraffa camelopardalis) play an important role in sustaining Africa’s ecosystems, but have been listed as ‘vulnerable’ on the IUCN Red List since 2016. Monitoring an animal’s behavior in the wild helps to develop and assess its conservation management. One mechanism for remote tracking of wildlife behavior is to attach accelerometers to animals to record their body movement. We tested two different commercially available high-resolution accelerometers, e-obs and Africa Wildlife Tracking (AWT), attached to the top of the heads of three captive giraffes, and analyzed the accuracy of automatic behavior classification, focusing on the Random Forests algorithm. For both accelerometers, behaviors with less variety in head and neck movements could be predicted better (i.e., feeding above eye level, mean prediction accuracy e-obs/AWT: 97.6%/99.7%; drinking: 96.7%/97.0%) than those with a higher variety of body postures (such as standing: 90.7–91.0%/75.2–76.7%; rumination: 89.6–91.6%/53.5–86.5%). Nonetheless, both devices come with limitations, and the AWT in particular needs technological adaptations before it can be applied to animals in the wild. Nevertheless, judging by the prediction results, both are promising accelerometers for the behavioral classification of giraffes. Therefore, these devices, when applied to free-ranging animals in combination with GPS tracking, can contribute greatly to the conservation of giraffes. Full article
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
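The classification task described in the abstract (behaviors predicted from windowed triaxial acceleration with Random Forests) can be illustrated with a minimal scikit-learn pipeline such as the one below. The window length, the summary features and the synthetic labels are assumptions for demonstration; they are not the authors' exact feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(acc_xyz: np.ndarray) -> np.ndarray:
    """Summary features for one window of triaxial acceleration (n_samples, 3)."""
    mean = acc_xyz.mean(axis=0)
    std = acc_xyz.std(axis=0)
    odba = np.abs(acc_xyz - mean).sum(axis=1).mean()  # overall dynamic body acceleration
    return np.concatenate([mean, std, [odba]])

# Synthetic placeholder data: 200 labelled windows of 100 samples each
rng = np.random.default_rng(0)
windows = [rng.normal(size=(100, 3)) for _ in range(200)]
labels = rng.choice(["feeding", "drinking", "standing", "ruminating"], size=200)

X = np.stack([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())   # mean prediction accuracy
```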
Show Figures

Figure 1

Figure 1
<p>Attachment of the accelerometers, shown on the female giraffes, with the axes’ orientation for the (<b>a</b>) e-obs and (<b>b</b>) AWT accelerometer; red: sway (lateral movement), green: surge (anterior-posterior movement), blue: heave (dorso-ventral movement).</p>
Full article ">Figure A1
<p>Levels for the giraffes’ head and neck positions; o (above eye level): neck normal or up to 90°, head above 0° (lower jaw relative to the ground parallel); a (at eye level): head and neck at normal position (neck ~45°) or neck up to 90°, head bowed; Mi (middle level): neck 0° to ~44°; Ti (deep level): neck below 0°, head/snout above the ground; Bo (ground level): neck below 0°, head/snout on ground, front legs splayed out laterally; all angle values, if not indicated otherwise, are relative to the ground parallel; picture adapted from: <a href="https://de.dreamstime.com/lizenzfreies-stockfoto-giraffe-image36015115" target="_blank">https://de.dreamstime.com/lizenzfreies-stockfoto-giraffe-image36015115</a>, accessed in 2017.</p>
Full article ">
22 pages, 6062 KiB  
Article
Wind Turbine Main Bearing Fault Prognosis Based Solely on SCADA Data
by Ángel Encalada-Dávila, Bryan Puruncajas, Christian Tutivén and Yolanda Vidal
Sensors 2021, 21(6), 2228; https://doi.org/10.3390/s21062228 - 23 Mar 2021
Cited by 53 | Viewed by 9849
Abstract
As stated by the European Academy of Wind Energy (EAWE), the wind industry has identified main bearing failures as a critical issue in terms of increasing wind turbine reliability and availability. This is owing to major repairs with high replacement costs and long [...] Read more.
As stated by the European Academy of Wind Energy (EAWE), the wind industry has identified main bearing failures as a critical issue in terms of increasing wind turbine reliability and availability. This is owing to major repairs with high replacement costs and long downtime periods associated with main bearing failures. Thus, the main bearing fault prognosis has become an economically relevant topic and is a technical challenge. In this work, a data-based methodology for fault prognosis is presented. The main contributions of this work are as follows: (i) Prognosis is achieved by using only supervisory control and data acquisition (SCADA) data, which is already available in all industrial-sized wind turbines; thus, no extra sensors that are designed for a specific purpose need to be installed. (ii) The proposed method only requires healthy data to be collected; thus, it can be applied to any wind farm even when no faulty data has been recorded. (iii) The proposed algorithm works under different and varying operating and environmental conditions. (iv) The validity and performance of the established methodology are demonstrated on a real in-production wind farm consisting of 12 wind turbines. The obtained results show that advanced prognostic systems based solely on SCADA data can predict failures several months prior to their occurrence and allow wind turbine operators to plan their operations. Full article
(This article belongs to the Special Issue Sensors for Wind Turbine Fault Diagnosis and Prognosis)
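The prognosis scheme summarized above trains a normality model on healthy SCADA data and raises an alarm when the residual between the measured and predicted main-bearing temperature grows. The sketch below illustrates that idea with scikit-learn, reusing the 14 inputs and the 72-neuron hidden layer mentioned in the figure captions further down; the synthetic data and the mean-plus-three-sigma threshold rule are assumptions for illustration, not the exact model of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 10-min averaged healthy SCADA features (14 inputs)
# and the main-bearing temperature target.
rng = np.random.default_rng(1)
X_healthy = rng.normal(size=(5000, 14))
y_healthy = X_healthy @ rng.normal(size=14) + rng.normal(scale=0.1, size=5000)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(72,), max_iter=500,
                                   random_state=0))
model.fit(X_healthy, y_healthy)

residuals = np.abs(y_healthy - model.predict(X_healthy))
threshold = residuals.mean() + 3 * residuals.std()   # assumed alarm rule

def bearing_indicator(X_new: np.ndarray, y_measured: np.ndarray) -> np.ndarray:
    """True where the temperature residual exceeds the healthy-data threshold."""
    return np.abs(y_measured - model.predict(X_new)) > threshold
```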
Show Figures

Figure 1

Figure 1
<p>Main components of the wind turbine (WT) [<a href="#B24-sensors-21-02228" class="html-bibr">24</a>].</p>
Full article ">Figure 2
<p>Spherical roller main bearing used in WTs. Courtesy of SKF.</p>
Full article ">Figure 3
<p>Fatigue failure. Subsurface-initiated (<b>left</b>) and surface-initiated (<b>right</b>) [<a href="#B27-sensors-21-02228" class="html-bibr">27</a>]. Courtesy of SKF.</p>
Full article ">Figure 4
<p>Wear failure. Abrasive wear (<b>left</b>) and adhesive wear (<b>right</b>) [<a href="#B27-sensors-21-02228" class="html-bibr">27</a>]. Courtesy of SKF.</p>
Full article ">Figure 5
<p>Corrosion failures. Moisture (<b>left</b>), fretting (<b>middle</b>), and brinelling (<b>right</b>) [<a href="#B27-sensors-21-02228" class="html-bibr">27</a>]. Courtesy of SKF.</p>
Full article ">Figure 6
<p>Electrical erosion failures. Excessive current (<b>left</b>) and current leakage (<b>right</b>) [<a href="#B27-sensors-21-02228" class="html-bibr">27</a>]. Courtesy of SKF.</p>
Full article ">Figure 7
<p>Plastic deformation failure. Overload (<b>left</b>) and indentation (<b>right</b>) [<a href="#B27-sensors-21-02228" class="html-bibr">27</a>]. Courtesy of SKF.</p>
Full article ">Figure 8
<p>Fracture failure. Forced (<b>left</b>) and fatigue (<b>right</b>) [<a href="#B27-sensors-21-02228" class="html-bibr">27</a>]. Courtesy of SKF.</p>
Full article ">Figure 9
<p>Plot example of the selected supervisory control and data acquisition (SCADA) variables used to develop the normality model. All of them are related to the mean value over a 10-min period.</p>
Full article ">Figure 10
<p>Out-of-range values are detected as outliers (red crosses) and assigned as missing values from the raw signal.</p>
Full article ">Figure 11
<p>Low-speed shaft temperature raw data (without outliers) versus imputed data (<b>top</b>) and zoom in of the imputed data (<b>bottom</b>).</p>
Figure 11 Cont.">
Full article ">Figure 12
<p>WT2 (WT number 2 in the wind farm) data for training and test.</p>
Full article ">Figure 13
<p>ANN model with 14 inputs, 72 neurons in the hidden layer, and 1 output.</p>
Full article ">Figure 14
<p>Minimization of the MSE, <math display="inline"><semantics> <mrow> <mi>E</mi> <mo>(</mo> <mi>w</mi> <mo>,</mo> <mi>b</mi> <mo>)</mo> </mrow> </semantics></math>, during training of WT1 (<b>left</b>). Error histogram with 20 bins of final training error over all training samples for WT1 (<b>right</b>).</p>
Full article ">Figure 15
<p>Values at each training epoch iteration for the gradient, <math display="inline"><semantics> <mrow> <msup> <mi>J</mi> <mi>T</mi> </msup> <mi>r</mi> <mrow> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, damping parameter, <math display="inline"><semantics> <mi>μ</mi> </semantics></math>, and effective number of parameters, <math display="inline"><semantics> <mi>γ</mi> </semantics></math>, for WT1.</p>
Full article ">Figure 16
<p>(<b>a</b>) ANN predicted value (<math display="inline"><semantics> <mover accent="true"> <mi>T</mi> <mo>^</mo> </mover> </semantics></math>) and target (<span class="html-italic">T</span>) value for WT1 over the train dataset. (<b>b</b>) ANN predicted value (<math display="inline"><semantics> <mover accent="true"> <mi>T</mi> <mo>^</mo> </mover> </semantics></math>) and target (<span class="html-italic">T</span>) value for WT1 over the test dataset. (<b>c</b>) Absolute difference value between the prediction and estimation, <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mi>T</mi> <mo>−</mo> </mrow> <mover accent="true"> <mi>T</mi> <mo>^</mo> </mover> <mrow> <mo>|</mo> </mrow> </mrow> </semantics></math>, for WT1 over the train dataset. (<b>d</b>) Absolute difference value between the prediction and estimation, <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mi>T</mi> <mo>−</mo> </mrow> <mover accent="true"> <mi>T</mi> <mo>^</mo> </mover> <mrow> <mo>|</mo> </mrow> </mrow> </semantics></math>, for WT1 over the test dataset. (<b>e</b>) ANN predicted value (<math display="inline"><semantics> <mover accent="true"> <mi>T</mi> <mo>^</mo> </mover> </semantics></math>) and target (<span class="html-italic">T</span>) value for WT2 over the train dataset. (<b>f</b>) ANN predicted value (<math display="inline"><semantics> <mover accent="true"> <mi>T</mi> <mo>^</mo> </mover> </semantics></math>) and target (<span class="html-italic">T</span>) value for WT2 over the test dataset. (<b>g</b>) Absolute difference value between the prediction and estimation, <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mi>T</mi> <mo>−</mo> </mrow> <mover accent="true"> <mi>T</mi> <mo>^</mo> </mover> <mrow> <mo>|</mo> </mrow> </mrow> </semantics></math>, for WT2 over the train dataset. (<b>h</b>) Absolute difference value between the prediction and estimation, <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mi>T</mi> <mo>−</mo> </mrow> <mover accent="true"> <mi>T</mi> <mo>^</mo> </mover> <mrow> <mo>|</mo> </mrow> </mrow> </semantics></math>, for WT2 over the test dataset.</p>
Full article ">Figure 17
<p>ANN indicator values (blue line) for test data, and threshold (red line).</p>
Full article ">
20 pages, 2881 KiB  
Article
A Deep Learning Framework for Recognizing Both Static and Dynamic Gestures
by Osama Mazhar, Sofiane Ramdani and Andrea Cherubini
Sensors 2021, 21(6), 2227; https://doi.org/10.3390/s21062227 - 23 Mar 2021
Cited by 12 | Viewed by 4470
Abstract
Intuitive user interfaces are indispensable for interacting with human-centric smart environments. In this paper, we propose a unified framework that recognizes both static and dynamic gestures using simple RGB vision (without depth sensing). This feature makes it suitable for inexpensive human-robot [...] Read more.
Intuitive user interfaces are indispensable for interacting with human-centric smart environments. In this paper, we propose a unified framework that recognizes both static and dynamic gestures using simple RGB vision (without depth sensing). This feature makes it suitable for inexpensive human-robot interaction in social or industrial settings. We employ a pose-driven spatial attention strategy, which guides our proposed Static and Dynamic gestures Network (StaDNet). From the image of the human upper body, we estimate his/her depth, along with the region-of-interest around his/her hands. The Convolutional Neural Network (CNN) in StaDNet is fine-tuned on a background-substituted hand gestures dataset. It is utilized to detect 10 static gestures for each hand as well as to obtain the hand image-embeddings. These are subsequently fused with the augmented pose vector and then passed to the stacked Long Short-Term Memory blocks. Thus, human-centred frame-wise information from the augmented pose vector and from the left/right hand image-embeddings is aggregated in time to predict the dynamic gestures of the performing person. In a number of experiments, we show that the proposed approach surpasses the state-of-the-art results on the large-scale Chalearn 2016 dataset. Moreover, we transfer the knowledge learned through the proposed methodology to the Praxis gestures dataset, and the obtained results also outscore the state-of-the-art on this dataset. Full article
(This article belongs to the Collection Sensors and Data Processing in Robotics)
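The abstract describes an intermediate fusion of per-frame hand image-embeddings with an augmented pose vector, followed by stacked LSTM blocks that aggregate the sequence for dynamic-gesture prediction. The PyTorch module below is a rough structural sketch of that arrangement, assuming 512-dimensional hand embeddings, the 97-component pose vector mentioned in Figure 3 and the 249 Chalearn classes; it is not the released StaDNet code.

```python
import torch
import torch.nn as nn

class GestureFusionLSTM(nn.Module):
    """Fuse per-frame hand embeddings with an augmented pose vector,
    then aggregate over time with stacked LSTMs (assumed dimensions)."""
    def __init__(self, hand_dim=512, pose_dim=97, hidden=256,
                 n_layers=2, n_dynamic_classes=249):
        super().__init__()
        self.fuse = nn.Linear(2 * hand_dim + pose_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=n_layers,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_dynamic_classes)

    def forward(self, left_emb, right_emb, pose):
        # each input: (batch, time, feature_dim)
        x = torch.cat([left_emb, right_emb, pose], dim=-1)
        x = torch.relu(self.fuse(x))
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # class scores for the whole clip

# Usage with random tensors (2 clips of 32 frames)
model = GestureFusionLSTM()
scores = model(torch.randn(2, 32, 512), torch.randn(2, 32, 512),
               torch.randn(2, 32, 97))
print(scores.shape)   # torch.Size([2, 249])
```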
Show Figures

Figure 1

Figure 1
<p>Illustration of our proposed framework. In Spatial Attention Module, we mainly have learning-based depth estimators (grey boxes), Focus on Hands (FOH) Module and Pose Pre-Processing (PP) Module. 2D skeleton is extracted by <span class="html-italic">OpenPose</span>. FOH exploits hand coordinates obtained from the skeleton and crops hand images with the help of hand depth estimators, while PP performs scale and position normalization of the skeleton with the help of skeleton depth estimator. The features from the normalized pose are extracted by Pose Augmentation and Dynamic Features Extraction Module and are fed to <span class="html-italic">StaDNet</span> together with the cropped hand images. <span class="html-italic">StaDNet</span> detects frame-wise static gestures as well as dynamic gestures in each video.</p>
Full article ">Figure 2
<p>The <span class="html-italic">Skeleton Filter</span> described in <a href="#sec5dot1dot1-sensors-21-02227" class="html-sec">Section 5.1.1</a>. Images are arranged from left to right in chronological order. The central image shows the skeleton output by the filter. The six other images show the raw skeletons output by <span class="html-italic">OpenPose</span>. Observe that—thanks to Equation (<a href="#FD1-sensors-21-02227" class="html-disp-formula">1</a>)—our filter has added the right wrist coordinates (shown only in the central image). These are obtained from the <span class="html-italic">K</span>-th frame, while they were missing in all raw skeletons from frame 1 to <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>−</mo> <mn>1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Features augmentation of the upper body. In the left image, we show 8 upper-body joint coordinates (red), vectors connecting these joints (black) and angles between these vectors (green). From all upper-body joints, we compute a <span class="html-italic">line of best fit</span> (blue). In the right image, we show all the vectors (purple) between unique pairs of upper-body joints. We also compute the angles (not shown) between the vectors and the line of best fit. From 8 upper-body joints, we obtain 97 components of the <span class="html-italic">augmented pose vector</span>.</p>
Full article ">Figure 4
<p>Illustration of <span class="html-italic">StaDNet</span> for static and dynamic gestures recognition. We perform intermediate fusion to combine hand image embeddings and augmented pose vector.</p>
Full article ">Figure 5
<p>Training curves of the proposed Convolutional Neural Network (CNN)–Long Short-Term Memory (LSTM) network for all 249 gestures of the Chalearn 2016 dataset. The network is trained in four phases, distinguished by the vertical lines.</p>
Full article ">Figure 6
<p>Illustration of the confusion matrix/heat-map of <span class="html-italic">StaDNet</span> evaluated on test set of the Chalearn 2016 isolated gestures recognition dataset. It is evident that most samples in the test set are recognized with high accuracy for all 249 gestures (diagonal entries, 86.75% overall).</p>
Full article ">Figure 7
<p>Training curves of <span class="html-italic">StaDNet</span> on the Praxis gesture dataset.</p>
Full article ">Figure 8
<p>Normalized confusion matrix of the proposed model evaluated on test set of the Praxis dataset.</p>
Full article ">Figure 9
<p>Normalized confusion matrix for our static hand gesture detector quantified on test-set of <span class="html-italic">OpenSign</span>. This figure is taken from [<a href="#B11-sensors-21-02227" class="html-bibr">11</a>] with the authors’ permission.</p>
Full article ">Figure 10
<p>Snapshots of our gesture-controlled safe human-robot interaction experiment taken from [<a href="#B11-sensors-21-02227" class="html-bibr">11</a>] with the authors’ permission. The human operator manually guides the robot to waypoints in the workspace then asks the robot to <span class="html-italic">record</span> them through a gesture. The human operator can transmit other commands to the robot like <span class="html-italic">replay, stop, resume, reteach,</span> and so forth with only hand gestures.</p>
Full article ">
3 pages, 179 KiB  
Editorial
Electronics for Sensors
by Giuseppe Ferri, Gianluca Barile and Alfiero Leoni
Sensors 2021, 21(6), 2226; https://doi.org/10.3390/s21062226 - 23 Mar 2021
Viewed by 1902
Abstract
Research on systems and circuits for interfacing sensors has always been, and will surely be, a highly prioritized, widespread, and lively topic [...] Full article
(This article belongs to the Special Issue Electronics for Sensors)
17 pages, 7531 KiB  
Article
Characterization of Pelvic Floor Activity in Healthy Subjects and with Chronic Pelvic Pain: Diagnostic Potential of Surface Electromyography
by Monica Albaladejo-Belmonte, Marta Tarazona-Motes, Francisco J. Nohales-Alfonso, Maria De-Arriba, Jose Alberola-Rubio and Javier Garcia-Casado
Sensors 2021, 21(6), 2225; https://doi.org/10.3390/s21062225 - 23 Mar 2021
Cited by 11 | Viewed by 6492
Abstract
Chronic pelvic pain (CPP) is a highly disabling disorder in women usually associated with hypertonic dysfunction of the pelvic floor musculature (PFM). The literature on the subject is not conclusive about the diagnostic potential of surface electromyography (sEMG), which could be due to [...] Read more.
Chronic pelvic pain (CPP) is a highly disabling disorder in women usually associated with hypertonic dysfunction of the pelvic floor musculature (PFM). The literature on the subject is not conclusive about the diagnostic potential of surface electromyography (sEMG), which could be due to poor signal characterization. In this study, we characterized the PFM activity of three groups of 24 subjects each: CPP patients with deep dyspareunia associated with a myofascial syndrome (CPP group), healthy women over 35 and/or parous (>35/P group, i.e., CPP counterparts) and under 35 and nulliparous (<35&NP). sEMG signals of the right and left PFM were recorded during contractions and relaxations. The signals were characterized by their root mean square (RMS), median frequency (MDF), Dimitrov index (DI), sample entropy (SampEn), and cross-correlation (CC). The PFM activity showed a higher power (>RMS), a predominance of low-frequency components (<MDF, >DI), greater complexity (>SampEn) and lower synchronization on the same side (<CC) in CPP patients, with more significant differences in the >35/P group. The same trend in differences was found between healthy women (<35&NP vs. >35/P) associated with aging and parity. These results show that sEMG can reveal alterations in PFM electrophysiology and provide clinicians with objective information for CPP diagnosis. Full article
(This article belongs to the Special Issue Biosignal Sensing and Processing for Clinical Diagnosis)
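Two of the descriptors used in this study, the RMS amplitude and the median frequency (MDF), can be computed from an sEMG segment as in the short sketch below. The sampling rate and the Welch parameters are illustrative assumptions, and the Dimitrov index, sample entropy and cross-correlation are omitted for brevity.

```python
import numpy as np
from scipy.signal import welch

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x ** 2)))

def median_frequency(x: np.ndarray, fs: float) -> float:
    """Frequency below which half of the total spectral power lies."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
    cum = np.cumsum(pxx)
    return float(f[np.searchsorted(cum, cum[-1] / 2.0)])

# Example on a synthetic 2-second segment sampled at 1 kHz (assumed rate)
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
segment = np.sin(2 * np.pi * 80 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
print(rms(segment), median_frequency(segment, fs))
```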
Show Figures

Figure 1

Figure 1
<p>Electrodes location for the surface electromyogram (sEMG) signal recording.</p>
Full article ">Figure 2
<p>sEMG signals recorded from the pelvic floor musculature (PFM) of a subject from each group: woman over 35 and/or parous (&gt;35/P, first row), woman under 35 and nulliparous (&lt;35&amp;NP, second row) and woman with Chronic pelvic pain (CPP, third row). Left and right subfigures show signals recorded from each subject’s left and right PFM, respectively.</p>
Full article ">Figure 3
<p>Box-whisker plots of root mean square (RMS) values of the left and right PFM (left and right subfigures, respectively) in patients (CPP), healthy women over 35 and/or parous (&gt;35/P) and healthy women under 35 and nulliparous (&lt;35&amp;NP) during PFM contraction and relaxation (first and second rows, respectively). (*): significant difference between groups under the ends of the brace. Red (+): outlier within the group’s data distribution.</p>
Full article ">Figure 4
<p>Box-whisker plots of MDF values of the left and right PFM (left and right subfigures, respectively) in patients (CPP), healthy women over 35 and/or parous (&gt;35/P) and healthy women under 35 and nulliparous (&lt;35&amp;NP) during PFM contraction and relaxation (first and second rows, respectively). (*): significant difference between groups under the ends of the brace. Red (+): outlier within the group’s data distribution.</p>
Full article ">Figure 5
<p>Box-whisker plots of Dimitrov index (DI) values of the left and right PFM (left and right subfigures, respectively) in patients (CPP), healthy women over 35 and/or parous (&gt;35/P) and healthy women under 35 and nulliparous (&lt;35&amp;NP) during PFM contraction and relaxation (first and second rows, respectively). (*): significant difference between groups under the ends of the brace. Red (+): outlier within the group’s data distribution.</p>
Full article ">Figure 6
<p>Box-whisker plots of sample entropy (SampEn) values of the left and right PFM (left and right subfigures, respectively) in patients (CPP), healthy women over 35 and/or parous (&gt;35/P) and healthy women under 35 and nulliparous (&lt;35&amp;NP) during PFM contraction and relaxation (first and second rows, respectively). (*): significant difference between groups under the ends of the brace. Red (+): outlier within the group’s data distribution.</p>
Full article ">Figure 7
<p>Box-whisker plots of cross-correlation (CC) values between the upper and lower sides of the left and right PFM (left and right subfigures, respectively) in patients (CPP), healthy women over 35 and/or parous (&gt;35/P) and healthy women under 35 and nulliparous (&lt;35&amp;NP) during the five PFM contractions and relaxations between them (first and second rows, respectively). (*): significant difference between groups under the ends of the brace. Red (+): outlier within the group’s data distribution.</p>
Full article ">