Sensors, Volume 20, Issue 20 (October-2 2020) – 241 articles

Cover Story: The development of a rapid, point-of-care diagnostic sensor for SARS-CoV-2 screening is important for controlling the spread of the disease. Here, we report a rapid, cost-effective, and simple cobalt-functionalized TiO2 nanotube (Co-TNT)-based electrochemical sensor, which detects the S-RBD of the spike glycoprotein present on the surface of the SARS-CoV-2 virus. The sensor is able to detect the S-RBD protein in a very short time of ~30 s, with a detection limit as low as 0.7 nM. We envisage that the detection of the S-RBD protein by the Co-TNT sensor is due to the formation of a complex between Co and the protein. We believe that the developed Co-TNT sensor has the potential to detect SARS-CoV-2 in clinical samples, including nasal and nasopharyngeal swabs and saliva.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 22907 KiB  
Article
Planetary-Scale Geospatial Open Platform Based on the Unity3D Environment
by Ahyun Lee, Yoon-Seop Chang and Insung Jang
Sensors 2020, 20(20), 5967; https://doi.org/10.3390/s20205967 - 21 Oct 2020
Cited by 22 | Viewed by 4710
Abstract
Digital twin technology, based on building a virtual digital city similar to a real one, enables the simulation of urban phenomena or the design of a city. A geospatial platform is an essential supporting component of digital twin cities. In this study, we propose a planetary-scale geospatial open platform that can be used easily in the most widely used game engine environment. The proposed platform can visualize large-capacity geospatial data in real time because it organizes and manages various types of data based on quadtree tiles. The proposed rendering tile decision method provides constant geospatial data visualization according to the camera controls of the user. The platform is implemented based on Unity3D, and therefore, one can use it easily by importing the proposed asset library. The proposed geospatial platform is available on the Asset Store. We believe that the proposed platform can meet the needs of various three-dimensional (3-D) geospatial applications.
(This article belongs to the Section Internet of Things)
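The quadtree tiling sketched in Figures 10 and 11 below (fifty 36° level-0 tiles covering the globe, each tile splitting into four children per level) can be made concrete in a few lines. The following Python sketch is our illustration, not the authors' Unity3D implementation; the function names and the (IDX, IDY) indexing origin are assumptions inferred from the figure captions.

```python
# Illustrative sketch (not the authors' code) of the quadtree tile scheme:
# fifty 36-degree level-0 tiles cover the globe, and each tile splits into
# four children per level, indexed as (IDX, IDY).

TILE0_DEG = 36.0  # level-0 tile size implied by the 50-tile level-0 grid

def tile_for(lat: float, lon: float, level: int) -> tuple:
    """Return the (IDX, IDY) tile indices containing a WGS84 coordinate."""
    size = TILE0_DEG / (2 ** level)      # tile edge length at this level
    idx = int((lon + 180.0) // size)     # column, counted eastward from 180 W
    idy = int((lat + 90.0) // size)      # row, counted northward from the pole
    return idx, idy

def children(idx: int, idy: int) -> list:
    """The four child tiles one level down (quadtree subdivision)."""
    return [(2 * idx + dx, 2 * idy + dy) for dy in (0, 1) for dx in (0, 1)]

# Example: the level-0 tile of Figure 11 (lat 18-54, lon 108-144, South Korea)
print(tile_for(37.5, 127.0, 0))   # -> (8, 3)
print(children(8, 3))             # its four level-1 subtiles
```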
Show Figures

Figure 1: Examples of city models developed using the game solution of the Google Maps platform [15]: (a) initial geospatial model from the Google Maps platform; (b) model modified and augmented by a user.
Figure 2: Web-based VWorld geospatial data: (a) VWorld service site [22]; the selected building is shown in red; (b) 5G radio wave environment and antenna visible area analysis using VWorld data.
Figure 3: Game window examples of the proposed geospatial platform.
Figure 4: Proposed geospatial platform in the game engine editor.
Figure 5: Proposed platform structures. The blue blocks pertain to the proposed asset library in the game engine.
Figure 6: Camera control for the three-dimensional (3-D) globe-based map: (a) rotation based on latitude and longitude; (b) tilt transformation at the intersection point p between the camera's principal line and the surface of the globe; (c) rotation based on the camera principal line.
Figure 7: Examples of the camera control: (a) rotation based on latitude and longitude in Figure 6a; (b) tilt transformation in Figure 6b; (c) rotation based on the camera principal line in Figure 6c.
Figure 8: Positioning a 3-D geospatial model: (a) initial position and orientation of the 3-D model; (b) rotation r1 on the Y-axis; (c) rotation r2 on the cross product of (x, y, z) and the Y-axis; (d) result of positioning the 3-D geospatial model.
Figure 9: Rendering results for representative buildings in Pyongyang, North Korea on the proposed platform: (a) Ryukyung Hotel; (b) Neungrado Stadium; (c) Kim Il Sung Square.
Figure 10: Quadtree-based tile structure (IDX, IDY): level 0 consists of 50 tiles covering the Earth's surface.
Figure 11: Example division of quadtree-based tiles (IDX, IDY): a level 0 tile (latitude: 18°–54°, longitude: 108°–144°, South Korea) is divided into four level 1 tiles or 16 level 2 tiles. (a) level 0 tile; (b) four level 1 tiles; (c) 16 level 2 tiles.
Figure 12: Examples of a sector composition: (a) aerial image; (b) 3-D terrain mesh model generated from a digital elevation model (DEM); (c) 3-D building model; (d) rendering result of geospatial data in a sector.
Figure 13: Task flow of the main loop.
Figure 14: Task flow of the proposed three types of culling test.
Figure 15: Examples of rendering failure: changes in rendering target sectors due to changes in the position and orientation of the camera: (a) the child sectors are rendered before the creation of all four child sectors; (b) the child sectors are created without the creation of a parent sector.
Figure 16: Rendering example of adjacent sectors and the quadtree structure: (a) rendering result for sectors in the game window; (b) wireframes of rendering sectors; (c) quadtree structure of rendering sectors (only colored sectors are rendered).
Figure 17: Development pipeline for the digital twin city platform: the blue blocks show the steps followed in the proposed platform.
19 pages, 4779 KiB  
Article
SAR Target Recognition via Meta-Learning and Amortized Variational Inference
by Ke Wang and Gong Zhang
Sensors 2020, 20(20), 5966; https://doi.org/10.3390/s20205966 - 21 Oct 2020
Cited by 8 | Viewed by 2756
Abstract
The challenge of small data has emerged in synthetic aperture radar automatic target recognition (SAR-ATR) problems. Most SAR-ATR methods are data-driven and require a lot of training data that are expensive to collect. To address this challenge, we propose a recognition model that incorporates meta-learning and amortized variational inference (AVI). Specifically, the model consists of global parameters and task-specific parameters. The global parameters, trained by meta-learning, construct a common feature extractor shared between all recognition tasks. The task-specific parameters, modeled by probability distributions, can adapt to new tasks with a small amount of training data. To reduce the computation and storage cost, the task-specific parameters are inferred by AVI implemented with set-to-set functions. Extensive experiments were conducted on a real SAR dataset to evaluate the effectiveness of the model. The results of the proposed approach compared with those of the latest SAR-ATR methods show the superior performance of our model, especially on recognition tasks with limited data.
(This article belongs to the Section Remote Sensors)
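To make the support/query episode structure described in the abstract concrete, here is a minimal Python sketch of episodic task sampling as commonly used in meta-learning. It is our illustration, not the authors' code; the names and the set sizes (3-way, 5-shot, 10-query) are assumptions.

```python
# Minimal sketch of episodic task sampling for meta-learning (ours, not the
# paper's): each task gets a small support set, used to adapt the
# task-specific parameters, and a query set, used to evaluate the result.
import numpy as np

rng = np.random.default_rng(0)

def sample_task(images, labels, n_way=3, k_shot=5, q_query=10):
    """Draw one N-way recognition task from a labeled SAR image pool."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))
        support += [(images[i], new_label) for i in idx[:k_shot]]
        query += [(images[i], new_label) for i in idx[k_shot:k_shot + q_query]]
    return support, query

# Toy pool: 200 fake 64x64 "SAR chips", 20 per class over 10 classes.
pool_x = rng.standard_normal((200, 64, 64))
pool_y = np.repeat(np.arange(10), 20)
support_set, query_set = sample_task(pool_x, pool_y)
print(len(support_set), len(query_set))   # 15 30
```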
Show Figures

Figure 1: The graphical model for the meta-learning framework. Open circles represent one or a group of random variables. The arrows indicate probabilistic dependencies between random variables.
Figure 2: The overall structure of our model. The model samples a task from the synthetic aperture radar (SAR) dataset and divides it into a support set and a query set. The feature extractor uses a four-layer convolutional neural network (CNN) to extract image features. The classifier identifies the category of image features, and its weight is generated by the weight predictor.
Figure 3: The network configuration of our model.
Figure 4: Illustration of set-to-set transformation. MAX denotes the max-pooling operation, CAT denotes vector concatenation, and ADD denotes vector addition.
Figure 5: The recognition rates obtained from different amounts of training data.
Figure 6: Comparison of different methods under the standard operation conditions (SOCs) test.
Figure 7: Illustration of target images at different depression angles. All targets have an azimuth angle of 45°.
Figure 8: The recognition rates of different methods at a depression angle of 30°.
Figure 9: The recognition rates of different methods at a depression angle of 45°.
Figure 10: Comparison of different methods under the depression angle test. KRLDP, MCNN, and A-ConvNet only provide recognition results for a depression angle of 30°.
Figure 11: Illustration of target images in different configurations (titled by their serial numbers). All targets have an azimuth angle of 90° and a depression angle of 17°.
Figure 12: Comparison of different methods under the configuration test.
Figure 13: Reliability diagrams of (a) our model with 100% data, (b) our model with 50% data, (c) our model with 10% data, (d) PPA with 100% data, (e) PPA with 50% data, (f) PPA with 10% data.
13 pages, 6351 KiB  
Letter
Wireless Sensing of Concrete Setting Process
by Giselle González-López, Jordi Romeu, Ignasi Cairó, Ignacio Segura, Tai Ikumi and Lluis Jofre-Roca
Sensors 2020, 20(20), 5965; https://doi.org/10.3390/s20205965 - 21 Oct 2020
Cited by 3 | Viewed by 3469
Abstract
An RFID-based wireless system to measure the evolution of the setting process of cement-based materials is presented in this paper. The system consists of a wireless RFID temperature sensor that works embedded in concrete and an external RFID reader that communicates with the embedded sensor to extract the temperature measurement it conducts. Temperature time evolution is a well-known proxy for monitoring the setting process of concrete. The RFID sensor, consisting of a UWB bow-tie antenna with a central frequency of 868 MHz matched to the EM4325 temperature chip through a T-match structure for embedded operation inside concrete, is fully characterized. Results for measurements of the full setup conducted in a real scenario are provided.
(This article belongs to the Section Electronic Sensors)
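For readers unfamiliar with the antenna-chip "impedance coupling factor" the abstract refers to, a common definition in the RFID literature is the power transmission coefficient τ = 4·Ra·Rc/|Za + Zc|², which equals 1 under conjugate matching. The sketch below is our worked example; the impedance values are invented for illustration, not taken from the paper or the EM4325 datasheet.

```python
# Power transmission coefficient between an RFID antenna and chip (a standard
# RFID-literature definition; values below are illustrative, not the paper's).

def coupling_factor(z_antenna: complex, z_chip: complex) -> float:
    """tau = 4*Ra*Rc / |Za + Zc|^2, in [0, 1]; 1 means conjugate match."""
    ra, rc = z_antenna.real, z_chip.real
    return 4 * ra * rc / abs(z_antenna + z_chip) ** 2

z_chip = complex(15, -150)                  # made-up RFID IC input impedance
print(coupling_factor(15 + 150j, z_chip))   # conjugate match -> 1.0
print(coupling_factor(30 + 120j, z_chip))   # antenna detuned by concrete -> ~0.62
```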
Show Figures

Figure 1: Block diagram of the RFID-based system for wireless temperature monitoring.
Figure 2: Wrapped RFID antenna: bow tie with T-match.
Figure 3: Antenna parameters: (a) input impedance, with Re(Z11) on the left vertical axis and Im(Z11) on the right vertical axis; (b) impedance coupling factor.
Figure 4: Antenna impedance coupling factor between the wrapped bow-tie T-match antenna and the RFID-IC for ε_r between 4 and 18.
Figure 5: Manufactured RFID temperature sensor: (a) RFID sensor without cover; (b) assembled sensor; (c) sensor after protection layer.
Figure 6: RFID reader: (a) Nordic ID eNUR RFID reader; (b) antenna of the eNUR reader.
Figure 7: Dynamic range over d_air at 868 MHz, for days 0.5, 1, 2, 4, and 8 of the concrete's setting reaction, for a sensor embedded 0.15 m (d_MUT) inside concrete (solid lines) and one embedded 0.02 m (dashed lines).
Figure 8: In-field measurement at PROMSA: (a) location of the embedded RFID sensors; (b) layout of the whole measurement scenario.
Figure 9: Main figure: temperature measured by the RFID sensor compared to temperature from the thermocouples. Top right: differential temperature inside concrete, removing the effect of room temperature.
18 pages, 7320 KiB  
Article
Robust Baseline-Free Damage Localization by Using Locally Perturbed Dynamic Equilibrium and Data Fusion Technique
by Shancheng Cao, Huajiang Ouyang and Chao Xu
Sensors 2020, 20(20), 5964; https://doi.org/10.3390/s20205964 - 21 Oct 2020
Cited by 2 | Viewed by 2150
Abstract
Mode shape-based structural damage identification methods have been widely investigated due to their good performance in damage localization. Nevertheless, the evaluation of mode shapes is severely affected by measurement noise. Moreover, conventional mode shape-based damage localization methods are normally based on a certain mode and are not effective for multi-damage localization. To tackle these problems, a novel damage localization approach is proposed based on locally perturbed dynamic equilibrium and a data fusion technique. The main contributions cover three aspects. Firstly, a joint singular value decomposition technique is proposed to simultaneously decompose several power spectral density transmissibility (PSDT) matrices for robust mode shape estimation, which statistically deals with measurement noise better than the traditional transmissibility-based methods. Secondly, with the identified mode shapes, an improved pseudo-excitation method is proposed to construct a baseline-free damage localization index by quantifying the damage-perturbed local dynamic equilibrium without knowledge of material/structural properties. Thirdly, to circumvent conflicting damage information in different modes and integrate it for robust damage localization, a data fusion scheme is developed, which performs better than the Bayesian fusion approach. Both numerical and experimental studies of cantilever beams with two cracks were conducted to validate the feasibility and effectiveness of the proposed damage localization method. It was found that the proposed method outperforms the traditional transmissibility-based methods in terms of localization accuracy and robustness.
(This article belongs to the Special Issue Sensors for Structural Damage Identification)
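As background for the mode shape estimation step, the sketch below shows the classical single-matrix version of the idea: at a natural frequency, the cross power spectral density matrix of the responses is dominated by one mode, so its first left singular vector approximates the mode shape. This is our illustration of the standard technique only; the paper's contribution, the joint SVD of several PSDT matrices, goes beyond it.

```python
# Background sketch (ours, not the authors' joint-SVD algorithm): estimate a
# mode shape from the SVD of the cross-PSD matrix at a natural frequency.
import numpy as np
from scipy.signal import csd

def mode_shape_at(freq, responses, fs, nperseg=1024):
    """Deflection shape at `freq` from an (n_sensors, n_samples) array."""
    n = responses.shape[0]
    G = np.empty((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            f, Gij = csd(responses[i], responses[j], fs=fs, nperseg=nperseg)
            G[i, j] = Gij[np.argmin(np.abs(f - freq))]   # bin nearest freq
    u, _, _ = np.linalg.svd(G)
    shape = u[:, 0] * np.exp(-1j * np.angle(u[0, 0]))    # align phase to sensor 0
    return shape.real

# Toy check: 3 sensors vibrating in a known shape at 25 Hz, plus noise.
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
true_shape = np.array([0.3, 0.8, 1.0])
x = true_shape[:, None] * np.sin(2 * np.pi * 25 * t) \
    + 0.1 * rng.standard_normal((3, t.size))
print(mode_shape_at(25, x, fs))   # proportional to true_shape (unit norm)
```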
Show Figures

Figure 1: A cantilever beam with two open cracks.
Figure 2: System natural frequency indicator based on the singular value decomposition (SVD) of the PSDT method.
Figure 3: Estimated mode shapes and their CVs for the first three modes: (a) the 1st mode shape; (b) CV of the 1st mode shape; (c) the 2nd mode shape; (d) CV of the 2nd mode shape; (e) the 3rd mode shape; (f) CV of the 3rd mode shape.
Figure 4: Damage localization results of two cracks with 5% depth reduction: (a) proposed data fusion approach; (b) Bayesian fusion.
Figure 5: Damage localization results of two cracks with 5% depth reduction under (a) different damping ratios and (b) different noise levels.
Figure 6: Noise-free damage localization results of (a) two cracks with different depth reductions and (b) different numbers of cracks with 10% depth reduction.
Figure 7: (a) Experimental set-up and (b) a cantilever beam with two cracks.
Figure 8: The acquired time-domain signals of (a) excitation force, (b) measurement point 1, (c) measurement point 10, and (d) measurement point 21.
Figure 9: System natural frequency indicator based on the SVD of the PSDT method.
Figure 10: Estimated mode shapes and their individual damage localization results: (a) the 1st mode shape; (b) damage index of the 1st mode shape; (c) the 2nd mode shape; (d) damage index of the 2nd mode shape; (e) the 3rd mode shape; (f) damage index of the 3rd mode shape.
Figure 11: Integrated damage indexes of experimental case 1: (a) proposed data fusion approach; (b) Bayesian fusion.
Figure 12: Integrated damage indexes of experimental case 2: (a) proposed data fusion approach; (b) Bayesian fusion.
12 pages, 2184 KiB  
Letter
Quantification of Arm Swing during Walking in Healthy Adults and Parkinson’s Disease Patients: Wearable Sensor-Based Algorithm Development and Validation
by Elke Warmerdam, Robbin Romijnders, Julius Welzel, Clint Hansen, Gerhard Schmidt and Walter Maetzler
Sensors 2020, 20(20), 5963; https://doi.org/10.3390/s20205963 - 21 Oct 2020
Cited by 21 | Viewed by 5223
Abstract
Neurological pathologies can alter the swinging movement of the arms during walking. The quantification of arm swing therefore has high clinical relevance. This study developed and validated a wearable sensor-based arm swing algorithm for healthy adults and patients with Parkinson's disease (PwP). Arm swings of 15 healthy adults and 13 PwP were evaluated (i) with wearable sensors on each wrist while walking on a treadmill, and (ii) with reflective markers for optical motion capture fixed on top of the respective sensor for validation purposes. The gyroscope data from the wearable sensors were used to calculate several arm swing parameters, including amplitude and peak angular velocity. Arm swing amplitude and peak angular velocity were extracted with systematic errors ranging from 0.1 to 0.5° and from −0.3 to 0.3°/s, respectively. These extracted parameters differed significantly between healthy adults and PwP, as expected based on the literature. An accurate algorithm was developed that can be used in both clinical and daily-living situations. This algorithm provides the basis for the use of wearable sensor-extracted arm swing parameters in healthy adults and patients with movement disorders such as Parkinson's disease.
(This article belongs to the Special Issue Wearable Sensors for Movement Analysis)
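The sketch below shows, in simplified form, how arm swing amplitude and peak angular velocity can be extracted from wrist gyroscope data: integrate the angular velocity about the swing axis to an angle, then measure the excursion between swing peaks. This is our illustration under simplifying assumptions (single swing axis, crude drift removal), not the validated algorithm of the paper.

```python
# Simplified arm swing parameter extraction from a wrist gyroscope (ours).
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.signal import find_peaks

def arm_swing_params(gyro_dps, fs):
    """gyro_dps: angular velocity (deg/s) about the assumed swing axis."""
    angle = cumulative_trapezoid(gyro_dps, dx=1 / fs, initial=0.0)
    angle -= angle.mean()                        # crude drift/offset removal
    peaks, _ = find_peaks(angle, distance=fs // 2)
    troughs, _ = find_peaks(-angle, distance=fs // 2)
    amplitude = angle[peaks].mean() - angle[troughs].mean()  # mean swing range
    return amplitude, float(np.max(np.abs(gyro_dps)))

fs = 100
t = np.arange(0, 10, 1 / fs)
gyro = 40 * np.cos(2 * np.pi * 1.0 * t)   # toy 1 Hz swing, 40 deg/s peak
amp, peak_vel = arm_swing_params(gyro, fs)
print(round(amp, 1), round(peak_vel, 1))  # ~12.7 deg swing, 40.0 deg/s
```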
Show Figures

Figure 1: (a) Definition of swings. (b) Placement and orientation of the right-handed coordinate system of the inertial measurement unit and reflective markers.
Figure 2: Block diagram of the arm swing algorithm.
Figure 3: The angle from the inertial measurement unit (IMU) and optical data of a healthy participant and of a patient with Parkinson's disease.
Figure 4: Bland–Altman plots of the arm swing amplitude and peak angular velocity at 2 km/h (a), 3 km/h (b), and 4 km/h (c) for the healthy adults and at the preferred speed (d) for patients with Parkinson's disease. The x-axes show the average of the IMU and optical results; the y-axes show the differences between the IMU and optical results (IMU − optical).
30 pages, 7832 KiB  
Article
Fabricating a Portable ECG Device Using AD823X Analog Front-End Microchips and Open-Source Development Validation
by Miguel Bravo-Zanoguera, Daniel Cuevas-González, Marco A. Reyna, Juan P. García-Vázquez and Roberto L. Avitia
Sensors 2020, 20(20), 5962; https://doi.org/10.3390/s20205962 - 21 Oct 2020
Cited by 19 | Viewed by 10940
Abstract
Relevant to mobile health, the design of a portable electrocardiograph (ECG) device using AD823X microchips as the analog front-end is presented. Starting with the evaluation board of the chip, open-source hardware and software components were integrated into a breadboard prototype. This required integrating the microchip with the breadboard-friendly Arduino Nano board, in addition to a data logger and a Bluetooth breakout board. The digitized ECG signal can be transmitted by serial cable, or via Bluetooth to a PC or an Android smartphone, for visualization. The data logging shield provides gigabytes of storage, as the signal is recorded to a microSD card adapter. A menu incorporates the device's several operating modes. Simulation and testing assessed system stability and performance, verifying that no sample data are lost over the length of a recording and finding the maximum sampling frequency; validation identified and resolved problems that arose in open-source development. Ultimately, a custom printed circuit board was produced, requiring advanced manufacturing options of 2.5-mil trace widths for the small-package components. The fabricated device did not degrade the AD823X noise performance, and an ECG waveform with negligible distortion was obtained. The maximum sampling rate was 2380 samples/s in serial cable transmission, whereas in microSD recording mode, a continuous ECG signal for up to 36 h at 500 Hz was verified. A low-cost, high-quality portable ECG prototype for long-term monitoring that reasonably complies with electrical safety regulations and medical equipment design practice was realized.
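Figure 8 below shows a circular buffer implementing the digital filter y[n] = x[n] + x[n − 3] on the Arduino. The following Python re-creation of that idea is ours (the paper's version runs on the Arduino platform); it shows how a ring buffer serves up the 3-sample-old value without shifting the array.

```python
# Python re-creation (ours) of the circular-buffer digital filter
# y[n] = x[n] + x[n-3] shown in the paper's Figure 8.

BUF_LEN = 4                  # must exceed the largest delay (3 samples)
buf = [0.0] * BUF_LEN
head = 0

def filter_step(x_new: float) -> float:
    """Push one sample and return y[n] = x[n] + x[n-3]."""
    global head
    buf[head] = x_new
    delayed = buf[(head - 3) % BUF_LEN]   # x[n-3], read back from the ring
    head = (head + 1) % BUF_LEN
    return x_new + delayed

print([filter_step(x) for x in [1, 0, 0, 0, 2, 0, 0, 0]])
# -> [1, 0, 0, 1, 2, 0, 0, 2]  (each impulse reappears 3 samples later)
```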
Show Figures

Figure 1: Sections of the portable ECG prototype. The circles on the person's body represent possible positions of the electrodes. The two main components of the prototype are the AFE AD8232 and the Arduino modules (an Arduino Nano board complemented with a data logger and Bluetooth HC-05 boards). The Vs power supply terminal in the AFE AD8232 section is driven by the 3.3 V output of the Arduino Nano. External power is provided by a LiPo battery. This diagrammed circuit details a serial port through pins D8 and D9 of the Arduino Nano as Rx and Tx, respectively, using the SoftwareSerial library rather than the existing Arduino hardware USB port.
Figure 2: Analog Devices' AD8232 simplified internal structure and the cardiac monitor configuration. LA, RA, and RL correspond to the electrodes; R11 and R12 to polarization resistors; R10 and R9 to resistors that protect the user in case of failure; R8 and C3 are protection to limit leakage currents <10 µA; R4, R5, R6, R7, C4, and C5 represent the section of resistors and capacitors for the low-pass filter; R6 and R7 represent resistors that determine the total gain of the circuit; C1, C2, R1, and R2 are resistors and capacitors that define the cutoff frequency of the high-pass filter; R3 (or Rcomp) shapes the filter response curve; R13 and R14 are voltage divider resistors for the midsupply reference; and C6 and C7 are noise filters. The image was modified from the source [24], Copyright © 2019, Analog Devices, Inc. All Rights Reserved.
Figure 3: Cardiac monitor ECG circuit and response using Filter Design software [25], Copyright © 2019, Analog Devices, Inc. All Rights Reserved.
Figure 4: SMD-DIP (breadboard) adapter for AD8232 and AD8233 microchips.
Figure 5: ECG monitor breadboard circuit with interchangeable AD8232 and AD8233 microchip adapters.
Figure 6: Top and bottom view of the printed circuit, highlighting distinct areas: (a,b) components of the Arduino Nano board; (c,d) data logger shield; (e) HC-05 Bluetooth module terminals and Vout, Vin, GND pins; (f) ECG circuit with AD8232 microchip; (g) push-button peripherals.
Figure 7: (a) Serial cable/Bluetooth transmission program diagram; (b) microSD card recording program diagram.
Figure 8: (a) Circular buffer for the digital filter y[n] = x[n] + x[n − 3]; (b) code to implement the digital filter with a difference equation on the Arduino platform, Bluetooth transmission, and smartphone visualization.
Figure 9: Blackout event in a synthetic ECG signal. The input signal to the system for this figure was created using MATLAB software.
Figure 10: Ishikawa diagram to identify causes of the problem. Identified problem: loss of information with the occurrence of blackout events during writing to the microSD. Possible causes: Arduino type, function generator, error by the data logger module, or SD card memory type.
Figure 11: Structure, hypothesis, and effects of the experimental design.
Figure 12: Double buffer implementation for writing data to SD memory.
Figure 13: Comparison between the AD8232 and AD8233 on a breadboard circuit using a male volunteer's ECG signal. (a) Signal obtained from the AD8232 microchip on a breadboard circuit. (b) Signal obtained from the AD8233 microchip on a breadboard circuit. Oscilloscope hardware: 14-bit Analog Discovery with 500 mV/div amplitude and 290 ms/div time base; software: Waveforms. Observe the improvement in the AD8233 output signal in (b).
Figure 14: Baseline wander during smooth gait. Lead I ECG signal from a human volunteer to check the baseline wander during a smooth gait of approximately 18 m/min. The signal was transmitted by Bluetooth at a 115,200 baud rate and a sampling rate of 500 Hz.
Figure 15: Noise output test when the electrode inputs are tied together: (a) breadboard with the AD8232 AFE microchip; (b) breadboard with the AD8233 AFE microchip; (c) PCB with the AD8232 AFE microchip. The left side illustrates the amplitude plots for each sample. The right side displays the statistical histogram of the test.
Figure 16: Lead I ECG signal comparison from a human volunteer: (a) upper trace: signal using the AD8232 microchip with oscilloscope hardware (Analog Discovery); (b) lower trace: medical desktop ECG Mitral M12USB. These ECG signal traces were taken over seven days from the same volunteer.
Figure 17: Snapshots of different operating modes with their respective visualization software: (a) ECG signal transmitted by serial cable (breadboard prototype); (b) ECG signal transmitted by serial cable (custom PCB); (c) ECG recorded to a microSD card (breadboard prototype); (d) ECG recorded to a microSD card (custom PCB); (e) ECG signal transmitted by Bluetooth via smartphone (breadboard prototype); (f) ECG signal transmitted by Bluetooth via smartphone (custom PCB). The sampling frequency for all signals was 360 Hz. The ECG signals come from the prototypes using the AD8232 microchip. These signals were all obtained from a human volunteer on the same day.
Figure 18: (a) Printed circuit board (custom PCB) top view and (b) bottom view.
22 pages, 1186 KiB  
Article
Two-Tier PSO Based Data Routing Employing Bayesian Compressive Sensing in Underwater Sensor Networks
by Xuechen Chen, Wenjun Xiong and Sheng Chu
Sensors 2020, 20(20), 5961; https://doi.org/10.3390/s20205961 - 21 Oct 2020
Cited by 8 | Viewed by 2190
Abstract
Underwater acoustic sensor networks play an important role in assisting humans to explore information under the sea. In this work, we consider the combination of sensor selection and data routing in three-dimensional underwater wireless sensor networks based on Bayesian compressive sensing and particle swarm optimization (PSO). The proposed algorithm is a two-tier PSO approach. In the first tier, a PSO-based clustering protocol is proposed that jointly considers the energy consumption and the uniformity of the cluster head distribution. In the second tier, a PSO-based routing protocol is proposed to implement inner-cluster one-hop routing and outer-cluster multi-hop routing. The nodes selected to constitute the i-th effective routing path determine which positions in the i-th row of the measurement matrix are nonzero. As a result, in this tier the protocol comprehensively considers energy efficiency, network balance, and data recovery quality. The Bayesian Cramér-Rao Bound (BCRB) in such a case is analyzed and added to the fitness function to monitor the mean square error of the reconstructed signal. The experimental results validate that our algorithm maintains a longer lifetime and postpones the appearance of the first dead node while keeping the reconstruction error lower than that of cutting-edge algorithms that are also based on distributed multi-hop compressive sensing approaches.
(This article belongs to the Special Issue Underwater Sensor Networks)
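For readers new to PSO, the sketch below shows the generic velocity/position update that the two-tier approach builds on. It is our illustration, not the paper's algorithm; the paper's variant adds clustering, routing constraints, and the BCRB term to the fitness function, and the toy fitness here is a placeholder.

```python
# Generic particle swarm optimization skeleton (ours, not the paper's two-tier
# variant): v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), then x <- x + v.
import numpy as np

rng = np.random.default_rng(1)

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f                            # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()          # update global best
    return gbest, pbest_f.min()

# Placeholder fitness; the paper's mixes energy use, balance, and the BCRB.
best, val = pso(lambda p: np.sum(p ** 2), dim=4)
print(best.round(3), round(val, 6))   # converges near the origin
```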
Show Figures

Figure 1: The segment-divided model of the 3-D network.
Figure 2: The FFT-domain signal of temperature data.
Figure 3: The layer-divided model of the 3-D network.
Figure 4: The flowchart of the proposed algorithm.
Figure 5: The demo for mapping part a of sub-particle i into Φ, where c_j denotes a random measurement coefficient.
Figure 6: An example of hierarchical priorities initialization for part b and its corresponding topological routing path in the cross-section view.
Figure 7: The comparison results in terms of total energy consumption.
Figure 8: The comparison results in terms of remaining energy.
Figure 9: The comparison results in terms of remaining living nodes.
Figure 10: The comparison results in terms of average energy consumption with different node densities.
Figure 11: The comparison results in terms of average hop numbers with two different node densities.
Figure 12: The comparison results in terms of data recovery quality.
Figure 13: The comparison results in terms of MSE at different measurement ratios.
30 pages, 4679 KiB  
Article
Human–Robot Interface for Embedding Sliding Adjustable Autonomy Methods
by Piatan Sfair Palar, Vinícius de Vargas Terres and André Schneider de Oliveira
Sensors 2020, 20(20), 5960; https://doi.org/10.3390/s20205960 - 21 Oct 2020
Cited by 4 | Viewed by 2949
Abstract
This work discusses a novel human–robot interface for a climbing robot for inspecting weld beads in storage tanks in the petrochemical industry. The approach aims to adapt robot autonomy to the operator's experience, where a remote industrial joystick works in conjunction with an electromyographic armband as inputs. This armband is worn on the forearm and can detect gestures from the operator and rotation angles of the arm. Information from the industrial joystick and the armband is used to control the robot via a Fuzzy controller. The controller works with sliding autonomy (using as inputs the angular velocity from the industrial controller, the electromyography reading, the weld bead position in the storage tank, and the rotation angles executed by the operator's arm) to generate a system capable of recognizing the operator's skill and correcting the operator's mistakes at operating time. The output of the Fuzzy controller is the level of autonomy to be used by the robot. The levels implemented are Manual (the operator controls the angular and linear velocities of the robot); Shared (speeds are shared between the operator and the autonomous system); Supervisory (the robot controls the angular velocity to stay on the weld bead, and the operator controls the linear velocity); and Autonomous (the operator defines the endpoint, and the robot controls both linear and angular velocities). These autonomy levels, along with the proposed sliding autonomy, are then analyzed through robot experiments in a simulated environment, showing the purpose of each mode. The proposed approach is evaluated in virtual industrial scenarios by distinct real operators.
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)
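A toy sketch of the sliding-autonomy idea follows: fuzzy membership functions map an input (here only a normalized EMG effort reading) to a level of autonomy that blends the operator's command with the autonomous controller's. All breakpoints and weights below are invented for illustration; the paper's controller uses four inputs (MyoRMS, MyoRoll, JoyAngular, WeldPos) and a full rule base.

```python
# Toy fuzzy sliding-autonomy sketch (ours; breakpoints/weights are made up).

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def level_of_autonomy(myo_rms: float) -> float:
    """Map a normalized EMG effort reading in [0, 1] to autonomy in [0, 1]."""
    skilled = tri(myo_rms, -0.5, 0.0, 0.6)      # low effort -> mostly manual
    struggling = tri(myo_rms, 0.4, 1.0, 1.5)    # high effort -> mostly autonomous
    total = skilled + struggling or 1.0
    return (skilled * 0.2 + struggling * 0.9) / total  # weighted defuzzification

def blended_command(operator_cmd, auto_cmd, autonomy):
    """Shared mode: linear blend of operator and autonomous commands."""
    return (1 - autonomy) * operator_cmd + autonomy * auto_cmd

a = level_of_autonomy(0.5)                # operator showing moderate effort
print(round(a, 2), round(blended_command(0.8, 0.2, a), 2))  # 0.55 0.47
```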
Show Figures

Figure 1: Storage tank in a refinery. The weld beads are highlighted in red.
Figure 2: Autonomous Inspection Robot 1 (AIR1). A robot conceived/developed to inspect weld beads in storage tanks containing liquefied petroleum gas. A profile sensor, facing downwards, is positioned at the front of the robot.
Figure 3: Sensors for perception of the environment previously attached to AIR1. Depth cameras and Lidar sensors were used to map the tanks and to predict the sizes of spherical pressure vessels (adapted from [34]).
Figure 4: System architecture. An industrial joystick is operated by sending signals via radio frequency to a receiver. The data are processed in the Arduino DUE and sent to the computer that controls the robot. The operator also wears a Myo armband to assist with the robotic control.
Figure 5: Diagram of the Fuzzy controller implemented.
Figure 6: Membership functions of the variable MyoRMS.
Figure 7: Membership functions of the variable MyoRoll.
Figure 8: Membership functions of the variable JoyAngular.
Figure 9: Membership functions of the variable WeldPos.
Figure 10: Membership functions of the variable LoA, i.e., level of autonomy.
Figure 11: Robot controls. Data from an industrial joystick and a Myo armband are fused to control the robot.
Figure 12: The velocities of the robot are published in ROS topics.
Figure 13: Block diagram of the autonomous mode. The control system reported by Terres [52] was adopted. A fuzzy controller controls the robot in an autonomous way to follow the previously identified weld bead.
Figure 14: Output of the Fuzzy controller for the input vector (50, −1.5, −0.5, −1). The output is Shared Mode, with 81% of the final speed controlled by the operator and 19% by the autonomous system.
Figure 15: Output of the Fuzzy controller for the input vector (50, −1.5, −0.5, −0.3). The level of autonomy in the output is Manual Mode.
Figure 16: Autonomous Inspection Robot 1 (AIR-1) in the V-REP simulator.
Figure 17: Scene of a refinery with storage tanks and weld beads in the V-REP simulator.
Figure 18: Navigation goal of the experiments. Point A is the initial position of the robot, and point B is the final position. The optimal course is shown in orange, and the turns are numbered.
Figure 19: Manual Mode. The yellow line is the route traveled by the robot.
Figure 20: Variation in the alignment error of the robot during experiment 1 in Manual Mode. The spikes in the error occur during turns due to the robot's topology and are circled in red, with a number representing the associated curve.
Figure 21: The experiment performed in Shared Mode, with 50% of the velocity controlled by the operator and 50% by the robot.
Figure 22: Variation in the alignment error during experiment 3 in Shared Mode.
Figure 23: Experiment carried out in Supervisory Mode.
Figure 24: Variation in the alignment error of the robot in the course of experiment 2 in Supervisory Mode.
Figure 25: Experiment performed in Autonomous Mode.
Figure 26: Variation of the alignment error during experiment 3 in Autonomous Mode.
Figure 27: Sliding Autonomy experiment.
Figure 28: Autonomy and alignment error during the sliding autonomy experiment. The upper panel shows the output of the fuzzy controller during the experiment, representing the level of autonomy. The lower panel shows the alignment error. The dashed lines indicate the instances when there were spikes in the errors and in the level of autonomy.
Figure 29: Path described by AIR1 during a curve.
Figure 30: Emphasis on a curve where difficulties were encountered.
21 pages, 1820 KiB  
Article
How to Use Heart Rate Variability: Quantification of Vagal Activity in Toddlers and Adults in Long-Term ECG
by Helmut Karl Lackner, Marina Tanja Waltraud Eglmaier, Sigrid Hackl-Wimmer, Manuela Paechter, Christian Rominger, Lars Eichen, Karoline Rettenbacher, Catherine Walter-Laager and Ilona Papousek
Sensors 2020, 20(20), 5959; https://doi.org/10.3390/s20205959 - 21 Oct 2020
Cited by 14 | Viewed by 3844
Abstract
Recent developments in noninvasive electrocardiogram (ECG) monitoring with small, wearable sensors open the opportunity to record high-quality ECG over many hours in an easy and non-burdening way. However, while their recording has been tremendously simplified, the interpretation of heart rate variability (HRV) data is a more delicate matter. The aim of this paper is to supply detailed methodological discussion and new data material in order to provide a helpful notice of HRV monitoring issues depending on recording conditions and study populations. Special consideration is given to the monitoring over long periods, across periods with different levels of activity, and in adults versus children. Specifically, the paper aims at making users aware of neglected methodological limitations and at providing substantiated recommendations for the selection of appropriate HRV variables and their interpretation. To this end, 30-h HRV data of 48 healthy adults (18–40 years) and 47 healthy toddlers (16–37 months) were analyzed in detail. Time-domain, frequency-domain, and nonlinear HRV variables were calculated after strict signal preprocessing, using six different high-frequency band definitions including frequency bands dynamically adjusted for the individual respiration rate. The major conclusion of the in-depth analyses is that for most applications that implicate long-term monitoring across varying circumstances and activity levels in healthy individuals, the time-domain variables are adequate to gain an impression of an individual's HRV and, thus, the dynamic adaptation of an organism's behavior in response to the ever-changing demands of daily life. The sound selection and interpretation of frequency-domain variables requires considerably more consideration of physiological and mathematical principles. For those who prefer using frequency-domain variables, the paper provides detailed guidance and recommendations for the definition of appropriate frequency bands in compliance with their specific recording conditions and study populations.
(This article belongs to the Special Issue ECG Monitoring System)
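The two variable families the paper compares can be sketched as follows: a time-domain measure (RMSSD) and frequency-domain band power from a resampled R-R interval (RRI) series. The code below is our illustration using the conventional 0.15–0.40 Hz high-frequency band; the paper's point is precisely that this band must be widened or dynamically adapted for toddlers' faster respiration rates. The resampling rate and Welch segment length are our choices.

```python
# Our sketch of a time-domain (RMSSD) and a frequency-domain (HF band power)
# HRV variable; the 0.15-0.40 Hz band is the conventional adult HF band.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def rmssd(rri_ms):
    """Root mean square of successive R-R interval differences (ms)."""
    return float(np.sqrt(np.mean(np.diff(rri_ms) ** 2)))

def band_power(rri_ms, band=(0.15, 0.40), fs_resample=4.0):
    """HF power: resample the irregular RRI series evenly, then integrate
    its Welch PSD over the chosen band (result in ms^2)."""
    t = np.cumsum(rri_ms) / 1000.0                      # beat times (s)
    grid = np.arange(t[0], t[-1], 1 / fs_resample)
    rri_even = interp1d(t, rri_ms)(grid)                # evenly sampled RRI
    f, psd = welch(rri_even - rri_even.mean(), fs=fs_resample, nperseg=256)
    mask = (f >= band[0]) & (f < band[1])
    return float(psd[mask].sum() * (f[1] - f[0]))

# Toy RRI series: 800 ms beats with 40 ms respiratory modulation near 0.3 Hz.
rri = 800 + 40 * np.sin(2 * np.pi * 0.3 * np.arange(300) * 0.8)
print(round(rmssd(rri), 1), round(band_power(rri), 1))
```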
Show Figures

Figure 1: Dynamically adjusted frequency bands. The figure on the left side shows the distribution of the frequency ranges for each participant during sleep. (A) Each column represents one participant. The shade of the columns represents the percentage of the power spectral density (PSD) in the respective frequency (percentage of the respective participant's total power). The dots represent the respiration frequency of each participant (the arithmetic mean of all dots for toddlers equals the value given in Table 2, i.e., 0.36 Hz or 21.7 min−1). (B) shows the percentages of participants in whom variability in the respective frequencies was continuously present, that is, in all 180 s segments over the entire recording period, displayed for HF5 and HF6. The figure illustrates how well (or poorly) the frequency bands captured the actual heart rate variability in adults and toddlers: in the majority of adults, but not in toddlers, the dominant frequencies are in the recommended frequency range for short-term heart rate variability (HRV, 0.15–0.40 Hz). The figure on the right shows the dynamic frequency band adaptation for the day.
Figure 2: Prevalent frequency ranges (frequency domain). (A) shows the proportions of normalized high-frequency (HF) power in the frequency ranges of interest, which were averaged across participants and expressed as percentages of the total heart rate (HR) variability. Adding up the depicted values to HF1 (0.15–0.40 Hz, i.e., 2nd + 3rd column, see schematic bars below parts (A,B)), HF2 (0.15–0.80 Hz), HF3 (0.24–1.04 Hz), and HF4 (0.15–1.04 Hz) provides an impression of the power spectral density estimates for these frequency bands, independently from the RRI level. Furthermore, the proportion of long-term variability can be seen in the low-frequency range (0.04–0.15 Hz). During sleep, the difference in the dominant HF frequency band between toddlers and adults (in toddlers, 0.24–0.40 Hz) can be seen clearly. (B) shows the normalized HF power during the day, and (C,D) displays the separate analyses for periods with low and high levels of activity during the day.
Figure 3: Prevalent frequency ranges in root mean square successive differences of R-R intervals (RMSSD)/SD1. (A) shows the proportions of normalized power for the time series of differences of successive R-R intervals in the frequency ranges of interest, which were averaged across participants and expressed as percentages of the total HR variability. Adding up the depicted values to HF1 (0.15–0.40 Hz), HF2 (0.15–0.80 Hz), HF3 (0.24–1.04 Hz), and HF4 (0.15–1.04 Hz) provides an impression of the power spectral density estimates for these frequency bands, independently from the RRI level. Furthermore, the proportion of long-term variability can be seen in the low-frequency range (0.04–0.15 Hz). During sleep, the difference in the dominant HF frequency band between toddlers and adults (in toddlers, 0.24–0.40 Hz) can be seen clearly. (B) shows the normalized HF power during the day, and (C,D) displays the separate analyses for periods with low and high levels of activity during the day. Please compare Figures 2 and 3 to gain an impression of the high-pass filter effect of the calculation of RMSSD or SD1.
21 pages, 7848 KiB  
Article
Photon Counting Imaging with Low Noise and a Wide Dynamic Range for Aurora Observations
by Zhen-Wei Han, Ke-Fei Song, Hong-Ji Zhang, Miao Yu, Ling-Ping He, Quan-Feng Guo, Xue Wang, Yang Liu and Bo Chen
Sensors 2020, 20(20), 5958; https://doi.org/10.3390/s20205958 - 21 Oct 2020
Cited by 3 | Viewed by 2616
Abstract
The radiation intensity of auroras observed in the far-ultraviolet (FUV) band varies dramatically with location in aerospace applications, requiring a photon counting imaging apparatus with a wide dynamic range. However, combining high spatial resolution imaging with high event rates is technically challenging. We developed an FUV photon counting imaging system for aurora observation. Our system mainly consists of a microchannel plate (MCP) stack read out using a wedge strip anode (WSA) with charge induction, and high-speed electronics such as a charge-sensitive amplifier (CSA) and pulse shaper. Moreover, we constructed an anode readout model and a time response model for the readout circuits to investigate the counting error in high-counting-rate applications. The system supports global rates of 500 kilo counts s−1 (kcps), a dark count rate of 0.610 counts s−1 cm−2 at an ambient temperature of 300 K, and 111 µm spatial resolution at 400 kcps. We demonstrate an obvious photon count loss at incident intensities close to the counting capacity of the system. To preserve image quality, the response time should be improved, and some noise performance may be sacrificed. Finally, we also describe the correlation between counting rate and imaging resolution, which further guides the design of space observation instruments.
(This article belongs to the Section Sensing and Imaging)
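The count loss near the counting capacity can be illustrated with a standard non-paralyzable dead-time model, in which an event rate n with per-event dead time τ yields a recorded rate m = n/(1 + nτ), invertible as n = m/(1 − mτ). This is our illustrative model, not the paper's electronics analysis, and the 1 µs dead time is invented.

```python
# Standard non-paralyzable dead-time model (ours) illustrating count loss
# as the incident rate approaches the system's counting capacity.

def recorded_rate(true_cps: float, tau_s: float) -> float:
    """Recorded rate m = n / (1 + n*tau) for true rate n and dead time tau."""
    return true_cps / (1.0 + true_cps * tau_s)

def corrected_rate(measured_cps: float, tau_s: float) -> float:
    """Invert the model: n = m / (1 - m*tau)."""
    return measured_cps / (1.0 - measured_cps * tau_s)

tau = 1e-6                       # 1 us effective dead time (illustrative)
for n in (50e3, 200e3, 400e3):   # rates spanning the paper's test range
    m = recorded_rate(n, tau)
    print(f"{n / 1e3:>5.0f} kcps in -> {m / 1e3:6.1f} kcps recorded "
          f"({100 * (n - m) / n:4.1f}% lost)")
```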
Show Figures

Figure 1: Schematic diagram and photograph of the FUV imaging detector. (a) Components and structure of the detector. A photoelectron is converted from a photon through the photocathode with CsI. Then, the photoelectron enters a pore of the MCP and collides with the wall to produce secondary electrons. A potential of ΔU1 across the two MCPs accelerates these electrons through angled capillaries, producing an electron cloud. A final potential of ΔU2 guides this electron cloud to the germanium film for readout. Here, ΔU1 and ΔU2 indicate the applied voltage differences from the high-voltage divider. (b) Schematic of the detector in induction readout mode. The electron cloud emanating from the MCPs can be coupled to the anode through the equivalent capacitance and finally is released through the germanium film. (c,d) Photographs of the FUV imaging detector with a wedge strip anode. In addition to providing a vacuum environment for the function of the MCPs, the enclosed cavity acts as an electromagnetic shield.
Figure 2: (a) Profile of an FUV imaging detector module. It is composed of an MCP detector with a wedge strip anode, a high-voltage divider, and three charge-sensitive spectroscopy amplifiers. (b) Photograph of an FUV imaging detector. The aluminum shell was anodized in black to satisfy the requirements of thermal control. Three-channel pulse signals from the CSAs travel down to pulse shapers through the SMA connectors.
Figure 3: A pattern of a wedge strip anode for induced charge. (a) Model of a wedge strip anode. (b) Photograph of a wedge strip anode. (c) Zoomed details from the blue dotted box at the top left of (b). (d) Zoomed details from the red dotted box in the central area of (b).
Figure 4: A lumped-parameter representation of the readout mode for induced charge. C1, C2, C3 to Cn are the capacitances between the germanium film and each anode pad, respectively. C12, C23, C13 to C1n, C2n, C3n are the inter-electrode capacitances for each anode pad. CTotal is the capacitance between the germanium film and the virtual ground. N(x,y) represents the electron cloud position used in the calculation. Rg(x,y) is the germanium film's surface resistance between any node N(x,y) and the discharge resistance RG.
Figure 5: Block diagram of the readout circuits with the wedge strip anode. The labels S, W, and Z in each module correspond to the channels of the strip electrode, wedge electrode, and residual electrode, respectively. In practice, these three channels can be extended to more channels for other anode geometry configurations.
Figure 6: Schematic drawing of the CSA. The oscillogram in the red dotted box at the top left shows several narrow pulses of input current, and long-tail pulses with a duration of RfCf are shown in the red dotted box at the top right.
Figure 7: Current flow diagram of the CSA. The current ID from the MCP detector splits into the currents if and iin. The current ib (dotted line) is typically ignored because of the OTA's high input impedance. In the same way, the current iL amounts to the convergence of the currents if and io. Node Vin and node Vout are presented in the form of voltage.
Figure 8: Circuit topology of the pulse shaper. The oscillogram in the red dotted box at the bottom left shows several long-tail pulses from the CSA. Correspondingly, quasi-Gaussian pulses with peaking time τp, shown in the red dotted box at the bottom right, are generated at the shaper output.
Figure 9: (a) Schematic of the experimental setup for the system test. (b) Photograph of the laboratory test apparatus. Except for the MCP detector and the photodiode in the vacuum chamber, the remaining components, such as the CSA and the high-voltage power supply, were placed outside. The vacuum chamber was chosen not only for the FUV band but also for a dark background.
Figure 10: The readout noise of the electronic system: (a) the distribution of noise density without the system noise reduction technique; (b) the noise histogram and its Gaussian fit for (a); (c) the distribution of noise density after system noise reduction; (d) the noise histogram and its Gaussian fit for (c).
Figure 11: The readout noise as a function of the MCP detector's capacitance CD, feedback capacitance Cf, peaking time of the pulse shaper τp, and input capacitance of the readout circuits Ci. The total input capacitance consists of the anode capacitance and the parasitic capacitance at the input terminal.
Figure 12: Background noise measured with the incident source removed under different ambient temperature conditions (TA). Long collection times, such as 120 s, were selected to avoid random error in the experiment: (a) TA = 300 K, ND = 0.610 c/(s cm2); (b) TA = 310 K, ND = 0.770 c/(s cm2); (c) TA = 320 K, ND = 1.405 c/(s cm2); (d) TA = 330 K, ND = 1.793 c/(s cm2).
Figure 13: Statistical characteristics of dark counts: (a) continuous collection for 360 min on a dark background; (b) dark count variance over collection time; (c) dark count changes with different pixel areas; (d) dark counts in the same area with different center points. The selected area should avoid the edge of the image because the edge is distorted.
Figure 14: Counting rate at the output end as a function of incident intensity. The system counting error is not apparent when the incident intensity is below 50 kcps, regardless of whether the pulse shaper's peaking time is 62.5 ns or 1000 ns. Nevertheless, the counting error gradually appears with increasing incident photon rates, as shown in the red dotted box in (a). More details of the curves in this area are shown in (b).
Figure 15: The pattern of the resolution mask used for the experiment. The nine modules are distributed in graphic arrays. Each module contains line pairs with group numbers 1 and 2.
Figure 16: Imaging of the resolution mask with different incident intensities (Iin) and peaking times of the pulse shaper (Tp). The edges of the images are slightly distorted due to the mutual capacitance between anode electrodes: (a) Iin = 100 kcps, Tp = 500 ns; (b) Iin = 200 kcps, Tp = 500 ns; (c) Iin = 300 kcps, Tp = 500 ns; (d) Iin = 400 kcps, Tp = 500 ns; (e) Iin = 400 kcps, Tp = 200 ns; (f) Iin = 400 kcps, Tp = 100 ns.
Full article ">Figure 16 Cont.
<p>Imaging of the resolution mask with different incident intensities (<span class="html-italic">I</span><span class="html-italic"><sub>in</sub></span>) and peaking times for the pulse shaper (<span class="html-italic">T</span><span class="html-italic"><sub>p</sub></span>). The edges of the images are slightly distorted due to the mutual capacitance between anode electrodes: (<b>a</b>) <span class="html-italic">I</span><span class="html-italic"><sub>in</sub></span> = 100 kcps, <span class="html-italic">T</span><span class="html-italic"><sub>p</sub></span> = 500 ns; (<b>b</b>) <span class="html-italic">I</span><span class="html-italic"><sub>in</sub></span> = 200 kcps, <span class="html-italic">T</span><span class="html-italic"><sub>p</sub></span> = 500 ns; (<b>c</b>) <span class="html-italic">I</span><span class="html-italic"><sub>in</sub></span> = 300 kcps, <span class="html-italic">T</span><span class="html-italic"><sub>p</sub></span> = 500 ns; (<b>d</b>) <span class="html-italic">I</span><span class="html-italic"><sub>in</sub></span> = 400 kcps, <span class="html-italic">T</span><span class="html-italic"><sub>p</sub></span> = 500 ns; (<b>e</b>) <span class="html-italic">I</span><span class="html-italic"><sub>in</sub></span> = 400 kcps, <span class="html-italic">T</span><span class="html-italic"><sub>p</sub></span> = 200 ns; (<b>f</b>) <span class="html-italic">I</span><span class="html-italic"><sub>in</sub></span> = 400 kcps, <span class="html-italic">T</span><span class="html-italic"><sub>p</sub></span> = 100 ns.</p>
Full article ">Figure 17
<p>The image resolution plot versus incident photon counting rate with different peaking times of the pulse shaper. The resolution value that can be achieved by the system applies a fixed scale from the 1951 USAF resolution mask. For a specific peaking time of the pulse shaper, the resolution will decrease rapidly with a massive increase in incident intensity.</p>
Full article ">Figure 18
<p>Special pattern imaging to verify locally high incident intensity: (<b>a</b>) incident intensity at 200 kcps; (<b>b</b>) incident intensity at 300 kcps; (<b>c</b>) incident intensity at 400 kcps; (<b>d</b>) incident intensity at 500 kcps.</p>
Full article ">
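The captions for Figures 6-8 describe a standard signal chain: a narrow detector current pulse is integrated by the CSA into a long-tail pulse with time constant R_f C_f, which the shaper turns into a quasi-Gaussian pulse peaking at τ_p. A minimal numerical sketch of that chain follows; the sample rate, R_f C_f value, and shaper order/time constant are all illustrative assumptions, not values from the paper.

```python
# Sketch of the CSA + semi-Gaussian shaper chain (all numbers are assumptions).
import numpy as np

FS = 1e8                                  # sample rate, 100 MHz (assumed)
t = np.arange(0.0, 20e-6, 1.0 / FS)

current = np.zeros_like(t)                # narrow detector current pulse (~10 ns)
current[: max(int(10e-9 * FS), 1)] = 1.0

tau_fall = 5e-6                           # assumed R_f * C_f
csa_out = np.convolve(current, np.exp(-t / tau_fall))[: t.size]  # long-tail pulse

n, tau = 4, 125e-9                        # semi-Gaussian order and time constant (assumed)
h = (t / tau) ** n * np.exp(-t / tau)     # quasi-Gaussian weighting, peaks at n * tau
shaped = np.convolve(current, h)[: t.size]

print(f"CSA tail ~ {tau_fall * 1e6:.0f} us; shaper peaking time ~ "
      f"{t[np.argmax(shaped)] * 1e9:.0f} ns (= n*tau = {n * tau * 1e9:.0f} ns)")
```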
23 pages, 10225 KiB  
Article
Privacy-Preserved Fall Detection Method with Three-Dimensional Convolutional Neural Network Using Low-Resolution Infrared Array Sensor
by Shigeyuki Tateno, Fanxing Meng, Renzhong Qian and Yuriko Hachiya
Sensors 2020, 20(20), 5957; https://doi.org/10.3390/s20205957 - 21 Oct 2020
Cited by 25 | Viewed by 4080
Abstract
Due to the rapid aging of the population in recent years, the number of elderly people in hospitals and nursing homes is increasing, resulting in a shortage of staff. The condition of elderly residents therefore requires real-time attention, especially when dangerous situations such as falls occur; if staff cannot find and respond to them promptly, the consequences can be serious. Many human motion detection systems have been developed for this purpose, most based on portable devices attached to the user's body or on external sensing devices such as cameras. However, portable devices can be inconvenient for users, while optical cameras are affected by lighting conditions and raise privacy concerns. In this study, a human motion detection system using a low-resolution infrared array sensor was developed to protect both the safety and the privacy of people cared for in hospitals and nursing homes. The proposed system overcomes the above limitations and has a wide range of applications. Using a three-dimensional convolutional neural network, it can detect eight kinds of motion, of which falling is the most dangerous. In fall detection experiments with 16 participants and cross-validation, the proposed method achieved an accuracy of 98.8% and an F1-measure of 94.9%, respectively 1% and 3.6% higher than those of a long short-term memory network, demonstrating its feasibility for real-time practical application. Full article
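As a companion to the abstract, here is a minimal sketch (not the authors' exact network) of a 3D CNN that classifies short windows of low-resolution infrared frames into motion classes; the 8 × 8 frame size, 24-frame window, and all layer sizes are assumptions chosen for illustration.

```python
# Minimal 3D CNN sketch for motion classification from IR frame windows.
# Assumed shapes: 8x8-pixel frames, 24-frame windows, 8 motion classes.
import torch
import torch.nn as nn

class Motion3DCNN(nn.Module):
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # convolves (time, H, W) jointly
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 2 * 2, 64),  # matches a (24, 8, 8) input window
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, frames, height, width)
        return self.classifier(self.features(x))

logits = Motion3DCNN()(torch.randn(4, 1, 24, 8, 8))  # -> shape (4, 8)
```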
(This article belongs to the Special Issue Low Cost Mid-Infrared Sensor Technologies)
Show Figures

Figure 1. The M5Stack and the infrared array sensor.
Figure 2. Example of far-infrared imaging: (a) original image; (b) far-infrared imaging.
Figure 3. Flow chart of the proposed system.
Figure 4. Example of temperature distribution: (a) data before Gaussian filtering; (b) data after Gaussian filtering; (c) data of background; (d) data after background subtraction.
Figure 5. The human detection in the infrared data.
Figure 6. The adjusted center point of the target in the infrared data.
Figure 7. A 2D convolution and a 3D convolution: (a) description of the 2D convolutional neural network (CNN); (b) description of the 3D CNN.
Figure 8. Flow chart of the 3D convolutional neural network (CNN).
Figure 9. Flow chart of the long short-term memory (LSTM) network.
Figure 10. Image description of D_L and D_S.
Figure 11. Diagram of experiment environment.
Figure 12. Correct answer rates of LSTM network and 3D CNN for each frame number.
Figure 13. Multi-target detection.
10 pages, 1938 KiB  
Communication
Development of Magnetic Nanobeads Modified by Artificial Fluorescent Peptides for the Highly Sensitive and Selective Analysis of Oxytocin
by Yoshio Suzuki
Sensors 2020, 20(20), 5956; https://doi.org/10.3390/s20205956 - 21 Oct 2020
Cited by 4 | Viewed by 2405
Abstract
We describe two novel fluorescent peptides (compounds 1 and 2) targeting oxytocin, in which a boron-dipyrromethenyl group serves as the fluorophore bound to an artificial peptide based on the oxytocin receptor, and their application to the analysis of oxytocin levels in human serum using nanometer-sized magnetic beads modified with the fluorescent peptides (FMB-1 and FMB-2). Under the optimized experimental protocols, FMB-1 and FMB-2 emitted little fluorescence on their own but much stronger fluorescence when associated with oxytocin. The detection limit of oxytocin by FMB-2 was 0.4 pM, approximately 37.5 times lower (i.e., more sensitive) than that of conventional methods such as ELISA. Using these fluorescent sensors, oxytocin was specifically detected over a wide linear range with high sensitivity, good reusability, stability, precision, and reproducibility. This fluorescent sensor-based detection system thus enables the measurement of oxytocin levels in human serum, with widespread applications for oxytocin assays across varied research fields. Full article
(This article belongs to the Special Issue Optical Probes and Sensors)
Show Figures

Figure 1. Chemical structures and peptide sequences of artificial fluorescent peptides for the measurement of oxytocin levels (designated compound 1 and compound 2).
Figure 2. Schematic representation of (a) the immobilization of fluorescent peptides onto the surface of fluorescent magnetic beads (designated FMB-1 and FMB-2) and (b) the experimental procedure for the measurement of oxytocin levels using these beads.
Figure 3. Fluorescence spectra of FMB-2 before and after addition of varying concentrations of oxytocin. [FMB-2] = 40 μg/mL; [oxytocin] = 0-1000 pM; solvent = 20.0 mM HEPES buffer (pH 7.0); excitation wavelength = 490 nm.
Figure 4. Fluorescence intensities (at 525 nm) of FMB-2 after the addition of different concentrations of oxytocin (a), the fluorescence ratio of FMB-2 after the addition of oxytocin in buffer at different pH (b), the reusability (c), and storage stability (d) of fluorescent magnetic beads. [FMB-2] = 40 μg/mL; excitation wavelength, 490 nm; storage temperature, 4 °C.
11 pages, 2604 KiB  
Letter
Non-Contact Monitoring on the Flow Status inside a Pulsating Heat Pipe
by Yang Chen, Yongqing He and Xiaoqin Zhu
Sensors 2020, 20(20), 5955; https://doi.org/10.3390/s20205955 - 21 Oct 2020
Cited by 3 | Viewed by 2393
Abstract
The paper presents a concept of thermal-to-electrical energy conversion based on the oscillatory motion of magnetic fluid slugs, with potential applications in the field of sensors. A pulsating heat pipe (PHP) is used to produce a vapor-magnetic fluid plug-slug flow in a snake-shaped capillary tube. As the magnetic fluid is magnetized by a permanent magnet, the slugs of magnetic fluid passing through the copper coils vary the magnetic flux and produce an electromotive force. The peak values of the induced voltage observed in our tests range from 0.1 mV to 4.4 mV. The effects of slug velocity, heat input and magnetic particle volume concentration on the electromotive force are discussed. Furthermore, a theoretical model is established that accounts for the velocity of the working fluid, the inner radius of the PHP and the contact angle between the working fluid and the pipe wall. The theoretical and experimental results are compared, and the influences of tube inner radius, working-fluid velocity and contact angle on the induced electromotive force are analyzed. Full article
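The conversion principle can be illustrated with Faraday's law, e(t) = -N dΦ/dt. The sketch below uses made-up coil and flux parameters (not values from the paper), chosen only so the peak lands near the reported millivolt scale; it shows why faster slugs give larger peak voltages.

```python
# Hedged illustration of slug-induced EMF via Faraday's law.
# All numerical values are illustrative assumptions, not data from the paper.
import numpy as np

N_TURNS = 200           # coil turns (assumed)
PHI_MAX = 1.5e-7        # peak linked flux in Wb (assumed)
COIL_HALF_WIDTH = 5e-3  # m, effective axial extent of the coil (assumed)
v = 0.2                 # slug velocity in m/s (assumed)

t = np.linspace(-0.1, 0.1, 2001)                      # s
x = v * t                                             # slug position vs. coil center
phi = PHI_MAX * np.exp(-(x / COIL_HALF_WIDTH) ** 2)   # bell-shaped flux linkage
emf = -N_TURNS * np.gradient(phi, t)                  # Faraday's law, numerically

print(f"peak |EMF| ~ {np.max(np.abs(emf)) * 1e3:.2f} mV")
# A faster slug compresses the flux pulse in time, raising dPhi/dt and hence the
# peak EMF, consistent with the reported dependence on slug velocity.
```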
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1. Schematic of the typical structure of a pulsating heat pipe (PHP).
Figure 2. Schematic of a sensor used for monitoring temperature difference.
Figure 3. Three situations in which a liquid slug passes through a single coil.
Figure 4. Schematic diagram of the theoretical model of the induced electromotive force: (a) liquid slug passing through a single coil; (b) liquid slug passing through a multi-turn coil.
Figure 5. Schematic of the whole experimental system.
Figure 6. The output voltage of three working fluids at a heat input of 90 W.
Figure 7. The induced voltage signal for different heat inputs.
Figure 8. Variation of the induced electromotive force with working-fluid velocity for different contact angles between the working fluid and the pipe wall.
Figure 9. Variation of the induced electromotive force with working-fluid velocity for different inner radii of the PHP.
Figure 10. The voltage waveform for 3.83% magnetic fluid at 150 W.
17 pages, 37699 KiB  
Article
Development of a Linear Acoustic Array for Aero-Acoustic Quantification of Camber-Bladed Vertical Axis Wind Turbine
by Abdul Hadi Butt, Bilal Akbar, Jawad Aslam, Naveed Akram, Manzoore Elahi M Soudagar, Fausto Pedro García Márquez, Md. Yamin Younis and Emad Uddin
Sensors 2020, 20(20), 5954; https://doi.org/10.3390/s20205954 - 21 Oct 2020
Cited by 13 | Viewed by 3598
Abstract
Vertical axis wind turbines (VAWT) are a source of renewable energy used for both industrial and domestic purposes. The noise characteristics of a VAWT are an important performance parameter of the turbine. This study focuses on the development of a linear microphone array and the measurement of acoustic signals from a cambered five-bladed 45 W VAWT in an anechoic chamber at different tip speed ratios (TSR). The sound pressure level spectrum of the VAWT shows that tonal noise, such as the blade passing frequency, dominates at lower frequencies, whereas broadband noise extends across the audible frequency range. The measurements show that the noise from the source is dominated by aerodynamic noise arising from vortex generation and trailing-edge serrations. The results also suggest that dynamic stall occurs in the lower TSR region, making small TSR values unsuitable for a quiet VAWT. This paper compares the results of the linear aeroacoustic array with those of a higher-resolution 128-MEMS acoustic camera and finds a 3 dB margin between the two systems at lower TSR values. Given this comparison with the NORSONIC camera and its resolution, the results support the use of the 8-microphone linear array for small-radius rotating machinery. These observations serve as a basis for noise reduction and blade optimization techniques. Full article
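The tonal content mentioned above can be anticipated with simple arithmetic: for an n-bladed rotor, the blade passing frequency is BPF = n × rpm / 60, with tonal peaks expected at its harmonics. A quick worked example follows; the rotor speed is an assumption, not a measured value from the paper.

```python
# Blade passing frequency for a five-bladed rotor (illustrative rotor speed).
N_BLADES = 5      # SAV-45 is five-bladed (from the paper)
rpm = 240         # assumed rotor speed

bpf = N_BLADES * rpm / 60.0
print(f"BPF = {bpf:.1f} Hz, harmonics at {[round(k * bpf, 1) for k in (1, 2, 3)]}")
# -> BPF = 20.0 Hz, harmonics at [20.0, 40.0, 60.0]
```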
(This article belongs to the Special Issue Sensors for Wind Turbine Fault Diagnosis and Prognosis)
Show Figures

Figure 1. Experimental configuration of the NUST anechoic chamber.
Figure 2. (a) Acoustic carpet-sheet linear array; (b) NORSONIC camera.
Figure 3. Linear microphone array placed in front of the SAV-45 vertical axis wind turbine (VAWT).
Figure 4. SAV-45 VAWT schematic.
Figure 5. Microphone array mid-rotor configuration.
Figure 6. Broadside linear array resolution at 1 kHz.
Figure 7. Array hardware topology.
Figure 8. In-house code structure.
Figure 9. Frequency spectrum for the five-bladed VAWT at axial distances of 30 mm, 60 mm and 90 mm from the microphone array (TSR 0.79).
Figure 10. Frequency spectrum at three different TSR values at 60 mm axial distance from the source.
Figure 11. Frequency spectrum of the SAV-45 using the acoustic camera for three TSR values.
Figure 12. Frequency spectrum (NORSONIC camera, blue; microphone array, red).
Figure 13. Error-quantification spectrum for the linear array compared to the NORSONIC camera.
Figure 14. Data validation. Red: CFD simulation; blue: Weber's experiment; green: SAV-45 array.
Figure 15. Data validation with the Pearson three-bladed VAWT (TSR variation).
22 pages, 2744 KiB  
Article
Pervasive Lying Posture Tracking
by Parastoo Alinia, Ali Samadani, Mladen Milosevic, Hassan Ghasemzadeh and Saman Parvaneh
Sensors 2020, 20(20), 5953; https://doi.org/10.3390/s20205953 - 21 Oct 2020
Cited by 14 | Viewed by 4271
Abstract
Automated lying-posture tracking is important in preventing bed-related disorders, such as pressure injuries, sleep apnea, and lower-back pain. Prior research studied in-bed lying posture tracking using sensors of different modalities (e.g., accelerometer and pressure sensors). However, there remain significant gaps in research regarding how to design efficient in-bed lying posture tracking systems. These gaps can be articulated through several research questions, as follows. First, can we design a single-sensor, pervasive, and inexpensive system that can accurately detect lying postures? Second, what computational models are most effective in the accurate detection of lying postures? Finally, what physical configuration of the sensor system is most effective for lying posture tracking? To answer these important research questions, in this article we propose a comprehensive approach for designing a sensor system that uses a single accelerometer along with machine learning algorithms for in-bed lying posture classification. We design two categories of machine learning algorithms based on deep learning and traditional classification with handcrafted features to detect lying postures. We also investigate what wearing sites are the most effective in the accurate detection of lying postures. We extensively evaluate the performance of the proposed algorithms on nine different body locations and four human lying postures using two datasets. Our results show that a system with a single accelerometer can be used with either deep learning or traditional classifiers to accurately detect lying postures. The best models in our approach achieve an F1 score that ranges from 95.2% to 97.8% with a coefficient of variation from 0.03 to 0.05. The results also identify the thighs and chest as the most salient body sites for lying posture tracking. Our findings in this article suggest that, because accelerometers are ubiquitous and inexpensive sensors, they can be a viable source of information for pervasive monitoring of in-bed postures. Full article
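A minimal sketch of the feature-based branch of such a pipeline is given below; the feature set, window length, class labels, and classifier settings are illustrative assumptions rather than the paper's exact choices, and the data are synthetic.

```python
# Sketch: windowed tri-axial accelerometer data -> handcrafted features -> ensemble trees.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(w):
    """w: (n_samples, 3) accelerometer window -> small handcrafted feature vector."""
    mag = np.linalg.norm(w, axis=1)
    return np.concatenate([w.mean(axis=0), w.std(axis=0),   # gravity direction + motion
                           [mag.mean(), mag.std()]])

rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(size=(128, 3))) for _ in range(200)])
y = rng.integers(0, 4, size=200)   # stand-in labels: supine/prone/left/right

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```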
(This article belongs to the Special Issue Body Worn Sensors and Related Applications)
Show Figures

Figure 1. The process of training the feature-based ensemble-tree and deep LSTM classifiers. L_kn represents fully connected unit n at layer k; k is the number of fully connected layers, and m is the number of output classes.
Figure 2. (a) Visualization of accelerometer sensor positioning and (b) activity prevalence for the Class-Act dataset.
Figure 3. (a) Visualization of accelerometer sensor positioning and (b) activity prevalence for the integrated dataset.
Figure 4. Mean and standard deviation of the magnitude of the accelerometer data for different lying postures over all subjects.
Figure 5. Importance of the features extracted from the sensor data for lying posture tracking.
Figure 6. Confusion matrix of the ensemble-tree classifier for classifying lying postures into supine, prone, and left side for the thighs, ankles, chest, arms, and wrists using the Class-Act dataset.
Figure 7. Confusion matrix of the AdaLSTM classifier for classifying lying postures into supine, prone, and left side for the thighs, ankles, chest, arms, and wrists using the Class-Act dataset.
Figure 8. Comparison of the mean and CoV of the F1 score (%) of the ensemble-tree and AdaLSTM classification models for nine body locations on the Class-Act dataset using LOSO validation.
Figure 9. Confusion matrix of the ensemble-tree classifier for classifying lying postures into supine, prone, and left side for the thighs, ankles, arms, and wrists.
Figure 10. Confusion matrix of the AdaLSTM classifier for classifying lying postures into supine, prone, and left side for the thighs, ankles, arms, and wrists.
16 pages, 1885 KiB  
Article
Tetramethylbenzidine: An Acoustogenic Photoacoustic Probe for Reactive Oxygen Species Detection
by Roger Bresolí-Obach, Marcello Frattini, Stefania Abbruzzetti, Cristiano Viappiani, Montserrat Agut and Santi Nonell
Sensors 2020, 20(20), 5952; https://doi.org/10.3390/s20205952 - 21 Oct 2020
Cited by 21 | Viewed by 5561
Abstract
Photoacoustic imaging is attracting a great deal of interest owing to its distinct advantages over other imaging techniques such as fluorescence or magnetic resonance imaging. The availability of photoacoustic probes for reactive oxygen and nitrogen species (ROS/RNS) could shed light on a plethora of biological processes mediated by these key intermediates. Tetramethylbenzidine (TMB) is a non-toxic and non-mutagenic colorless dye that develops a distinctive blue color upon oxidation. In this work, we have investigated the potential of TMB as an acoustogenic photoacoustic probe for ROS/RNS. Our results indicate that TMB reacts with hypochlorite, hydrogen peroxide, singlet oxygen, and nitrogen dioxide to produce the blue oxidation product, whereas other ROS, such as the superoxide radical anion, sodium peroxide, hydroxyl radical, or peroxynitrite, yield a colorless oxidation product. TMB does not penetrate the Escherichia coli cytoplasm but is capable of detecting singlet oxygen generated in its outer membrane. Full article
(This article belongs to the Special Issue Luminescent/Colorimetric Probes and Sensors)
Show Figures

Graphical abstract
Figure 1. (A) Absorption spectra of TMB, 2 and 3 in PBS. (B) Enhancement of the TMB photoacoustic waveforms upon successive NaClO additions in PBS ([TMB] = 200 μM; [NaClO] = 0-150 μM; λ_exc = 652 nm). Inset: photoacoustic maximum amplitude as a function of the amount of NaClO added. The excitation wavelengths were 652 nm (2; red line) and 450 nm (3; blue line). [TMB] = 200 μM.
Figure 2. (A) Determination of ϕ for 2 (red line) in PBS (pH 7.4). The reference photoacoustic wave was obtained using an optically matched bromocresol purple solution (BCP; ϕ = 1 (0.1 M NaOH); blue line) [46] at the excitation wavelength (λ_exc = 652 nm). (B) Photoacoustic maximum amplitude vs. concentration of NaClO added in PBS (pH 7.4). [TMB] = 200 μM; λ_exc = 652 nm.
Figure 3. Photoacoustic maximum amplitude of TMB after reacting with different ROS and RNS. (A) Hydroxyl radical (•OH), generated via sodium nitrite photolysis in water (λ_exc = 354 ± 20 nm). (B) (2,2,6,6-Tetramethylpiperidin-1-yl)oxyl (TEMPO). (C) Sodium nitrite (NaNO2). (D) Singlet oxygen (1O2), generated via Rose Bengal irradiation (λ_exc = 520 ± 18 nm). (E) Hydrogen peroxide (H2O2). (F) Nitrogen dioxide (•NO2), generated via sodium nitrite acid decomposition. (G) Superoxide radical anion (O2•−). (H) Peroxynitrite (ONOO−). (I) Nitric oxide (•NO), generated via sodium nitroprusside decomposition. As a control for B, C, E, F, G and H, 100 μM NaClO was added after acquisition of the photoacoustic signal and the signal was measured again (red and blue lines, respectively).
Figure 4. Photoacoustic maximum amplitude enhancement of TMB for (A) untransformed DH10β (blue squares) and miniSOG-expressing (red circles, black rhombuses, and magenta triangles for miniSOG, miniSOG Q103L and miniSOG Q103V, respectively) E. coli cells at different irradiation times (lamp power 14.0 mW/cm²; λ_irr = 459 ± 10 nm). The values of the 1O2 quantum yield (Φ_Δ) and the relative generation of other ROS (Φ_OtherROS) by miniSOG and its mutants Q103L and Q103V are taken from [68]. (B) E. coli ATCC 25922 cells in the presence (200 μM; black circles) and absence (blue triangles) of MDPyTMPyP at different irradiation times (lamp power 5.5 mW/cm²; λ_irr = 420 ± 20 nm). The control experiment was performed by coincubating MDPyTMPyP with 50 mM NaN3 (red squares).
Scheme 1. TMB (3,3′,5,5′-tetramethylbenzidine) reactivity towards oxidant agents.
Scheme 2. (a) Standard reduction potentials for the different tested ROS, RNS, TEMPO, and TMB [55,56,57,58,62]. (b) Possible reaction pathways of TMB towards different ROS or RNS. (c) Table summarizing the reactivity towards TMB of the different ROS and RNS studied.
10 pages, 1864 KiB  
Letter
Quality Assessment during Incubation Using Image Processing
by Sheng-Yu Tsai, Cheng-Han Li, Chien-Chung Jeng and Ching-Wei Cheng
Sensors 2020, 20(20), 5951; https://doi.org/10.3390/s20205951 - 21 Oct 2020
Cited by 7 | Viewed by 2770
Abstract
The fertilized egg is an indispensable production platform for making egg-based vaccines. This study was divided into two parts. In the first part, image processing was employed to analyze the absorption spectrum of fertilized eggs; the results show that the 580-nm band had the most significant change. In the second part, a 590-nm-wavelength LED was selected as the light source for the developed detection device. Using this device, sample images (in RGB color space) of the eggs were obtained every day during the experiment. After calculating the grayscale value of the red layer, the receiver operating characteristic curve was used to analyze the daily data to obtain the area under the curve. Subsequently, the best daily grayscale value for classifying unfertilized eggs and dead-in-shell eggs was obtained. Finally, an industrial prototype of the device designed and fabricated in this study was operated and verified. The results show that the accuracy for detecting unfertilized eggs was up to 98% on the seventh day, with the sensitivity and Youden’s index being 82% and 0.813, respectively. On the ninth day, both accuracy and sensitivity reached 100%, and Youden’s index reached a value of 1, showing good classification ability. Considering the industrial operating conditions, this method was demonstrated to be commercially applicable because, when used to detect unfertilized eggs and dead-in-shell eggs on the ninth day, it could achieve accuracy and sensitivity of 100% at the speed of five eggs per second. Full article
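The threshold-selection step described above (ROC analysis plus Youden's index) can be prototyped in a few lines; the grayscale distributions below are synthetic stand-ins, not the paper's data, and the assumption that unfertilized eggs transmit more light is made only for the illustration.

```python
# Sketch: pick the daily grayscale cutoff that maximizes Youden's index J = TPR - FPR.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
gray_fertile = rng.normal(120, 10, 300)    # assumed red-channel grayscale values
gray_unfertile = rng.normal(150, 10, 60)

y = np.r_[np.zeros(300), np.ones(60)]      # 1 = unfertilized
score = np.r_[gray_fertile, gray_unfertile]

fpr, tpr, thresholds = roc_curve(y, score)
youden = tpr - fpr                          # J = sensitivity + specificity - 1
best = np.argmax(youden)
print(f"AUC={roc_auc_score(y, score):.3f}  cutoff={thresholds[best]:.1f}  J={youden[best]:.3f}")
```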
Show Figures

Figure 1. Spectrum experimental setup.
Figure 2. (a) Detection device ((1) the image-capturing zone; (2) the projection-marking zone); (b) schematic of image capturing; (c) schematic of projection marking.
Figure 3. (a) Average values of fertilized egg spectral data on Days 1-9; (b) average values of unfertilized egg spectral data on Days 1-9; (c) average values of unfertilized eggs divided by average values of fertilized eggs.
Figure 4. Change in grayscale values of fertilized eggs, unfertilized eggs, and dead-in-shell eggs.
15 pages, 1964 KiB  
Article
The Measurement of Nanoparticle Concentrations by the Method of Microcavity Mode Broadening Rate
by Alexey Ivanov, Kirill Min`kov, Alexey Samoilenko and Gennady Levin
Sensors 2020, 20(20), 5950; https://doi.org/10.3390/s20205950 - 21 Oct 2020
Cited by 1 | Viewed by 2086
Abstract
A measurement system for detecting low concentrations of nanoparticles, based on optical microcavities with whispering-gallery modes (WGMs), is developed and investigated. A novel method based on WGM broadening increases the precision of concentration measurements to 0.005 ppm for nanoparticles of a known size. We describe WGM microcavity manufacturing and quality-control methods. The collective interaction with the microcavity surface of Ag nanoparticles suspended in a liquid and of TiO2 nanoparticles in air is studied. Full article
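For a known particle size, the concentration read-out reduces to a calibration fit of mode-broadening rate against concentration (cf. the calibration graph in Figure 5). A sketch with made-up calibration points, not the paper's measurements:

```python
# Sketch: linear calibration of WGM mode-broadening rate vs. concentration,
# then inversion of the fit to estimate an unknown concentration.
import numpy as np

conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0])    # ppm, assumed calibration points
rate = np.array([0.8, 3.9, 8.1, 40.2, 79.5])    # mode-broadening rate, a.u. (assumed)

slope, intercept = np.polyfit(conc, rate, 1)     # linear calibration
unknown_rate = 12.0                              # a new measurement (assumed)
print(f"estimated concentration ~ {(unknown_rate - intercept) / slope:.3f} ppm")
```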
(This article belongs to the Section Nanosensors)
Show Figures

Figure 1. The process of forming a microsphere from an optical fiber in an oxygen-hydrogen burner flame: (a) the moment the preform is introduced into the flame; (b) the formation of a microcavity.
Figure 2. Results of the study of local microcavity inhomogeneities by optical tomography [52]: (a) phase image of a microcavity, side view; (b) two-dimensional tomogram slice of a microcavity in the equator area. The color shows the spatial distribution of the refractive index. The arrow indicates the local region containing a defect that appeared as a result of melting of the optical fiber core.
Figure 3. Scheme of the two-channel measurement system for liquid and gas media: PD1, PD2, photodetectors; BS1, BS2, beam splitters; L, lens mounted on a micrometric slide; M2, kinematic mirror mounts for round optics; FI, free-space isolator; λ/2, half-wave plate.
Figure 4. Measurement signal-processing algorithm: (a) signals from the laser control unit (violet), microcavity (blue) and reference interferometer (red); (b) separation of the signals containing resonant peaks from the microcavity into individual spectral realizations in time; (c) visualization of a mode-monitoring spectrogram; (d) normalization of the spectrogram and compensation of the resonant frequency shift; (e) selection of a resonant peak and plotting of its width at half maximum at each moment in time; (f) approximation of the mode-broadening curve and determination of the slope of its initial section.
Figure 5. Calibration graph of the mode-broadening rate versus nanoparticle concentration in a colloidal solution for particles of different sizes: 10 nm (red), 50 nm (black), 100 nm (blue).
Figure 6. Dependence of the measurement-system sensitivity on the width of the WGM belt. The dashed line shows an inverse-square fit. The insets show microcavities with excited WGMs and adsorbed particles with different active interaction areas.
19 pages, 6540 KiB  
Article
Cross-Correlation Algorithm-Based Optimization of Aliasing Signals for Inductive Debris Sensors
by Xingjian Wang, Hanyu Sun, Shaoping Wang and Wenhao Huang
Sensors 2020, 20(20), 5949; https://doi.org/10.3390/s20205949 - 21 Oct 2020
Cited by 10 | Viewed by 2476
Abstract
An inductive debris sensor can monitor the debris of a mechanical system in real time, but its measuring accuracy is significantly affected by signal aliasing during monitoring. In this study, a mathematical model was built to explain the aliasing behavior of two debris particles. A cross-correlation-based method was then proposed to deal with this aliasing. Using the processed signal together with the original signal, an optimization strategy was proposed that makes the evaluation of aliasing debris more accurate than using the initial signals alone. Compared to other methods, the proposed method has fewer limitations in practical applications. Simulation and experimental results also verified the advantages of the proposed method. Full article
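The core idea, correlating the aliased waveform against a single-particle template, can be sketched as follows; the pulse shape (a single sinusoid period), timings, and amplitudes are assumptions for illustration, not the paper's signal model.

```python
# Sketch: cross-correlation of a two-particle aliased signal with a
# single-particle template separates the two overlapping contributions.
import numpy as np
from scipy.signal import find_peaks

def pulse(t, t0, width=1.0, amp=1.0):
    """One debris signature, modeled here as a single sinusoid period (assumption)."""
    s = np.zeros_like(t)
    m = (t >= t0) & (t < t0 + width)
    s[m] = amp * np.sin(2 * np.pi * (t[m] - t0) / width)
    return s

t = np.linspace(0, 10, 2000)
aliased = pulse(t, 3.0) + pulse(t, 3.75, amp=0.6)    # two particles, partly overlapped
template = pulse(t, 0.0)[:400]                        # single-particle template

cc = np.correlate(aliased, template, mode="valid")    # cross-correlation vs. lag
peaks, _ = find_peaks(cc, height=0.3 * cc.max(), distance=60)
print(t[peaks])                                       # ~ the two arrival times
```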
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1. Radial-magnetic-field-based debris sensor: (a) external and (b) internal structure.
Figure 2. Debris passing the sensor and the signals generated in different situations: (a) a single debris particle passing through the sensor; (b) two debris particles passing through the sensor.
Figure 3. The three segments of the signal model for two aliasing debris particles.
Figure 4. Four typical superposition states: (a) Δt ∈ [0, T/4); (b) Δt ∈ [T/4, T/2); (c) Δt ∈ [T/2, 3T/4); (d) Δt ∈ [3T/4, T).
Figure 5. Cross-correlation algorithm: (a) the process of cross-correlation (CC) for a two-debris aliasing signal (left) and its result (right); (b) the process of CC for Debris A's signal and its result; (c) the process of CC for Debris B's signal and its result.
Figure 6. Performance of the CC for the aliasing signal as Δt varies: (a) result of CC for Debris A's and B's signals before summing; (b) waveform of c(τ), Situation 1; (c) waveform of c(τ), Situation 2.
Figure 7. Comparison of relative size (RS) before and after CC: (a) Debris B; (b) overall.
Figure 8. Aliasing-signal processing strategy.
Figure 9. Performance of the method for signals with low signal-to-noise ratio (SNR), k = 1: (a) Δt = 0.5T; (b) Δt = 0.48T; (c) Δt = 0.6T.
Figure 10. Performance of the method for signals with low SNR, k = 1.2: (a) Δt = 0.5T; (b) Δt = 0.48T; (c) Δt = 0.6T.
Figure 11. Performance of the method for signals with high SNR, k = 1.2: (a) Δt = 0.5T; (b) Δt = 0.48T; (c) Δt = 0.6T.
Figure 12. Real wax-block experimental system.
Figure 13. Wax-block experimental system schematic.
Figure 14. Performance of an experimental trial when the spacing was (a) 3 cm and (b) 6 cm.
Figure 15. Detailed views of the dashed boxes in Figure 14: (a) dashed box a; (b) dashed box b.
23 pages, 6559 KiB  
Article
IR-UWB Sensor Based Fall Detection Method Using CNN Algorithm
by Taekjin Han, Wonho Kang and Gyunghyun Choi
Sensors 2020, 20(20), 5948; https://doi.org/10.3390/s20205948 - 21 Oct 2020
Cited by 26 | Viewed by 5192
Abstract
Falls are the leading cause of fatal injuries, such as fractures, in the elderly, and secondary damage from falls can lead to death. Fall detection is therefore a crucial topic. However, due to the trade-off between privacy preservation, user convenience, and fall detection performance, it is generally difficult to develop a fall detection system that satisfies all three conditions simultaneously. The main goal of this study is to build a practical fall detection framework that can effectively classify various behavior types into "Fall" and "Activities of daily living (ADL)" while ensuring privacy preservation and user convenience. For this purpose, signal data containing the motion information of objects were collected using a non-contact, unobtrusive, and non-restraining impulse-radio ultra-wideband (IR-UWB) radar. These data were then fed to a convolutional neural network (CNN) algorithm to create a behavior-type classifier that distinguishes "Fall" from "ADL." The data were collected by actually performing various activities of daily living, including falling. The performance of the classifier yielded satisfactory results. By combining an IR-UWB radar and a CNN algorithm, this study demonstrates the feasibility of building a practical fall detection system that exceeds a given level of detection accuracy while also ensuring privacy preservation and user convenience. Full article
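Below is a hedged sketch of a common residual-power motion-image computation of the kind the paper's Figure 4 describes; Equations (1) and (2) of the paper are not reproduced here, and the frame dimensions are assumed.

```python
# Sketch: frame differencing at a fixed slow-time offset cancels static clutter
# so that moving targets remain; the residual power forms the motion image.
import numpy as np

FRAME_GAP = 24                    # slow-time offset, as in the paper's Figure 4
frames = np.random.default_rng(2).normal(size=(200, 256))  # (slow time, range bins), stand-in

residual = frames[FRAME_GAP:] - frames[:-FRAME_GAP]  # clutter-suppressing difference
power = residual ** 2                                # residual signal power
image = power / power.max()                          # normalized motion image
print(image.shape)                                   # (176, 256), ready for a CNN
```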
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1. Classification of key issues and research topics related to fall detection.
Figure 2. Proposed approach for fall detection using a convolutional neural network.
Figure 3. Conceptual drawing of object movement using the IR-UWB radar sensor and IP camera.
Figure 4. Visualized image of the residual signal power between the current frame and the frame 24 frames later, calculated according to Equations (1) and (2).
Figure 5. Labeling method considering motion window size and behavior type information.
Figure 6. Example of an image visualized by preprocessing of IR-UWB data.
Figure 7. Labeling method considering motion window size and behavior type information.
Figure 8. Architecture of the convolutional neural network (CNN) implemented for fall detection by classification of IR-UWB visualization images.
Figure 9. Conceptual difference between (a) single-image classification and (b) single-event classification.
Figure 10. Diagram of the data-collection experimental environment: (a) sensor position, range of the experimental area, action points; (b) location of the test supervisor; (c) location of the IR-UWB sensor, laptop, and IP camera.
Figure 11. Scenario for building a database for creating a fall/activities of daily living (ADL) classification model.
Figure 12. Pictures of the behavior types performed in the experiment: (a) walking, (b) falling, (c) standing, (d) sitting; and (e) the U-MAIN HST-S1M-SE radar sensor.
Figure 13. Results of motion image visualization: (a) fall, (b) walk, (c) rise, (d) sit, (e) stand.
Figure 14. F1 score results of fall detection according to the standard ratio.
16 pages, 2545 KiB  
Article
A Pattern-Recognition-Based Ensemble Data Imputation Framework for Sensors from Building Energy Systems
by Liang Zhang
Sensors 2020, 20(20), 5947; https://doi.org/10.3390/s20205947 - 21 Oct 2020
Cited by 11 | Viewed by 3099
Abstract
Building operation data are important for monitoring, analysis, modeling, and control of building energy systems. However, missing data is one of the major data quality issues, making data imputation techniques become increasingly important. There are two key research gaps for missing sensor data imputation in buildings: the lack of customized and automated imputation methodology, and the difficulty of the validation of data imputation methods. In this paper, a framework is developed to address these two gaps. First, a validation data generation module is developed based on pattern recognition to create a validation dataset to quantify the performance of data imputation methods. Second, a pool of data imputation methods is tested under the validation dataset to find an optimal single imputation method for each sensor, which is termed as an ensemble method. The method can reflect the specific mechanism and randomness of missing data from each sensor. The effectiveness of the framework is demonstrated by 18 sensors from a real campus building. The overall accuracy of data imputation for those sensors improves by 18.2% on average compared with the best single data imputation method. Full article
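The per-sensor "ensemble" idea can be prototyped as below; the data are synthetic, and the paper's pattern-recognition-based validation-gap generation, which mimics each sensor's real missing-data patterns, is replaced here by simple random gaps for brevity.

```python
# Sketch: mask known values, try several imputers, keep the one with the
# lowest validation error for this particular sensor.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
s = pd.Series(np.sin(np.arange(500) * 0.05) + rng.normal(0, 0.1, 500))

mask = rng.choice(np.arange(1, 499), size=50, replace=False)  # artificial validation gaps
truth = s[mask]
holdout = s.copy()
holdout.iloc[mask] = np.nan

candidates = {
    "ffill": holdout.ffill(),
    "linear": holdout.interpolate("linear"),
    "mean": holdout.fillna(holdout.mean()),
}
rmse = {name: float(np.sqrt(((est[mask] - truth) ** 2).mean()))
        for name, est in candidates.items()}
print(min(rmse, key=rmse.get), rmse)   # best single method for this sensor
```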
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1. Diagram of the developed data imputation framework with two modules.
Figure 2. Appearance of Nesbitt Hall.
Figure 3. Example of missing data for the whole-building energy meter.
Figure 4. Example of one real missing data point and one generated validation data point in the sensor chiller energy meter.
Figure A1. Diagram of the arrangement of three air handling units (AHUs) in Nesbitt Hall.
Figure A2. Configuration of the chiller plant loop in Nesbitt Hall from the web interface of a building automation system (BAS).
Figure A3. Configuration of AHU_1 in Nesbitt Hall from the web interface of BAS.
Figure A4. The configuration of VAV boxes with and without reheat in Nesbitt Hall from the web interface of BAS.
17 pages, 6958 KiB  
Article
A High-Robust Automatic Reading Algorithm of Pointer Meters Based on Text Detection
by Zhu Li, Yisha Zhou, Qinghua Sheng, Kunjian Chen and Jian Huang
Sensors 2020, 20(20), 5946; https://doi.org/10.3390/s20205946 - 21 Oct 2020
Cited by 28 | Viewed by 4194
Abstract
Automatic reading of pointer meters is of great significance for the efficient measurement of industrial meters. However, existing algorithms lack accuracy and robustness to illumination and shooting angle when detecting various pointer meters. Hence, a novel algorithm for the adaptive detection of different pointer meters is presented. First, deep learning is introduced to detect and recognize the scale-value text on the meter dial. Then, the image is rectified and the meter center is determined from the text coordinates. Next, the circular arc scale region is transformed into a linear scale region by a polar transform, and the horizontal positions of the pointer and scale lines are obtained by a secondary search in the expanded image. Finally, the distance method is used to read the scale region in which the pointer is located. Test results show that the proposed algorithm has higher accuracy and robustness in detecting different types of meters. Full article
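The dial-unwrapping step can be prototyped with OpenCV's warpPolar, which maps the circular scale region to a straight strip so that the pointer and tick marks can be located by projection; the center and radius below are stand-ins for the values the algorithm derives from the scale-value text boxes.

```python
# Sketch of polar unwrapping of a dial (mock image; assumed center/radius).
import cv2
import numpy as np

img = np.zeros((400, 400), np.uint8)                 # stand-in dial image
cv2.circle(img, (200, 200), 150, 255, 2)             # mock scale arc

center, radius = (200.0, 200.0), 180.0               # would come from text-box fitting
unwrapped = cv2.warpPolar(img, (256, 720), center, radius,
                          cv2.WARP_POLAR_LINEAR)     # rows: angle, cols: radius
projection = unwrapped.sum(axis=1)                   # per-angle intensity profile
print(unwrapped.shape, int(projection.argmax()))     # strongest angular response
```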
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1. Algorithm flow.
Figure 2. Flowchart of the algorithm implementation.
Figure 3. FOTS model of the text-detection network.
Figure 4. Structure of the shared convolutional neural network.
Figure 5. Center fitting and polar coordinate expansion: (a-c) the center fitted with the coordinates of the scale values; (d-f) the polar coordinate expansion of the meter images.
Figure 6. Determination of the primary search region: (a) point set composed of the vertices of the scale-value text positioning boxes; (b) scale-value text region; (c) primary search region obtained by expanding the scale-value text region.
Figure 7. Pixel projection.
Figure 8. Scale lines in the secondary region search: (a) secondary search region; (b) region projection.
Figure 9. Horizontal coordinates of the pointer and main tick marks in the primary search region.
Figure 10. Photograph of the testbed.
Figure 11. Setting of the shooting angle.
Figure 12. Test results of the image rectification algorithm: (a-d) meter images under different shooting angles; (e-h) results after rectification.
Figure 13. Detection results of the meter image center.
Figure 14. Positioning of the pointer and scale lines: (a) point set of the vertices of the scale-value text coordinates in the meter image after the polar transform; (b) primary search region; (c) horizontal coordinates of the pointer and the main scale lines on both sides in the primary search region.
Figure 15. Results of the polar coordinate transformation with different coordinates as the center: (e) is an enlarged view of (a); the blue point is the accurate meter rotation center, the green point is a center with a 15-pixel distance error, and the red point is a center with a 95-pixel distance error. (b) Result of the polar coordinate transformation centered on the correct rotation center; (c) result centered on the point with a 15-pixel distance error; (d) result centered on the point with a 95-pixel distance error; (f-h) the primary search regions ROI_1 obtained from (b-d).
Figure 16. Effect of the shooting angle on reading error.
4 pages, 146 KiB  
Editorial
Smart Sensors and Devices in Artificial Intelligence
by Dan Zhang and Bin Wei
Sensors 2020, 20(20), 5945; https://doi.org/10.3390/s20205945 - 21 Oct 2020
Cited by 9 | Viewed by 3229
Abstract
As stated in the Special Issue call, “sensors are eyes or/and ears of an intelligent system, such as Unmanned Aerial Vehicle (UAV), Automated Guided Vehicle (AGV) and robots [...] Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
19 pages, 4538 KiB  
Article
Dwell Time Allocation Algorithm for Multiple Target Tracking in LPI Radar Network Based on Cooperative Game
by Chenyan Xue, Ling Wang and Daiyin Zhu
Sensors 2020, 20(20), 5944; https://doi.org/10.3390/s20205944 - 21 Oct 2020
Viewed by 2549
Abstract
To solve the problem of dwell time management for multiple target tracking in Low Probability of Intercept (LPI) radar network, a Nash bargaining solution (NBS) dwell time allocation algorithm based on cooperative game theory is proposed. This algorithm can achieve the desired low interception performance by optimizing the allocation of the dwell time of each radar under the constraints of the given target detection performance, minimizing the total dwell time of radar network. By introducing two variables, dwell time and target allocation indicators, we decompose the dwell time and target allocation into two subproblems. Firstly, combining the Lagrange relaxation algorithm with the Newton iteration method, we derive the iterative formula for the dwell time of each radar. The dwell time allocation of the radars corresponding to each target is obtained. Secondly, we use the fixed Hungarian algorithm to determine the target allocation scheme based on the dwell time allocation results. Simulation results show that the proposed algorithm can effectively reduce the total dwell time of the radar network, and hence, improve the LPI performance. Full article
(This article belongs to the Special Issue Radio Sensing and Sensor Networks)
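The second stage of the algorithm — assigning radars to targets once per-radar dwell times are fixed — is a classic linear assignment problem. A minimal sketch of that step using SciPy's Hungarian-algorithm solver; the dwell-time cost matrix below is made-up illustration data, not values from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: entry [i, j] is the dwell time (s) radar i
# would need to track target j at the required detection performance.
dwell_cost = np.array([
    [0.042, 0.061, 0.055],
    [0.050, 0.038, 0.047],
    [0.066, 0.052, 0.035],
    [0.048, 0.057, 0.060],
])

# Hungarian algorithm: pick one radar per target so the summed dwell
# time (and hence the network's intercept exposure) is minimized.
radar_idx, target_idx = linear_sum_assignment(dwell_cost)
for r, t in zip(radar_idx, target_idx):
    print(f"radar {r} -> target {t}: dwell {dwell_cost[r, t]:.3f} s")
print("total dwell time:", dwell_cost[radar_idx, target_idx].sum())
```

Since linear_sum_assignment runs in polynomial time, this assignment step is cheap relative to the iterative dwell-time optimization that precedes it.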
Figures:
Figure 1: Schematic diagram of the target trajectory and radar network in the simulation scene.
Figure 2: Assignment result between radars and targets in the radar network.
Figure 3: Dwell time of the radar network on the targets.
Figure 4: Target 1 dwell time allocation ratio.
Figure 5: Target 2 dwell time allocation ratio.
Figure 6: SINR for target 1.
Figure 7: SINR for target 2.
Figure 8: Radar dwell time convergence performance.
Figure 9: Radar SINR convergence performance.
Figure 10: Target 1 radar dwell time allocation ratio.
Figure 11: RMSE for target tracking.
Figure 12: Total Schleher intercept factor.
Figure 13: Comparison of the total dwell time of the radar network with different numbers of radars.
19 pages, 2836 KiB  
Article
Bi-Layer Shortest-Path Network Interdiction Game for Internet of Things
by Jingwen Yan, Kaiming Xiao, Cheng Zhu, Jun Wu, Guoli Yang and Weiming Zhang
Sensors 2020, 20(20), 5943; https://doi.org/10.3390/s20205943 - 21 Oct 2020
Cited by 3 | Viewed by 2554
Abstract
Network security is a crucial challenge facing Internet-of-Things (IoT) systems worldwide, one that leads to serious safety alarms and great economic loss. This paper studies the problem of interdicting malicious network exploitation in IoT systems modeled as a bi-layer logical–physical network. In this problem, a virtual attack takes place at the logical layer (the layer of Things), while the physical layer (the layer of Internet) provides concrete support for the attack. The attacker attempts to reach a target node on the logical layer with minimal communication cost, while the defender, given a certain budget of interdiction resources, can strategically interdict key edges on the physical layer. This setting generalizes the classic single-layer shortest-path network interdiction problem but introduces nonlinear objective functions, which are notoriously challenging to optimize. We reformulate the model and apply a Benders decomposition process to solve the problem. A layer-mapping module is introduced to improve the decomposition algorithm, and a random-search process is proposed to accelerate convergence. Extensive numerical experiments demonstrate the computational efficiency of our methods. Full article
(This article belongs to the Section Internet of Things)
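The single-layer special case the abstract references is easy to state in code: the defender removes the edges whose deletion most lengthens the attacker's shortest path. A minimal greedy sketch with networkx — the toy graph, node names, and budget are hypothetical, and greedy interdiction is only a heuristic stand-in for the exact bilevel formulation the paper solves via Benders decomposition:

```python
import networkx as nx

# Hypothetical weighted network: edge weights are communication costs.
G = nx.Graph()
G.add_weighted_edges_from([
    ("s", "a", 2), ("s", "b", 4), ("a", "b", 1),
    ("a", "t", 7), ("b", "t", 3),
])

def greedy_interdict(G, src, dst, budget):
    """Remove up to `budget` edges, each time picking the edge whose
    removal most increases the attacker's shortest src-dst path length."""
    H = G.copy()
    for _ in range(budget):
        if not nx.has_path(H, src, dst):
            break  # attacker already cut off
        best_edge = None
        best_len = nx.shortest_path_length(H, src, dst, weight="weight")
        for u, v, w in list(H.edges(data="weight")):
            H.remove_edge(u, v)
            if nx.has_path(H, src, dst):
                length = nx.shortest_path_length(H, src, dst, weight="weight")
                if length > best_len:
                    best_edge, best_len = (u, v), length
            else:  # disconnecting the pair is the strongest interdiction
                best_edge, best_len = (u, v), float("inf")
            H.add_edge(u, v, weight=w)  # restore before trying the next edge
        if best_edge is None:
            break
        H.remove_edge(*best_edge)
        print(f"interdict edge {best_edge}; attacker path length now {best_len}")
    return H

greedy_interdict(G, "s", "t", budget=1)
```

In the bi-layer model, the extra difficulty is that the attacker's "path" must also satisfy logical-layer feasibility (as Figure 1 below illustrates), which is what makes the objective nonlinear.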
Figures:
Figure 1: A simple example of the logical–physical network in IoT. The logical layer contains a sensor, an effector, and three different processors. The physical layer is the communication network, where the time delay of each link is given by the number beside it. Each dotted line between the two layers connects the functional part and the communicating part of the same entity. The sensor collects information and sends it to either of the processors; the processor analyzes the information and then sends an order to the effector. As the figure shows, the shortest logical flow Sensor → Processor 3 → Effector corresponds to a physical path A(Sensor) → E → G(Processor 3) → K → L(Effector), with total weight 9. Although the shortest A–L path on the physical layer is A(Sensor) → E → H → L(Effector), with total weight 8, it is not functionally feasible, because no processor lies on this path and thus no effective order can be sent to the effector.
Figure 2: The process of Random-Search.
Figure 3: The structure of a real bi-layer IoT network (after data cleaning).
Figure 4: Interdiction effects in the bi-layer IoT network.
19 pages, 4788 KiB  
Article
Chitosan-Based Nanocomposites for Glyphosate Detection Using Surface Plasmon Resonance Sensor
by Minh Huy Do, Brigitte Dubreuil, Jérôme Peydecastaing, Guadalupe Vaca-Medina, Tran-Thi Nhu-Trang, Nicole Jaffrezic-Renault and Philippe Behra
Sensors 2020, 20(20), 5942; https://doi.org/10.3390/s20205942 - 21 Oct 2020
Cited by 15 | Viewed by 3775
Abstract
This article describes an optical method based on the association of surface plasmon resonance (SPR) with chitosan (CS) film and its nanocomposites, including zinc oxide (ZnO) or graphene oxide (GO), for glyphosate detection. CS and CS/ZnO or CS/GO thin films were deposited on an Au chip using the spin coating technique. The morphology and composition of these films were characterized by Fourier-transform infrared spectroscopy (FTIR), atomic force microscopy (AFM), and the contact angle technique. Sensor preparation conditions, including cross-linking and the mobile phase (pH and salinity), were investigated and thoroughly optimized. Results showed that the CS/ZnO thin-film composite provides the highest sensitivity for glyphosate sensing, with a low detection limit of 8 nM and high reproducibility. Based on a Langmuir-type adsorption model and the effect of ionic strength, glyphosate adsorption appears to be controlled by electrostatic and steric interactions, with possible formation of 1:1 outer-sphere surface complexes. The selectivity of the optical method was investigated with respect to the sorption of the glyphosate metabolite aminomethylphosphonic acid (AMPA), glufosinate, and one of the glufosinate metabolites, 3-methyl-phosphinico-propionic acid (MPPA). Results showed that the SPR sensor offers very good selectivity for glyphosate, but competition from other molecules could still occur in aqueous systems. Full article
(This article belongs to the Section Chemical Sensors)
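The Langmuir-type model invoked in the abstract relates the SPR angle shift to adsorbate concentration as Δθ = Δθ_max·K·C/(1 + K·C). A minimal fitting sketch with SciPy; the concentration and response arrays are made-up illustration data, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, dtheta_max, K):
    """Langmuir isotherm: maximum response scaled by fractional coverage."""
    return dtheta_max * K * C / (1.0 + K * C)

# Hypothetical calibration data: glyphosate concentration (uM) vs. SPR shift (mdeg).
C = np.array([0.03, 0.06, 0.12, 0.18, 0.30, 0.45, 0.59])
dtheta = np.array([8.0, 15.5, 27.0, 35.0, 47.5, 56.0, 61.0])

popt, pcov = curve_fit(langmuir, C, dtheta, p0=[80.0, 3.0])
dtheta_max, K = popt
print(f"dtheta_max = {dtheta_max:.1f} mdeg, K = {K:.2f} 1/uM")

# The half-saturation concentration (1/K) is a handy sanity check on the fit.
print(f"half-saturation at C = {1.0 / K:.2f} uM")
```

At concentrations well below 1/K the response is nearly linear in C, which is why a calibration curve over a limited range (as in Figure 11b below) can be used for quantification.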
Figures:
Figure 1: Structures and pKa values of glyphosate, chitosan (CS), and the other studied compounds (molar masses: glyphosate, 169.07 g/mol; aminomethylphosphonic acid (AMPA), 111.04 g/mol; glufosinate, 181.13 g/mol; 3-methyl-phosphinico-propionic acid (MPPA), 152.09 g/mol) [40].
Figure 2: Schema of the preparation procedure for the SPR sensor with different experimental conditions.
Figure 3: FTIR spectra of CS, CS/GO, and CS/ZnO composite thin films cross-linked by GA.
Figure 4: Contact angle measurements on the Au electrode and on CS, CS/GO, and CS/ZnO composite thin films cross-linked by GA.
Figure 5: AFM images of the Au electrode and of CS, CS/GO, and CS/ZnO composite thin films cross-linked by GA.
Figure 6: Sensitivity of glyphosate sorption, via the change in angle, on CS film alone and on CS/GO and CS/ZnO films cross-linked by GA, for two glyphosate concentrations (0.06 and 0.30 μM) at pH 5.5.
Figure 7: Glyphosate sorption onto the cross-linked CS/ZnO SPR sensor for two glyphosate concentrations (0.06 and 0.30 μM) at pH 5.5. Experimental conditions: cross-linking for 2 h with GA (2.5%, pH 4.5), EPI (0.1 mM, pH 10) at 40 °C, or EPI (0.1 mM, pH 5.5) at 40 °C, followed by washing with UPW.
Figure 8: Role of pH, without glyphosate, in the change in SPR angle for CS/ZnO film cross-linked by GA (the dead time between injection and the cell output is around 156 s). Experimental conditions for each reported pH value: injection of UPW (pH 5.5) before injection of a solution at a given pH, adjusted with 0.01 M HCl and 0.01 M NaOH, into the SPR system.
Figure 9: Role of pH (4.0 to 7.0) in the change in SPR angle for CS/ZnO film cross-linked by GA in the presence of 0.30 μM glyphosate (dead time around 156 s between injection and the cell output). Experimental conditions: (i) injection of a non-buffered solution at a given pH, adjusted with 0.01 M HCl and 0.01 M NaOH, without glyphosate into the SPR system to obtain a steady-state baseline, followed by injection of the glyphosate solution at the same pH; the change in SPR angle corresponds to the effect of glyphosate relative to the baseline before glyphosate injection; (ii) the same procedure at another pH value.
Figure 10: Typical sensorgrams of glyphosate sorption onto CS/ZnO SPR sensors for glyphosate concentrations from 0 to 0.59 µM (dead time around 156 s between injection and the cell output).
Figure 11: Glyphosate sorption onto CS/ZnO SPR sensors at pH 5.5: (a) full sorption isotherm of glyphosate; (b) calibration curve for glyphosate concentrations between 0.03 and 0.59 μM.
Figure 12: Effect of ionic strength (NaCl concentration) on glyphosate sorption onto the CS/ZnO SPR sensor at 0.30 μM glyphosate and pH 5.5.
Figure 13: Sorption behavior of glyphosate, AMPA, glufosinate, and MPPA on the CS/ZnO SPR sensor at 0.12 μM and pH 5.5. Experimental conditions: injection of each solution separately into the SPR system at a flow rate of 40 µL/min and 20.3 °C.
15 pages, 8650 KiB  
Article
End-to-End Monocular Range Estimation for Forward Collision Warning
by Jie Tang and Jian Li
Sensors 2020, 20(20), 5941; https://doi.org/10.3390/s20205941 - 21 Oct 2020
Cited by 6 | Viewed by 3218
Abstract
Estimating the range to the closest object in front is the core component of a forward collision warning (FCW) system. Previous monocular range estimation methods mostly involve two sequential steps, object detection and range estimation; as a result, they rely on expensive object-level annotation for training and are only effective for objects from specific categories, not unseen ones. In this paper, we present an end-to-end deep learning architecture that solves these problems. Specifically, we represent the target range as a weighted sum over a set of potential distances. These potential distances are generated by inverse perspective projection based on intrinsic and extrinsic camera parameters, while a deep neural network predicts the corresponding weights. The whole architecture is optimized toward the range estimation task directly, in an end-to-end manner, with only the target range as supervision. As object category is not restricted in the training stage, the proposed method generalizes to objects of unseen categories. Furthermore, camera parameters are explicitly considered, so the method generalizes to images taken with different cameras and from novel views. Additionally, the proposed method is not a pure black box: it provides partial interpretability by visualizing the produced weights to show which part of the image dominates the final result. We conduct experiments on synthetic and real-world collected data to verify these properties. Full article
(This article belongs to the Special Issue Sensors for Road Vehicles of the Future)
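The core idea — a network predicts per-pixel weights, and the range is their weighted sum over geometrically derived distances — can be sketched in a few lines. Below, a flat-ground inverse-perspective distance map stands in for the paper's projection step, and random logits stand in for the network output; the camera height, focal length, horizon row, and image size are all hypothetical:

```python
import numpy as np

H, W = 48, 64            # hypothetical image size (rows x cols)
f_pix = 60.0             # assumed focal length in pixels
cam_height = 1.5         # assumed camera height above the road (m)
horizon_row = 20         # assumed image row of the horizon

# Distance map from inverse perspective projection under a flat-ground
# assumption: pixels further below the horizon map to closer ground points.
rows = np.arange(H, dtype=float).reshape(-1, 1).repeat(W, axis=1)
below = np.clip(rows - horizon_row, 1e-3, None)   # avoid divide-by-zero
dist_map = f_pix * cam_height / below             # shape (H, W), meters

# Stand-in for the weight-generation network: random logits -> softmax
# over all pixels, so the weights are positive and sum to one.
rng = np.random.default_rng(0)
logits = rng.normal(size=(H, W))
weights = np.exp(logits - logits.max())
weights[rows <= horizon_row] = 0.0   # mask sky pixels above the horizon
weights /= weights.sum()

# Predicted range is the weighted sum of the potential distances.
range_pred = float((weights * dist_map).sum())
print(f"predicted range: {range_pred:.2f} m")
```

Because the supervision acts only on the final weighted sum, the learned weight maps concentrate on whichever image region determines the range, which is the interpretability property the authors visualize in Figure 5 below.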
Figures:
Figure 1: Overview of the architecture. The range is represented as a weighted sum of a set of potential distances, and the whole architecture consists of weight map generation and distance map generation. Solid arrows represent the forward pass of our method; dashed arrows indicate the loss calculation and back-propagation in the training stage.
Figure 2: Our autonomous driving vehicle with the image coordinate system, LiDAR coordinate system, and world coordinate system in our setting.
Figure 3: The network structure of the weight generation network. The U-Net structure is an encoder-decoder network; the encoder and decoder parts consist of fully convolutional networks. Fully connected layers are applied to the flattened encoded features to provide spatial position information. See the text for more details.
Figure 4: Processed Apollo dataset. The masks represent different preset collision regions; even for the same color image, different collision regions may induce different target ranges.
Figure 5: Visualization of generated weight maps. The first row is the color image; the second row is the colorized weight map; the last row is the color image superimposed on the preset collision region and the colorized weight map. This interpretable representation is used to illustrate the experimental results in the following. Cyan rectangles highlight the notable regions, which are zoomed in for better visualization.
Figure 6: Performance on target objects of various classes.
Figure 7: Results on objects of various categories.
Figure 8: Generalization capability. These six images are from the KITTI and Virtual KITTI datasets; they were collected with different cameras and views than the training dataset.
Figure 9: Results on images with multiple objects in front.
Figure 10: Performance on cars in the test set of the processed synthetic Apollo dataset.
Figure 11: Performance on a data sequence collected by our autonomous driving vehicle.
Figure 12: Samples of results from our collected sequence. Results of the Inverse Perspective Mapping (IPM)-based method are shown in the top row; results of our method are shown in the bottom row.
Figure 13: Mean absolute error (MAE, in m) on the train and test sets under different ranges.
Figure 14: δ corresponding to the threshold of relative error in the test set.
Figure 15: Examples of five types of failure cases of our method. See the text for more details.
15 pages, 2107 KiB  
Article
An Investigation of Rotary Drone HERM Line Spectrum under Manoeuvering Conditions
by Peter Klaer, Andi Huang, Pascale Sévigny, Sreeraman Rajan, Shashank Pant, Prakash Patnaik and Bhashyam Balaji
Sensors 2020, 20(20), 5940; https://doi.org/10.3390/s20205940 - 21 Oct 2020
Cited by 31 | Viewed by 6254
Abstract
Detecting and identifying drones is of great interest due to the proliferation of highly manoeuverable drones carrying on-board sensors of increasing capability. In this paper, we investigate the use of radars for tackling this problem. In particular, we focus on detecting rotary drones and distinguishing between single-propeller and multi-propeller drones using micro-Doppler analysis. Two different radars were used, an ultra wideband (UWB) continuous wave (CW) C-band radar and an automotive frequency modulated continuous wave (FMCW) W-band radar, to collect micro-Doppler signatures of the drones. Taking a closer look at HElicopter Rotor Modulation (HERM) lines, we identify the spool and chopping lines for the first time in the context of drones and use them to determine the number of propeller blades. Furthermore, a new multi-frequency analysis method using HERM lines is developed, which allows the detection of propeller rotation rates (spool and chopping frequencies) of single- and multi-propeller drones. The presented method is therefore a promising technique to aid in the classification of drones. Full article
(This article belongs to the Special Issue Advanced Radar Techniques, Applications and Developments)
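Figure 3 below refers to a log harmonic summation over the HERM-line spectrum: each candidate fundamental (chopping) frequency is scored by summing log-magnitudes at its integer harmonics, and the best-scoring candidate gives the propeller rotation rate. A minimal sketch on a synthetic spectrum; the sample rate, tone frequency, and harmonic count are made-up illustration values, not the paper's radar parameters:

```python
import numpy as np

fs = 2000.0                       # hypothetical sample rate (Hz)
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Synthetic HERM-line spectrum: harmonics of a 140 Hz chopping frequency
# (e.g., a 2-blade propeller at 70 rev/s) buried in noise.
rng = np.random.default_rng(1)
spectrum = rng.rayleigh(0.5, size=freqs.size)
for k in range(1, 8):
    spectrum[np.argmin(np.abs(freqs - 140.0 * k))] += 5.0

def log_harmonic_sum(spectrum, freqs, f0, n_harmonics=6):
    """Score a candidate fundamental by summing log-magnitudes at its harmonics."""
    score = 0.0
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))
        score += np.log(spectrum[idx] + 1e-12)
    return score

candidates = np.arange(20.0, 400.0, 1.0)   # candidate chopping frequencies (Hz)
scores = [log_harmonic_sum(spectrum, freqs, f0) for f0 in candidates]
print(f"estimated chopping frequency: {candidates[int(np.argmax(scores))]:.0f} Hz")
```

Summing logarithms rather than raw magnitudes keeps one strong harmonic from dominating the score, which suppresses the half- and double-frequency ambiguities that plain harmonic summation suffers from.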
Figures:
Figure 1: Drone propeller characteristics and configurations.
Figure 2: Schematic of micro-Doppler measurement in a simple model.
Figure 3: Steps of the log harmonic summation algorithm applied to a HERM-lines spectrogram.
Figure 4: Example of a measured micro-Doppler spectrogram of the coaxial helicopter by the W-band radar using (a) short-window STFT and (b) long-window STFT; (c) zoom into the long-window STFT showing individual HERM lines; (d) HERM-line spectrum at 1 s.
Figure 5: Spectrogram with a low signal-to-noise ratio of the coaxial helicopter measured by the W-band radar using (a) short-window STFT and (b) long-window STFT. Spectrogram of the coaxial helicopter measured by the C-band radar with insufficient PRF to capture all individual blade flashes using (c) short-window STFT and (d) long-window STFT.
Figure 6: (a) HERM-lines spectrogram for the one-propeller helicopter measured by the C-band radar; (b) corresponding multi-frequency detector result.
Figure 7: (a) HERM-lines spectrogram for the quadcopter measured by the C-band radar; (b) corresponding multi-frequency detector result.
Figure 8: (a) HERM-lines spectrogram for the hexacopter measured by the C-band radar; (b) corresponding multi-frequency detector result.
Figure 9: (a) HERM-lines spectrogram for the one-propeller helicopter measured by the W-band radar; (b) zoom into the HERM-lines spectrogram showing individual HERM lines; (c) corresponding multi-frequency detector result.
Figure 10: (a) HERM-lines spectrogram for the quadcopter measured by the W-band radar; (b) zoom into the HERM-lines spectrogram showing individual HERM lines; (c) corresponding multi-frequency detector result.
Figure 11: (a) HERM-lines spectrogram for the hexacopter measured by the W-band radar; (b) zoom into the HERM-lines spectrogram showing individual HERM lines; (c) corresponding multi-frequency detector result.
19 pages, 580 KiB  
Article
Variational Channel Estimation with Tempering: An Artificial Intelligence Algorithm for Wireless Intelligent Networks
by Jia Liu, Mingchu Li, Yuanfang Chen, Sardar M. N. Islam and Noel Crespi
Sensors 2020, 20(20), 5939; https://doi.org/10.3390/s20205939 - 21 Oct 2020
Viewed by 2326
Abstract
With the rapid development of wireless sensor network (WSN) technology, a growing number of applications and services need to acquire the states of channels or sensors, especially for monitoring, object tracking, motion detection, etc. A critical issue in WSNs is estimating source parameters from the readings of a distributed sensor network. Although several channel estimation (CE) algorithms exist, they suffer from high complexity, poor scalability and generalization, no guarantee of convergence to a local optimum, slow convergence, etc. In this work, we turn to variational inference (VI) with tempering to solve the channel estimation problem, owing to its reduced complexity, its ability to generalize and scale, and its guarantee of a local optimum. To the best of our knowledge, we are the first to use VI with tempering for advanced channel estimation. The parameters we consider include the pilot signal and channel coefficients, assuming orthogonal access between the different sensors (or users) and the data fusion center (or receiving center). By formulating channel estimation as a probabilistic graphical model, the proposed Channel Estimation Variational Tempering Inference (CEVTI) approach estimates the channel coefficients and the transmitted signal with low complexity while guaranteeing convergence. CEVTI finds the optimal hyper-parameters of channels with a fast convergence rate and can easily be applied to code division multiple access (CDMA) and uplink massive multiple-input multiple-output (MIMO). Simulations show that CEVTI has higher accuracy than state-of-the-art algorithms under different noise variances and signal-to-noise ratios. Furthermore, the results show that the more parameters are considered in each iteration, the faster the convergence rate and the lower the non-degenerate bit error rate. Analysis shows that CEVTI has satisfactory computational complexity and guarantees a better local optimum. The main contribution of the paper is thus a new efficient, simple, and reliable algorithm for channel estimation in WSNs. Full article
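To make the tempering idea concrete, here is a minimal sketch (not the paper's CEVTI algorithm) of variational inference with a temperature schedule on a toy linear channel y = h·x + noise with a Gaussian prior on h: the likelihood is annealed by a factor beta that ramps toward 1, which flattens the objective early on and helps avoid poor local optima. All signal values and noise levels are made-up illustration data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear channel: y_i = h * x_i + noise, with known pilot symbols x.
h_true, sigma2 = 0.8, 0.1              # hypothetical channel gain, noise variance
x = rng.choice([-1.0, 1.0], size=50)   # pilot symbols
y = h_true * x + rng.normal(scale=np.sqrt(sigma2), size=x.size)

prior_var = 1.0                        # Gaussian prior: h ~ N(0, prior_var)

# Tempered variational updates: with a Gaussian prior and likelihood, the
# tempered posterior q(h) = N(mu, var) is available in closed form, so each
# step recomputes it with the likelihood raised to the power beta.
for beta in np.linspace(0.1, 1.0, 10):        # temperature schedule
    precision = 1.0 / prior_var + beta * np.sum(x * x) / sigma2
    var = 1.0 / precision
    mu = var * beta * np.sum(x * y) / sigma2
    print(f"beta={beta:.1f}: q(h) = N({mu:.3f}, {var:.4f})")

print(f"true h = {h_true}")
```

In this conjugate toy case each temperature has a closed-form solution, so the schedule only illustrates the annealing path; in the non-conjugate models the paper targets, the same schedule is applied to iterative coordinate-ascent updates.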
Figures:
Figure 1: Noise variance impact.
Figure 2: BER comparison.
Figure 3: Average number of iterations until convergence.
Figure 4: Mutual information comparison.
39 pages, 6949 KiB  
Review
A Review of Solar Energy Harvesting Electronic Textiles
by Achala Satharasinghe, Theodore Hughes-Riley and Tilak Dias
Sensors 2020, 20(20), 5938; https://doi.org/10.3390/s20205938 - 21 Oct 2020
Cited by 49 | Viewed by 7547
Abstract
The increased use of wearable, mobile, and electronic textile sensing devices has led to a desire to keep these devices continuously powered without the need for frequent recharging or bulky energy storage. To achieve this, many have proposed integrating energy harvesting capabilities into clothing; solar energy harvesting has been one of the most investigated avenues, due to the abundance of solar energy and the maturity of photovoltaic technologies. This review provides a comprehensive, contemporary, and accessible overview of electronic textiles capable of harvesting solar energy, focussing on the suitability of textile-based energy harvesting devices for wearable applications. While multiple methods have been employed to integrate solar energy harvesting with textiles, only a few examples have led to devices with textile properties. Full article
(This article belongs to the Special Issue Textile Electrodes and Sensors)
Figures:
Figure 1: Simple schematic of a typical solar cell comprising two semiconductor materials (P-type and N-type) and two electrodes.
Figure 2: Diagram showing a typical current-voltage curve for a solar cell; the point where the maximum power is achieved is annotated. This figure has been drawn and is not generated from collected data.
Figure 3: Microscope image of micro-structured PV cells developed by Sandia Laboratories. Reprinted from [98] with permission from Elsevier. (A) An array of cells attached to the wafer; (B) the front and back of the cells.
Figure 4: (a) Rigid solar cells soldered onto flexible copper wires; the filaments would subsequently be covered with textile fibers. (b) A woven solar fabric.
Figure 5: A textile-based OPV developed by Lee et al. Reprinted from [105] with permission from Elsevier. (a) Schematic illustration of the textile OPV. (b) Photographs of the textile OPV integrated with clothing.
Figure 6: The thin-polymer PET-foil-based solar cell attached to an elastomeric support developed by Kaltenbrunner et al., showing the three-dimensional deformability of the structure when pressure is applied from a 1.5 mm diameter tube. Taken from [112]; this image is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Figure 7: Photograph of a textile-based dye-sensitized solar cell developed by Opwis et al. Adapted from [119]; this image is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
Figure 8: 3D textile DSSC developed by Yun et al. Adapted from [120]; this image is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
Figure 9: Photograph of the flexible copper indium gallium selenide thin-film solar cell on glass fabric developed by Knittel et al. Reprinted from [123] with permission from Wiley.
Figure 10: Thin-film solar cell on glass fiber fabric developed by Plentz et al. Reprinted from [124] with permission from Elsevier.
Figure 11: SEM image of a TiO2-Ti wire primary electrode twisted around a carbon nanotube (CNT) yarn counter electrode. Reprinted (adapted) with permission from [133]. Copyright 2012 American Chemical Society.
Figure 12: Schematic of a simple metal-core-based organic PV fiber. Reprinted from [140] with permission from Elsevier. This simple design has conducting inner and outer electrodes (Au and Al) and active layers (TiOx and P3HT:PCBM) forming the solar cell, and provides a good example of a coaxial-type solar cell, where the cell is built from multiple layers covering a core material.
Figure 13: Diagram illustrating the fabrication process and structure of the fiber-shaped solar cells developed by Liu et al. Reprinted with permission from [142]. Copyright 2012 American Chemical Society.
Figure 14: Photographs of the DSSC developed by Pan et al.: (a) the DSSC; (b) the DSSC lighting up an LED. Reprinted from [150] with permission from Wiley.